* Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-28 15:20 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Hi,
I'm trying the librspreload.so preloader on two directly connected hosts:

host1:$ sudo ibstatus
Infiniband device 'mlx4_0' port 1 status:
default gid: fe80:0000:0000:0000:0002:c903:004d:dd45
base lid: 0x1
sm lid: 0x1
state: 4: ACTIVE
phys state: 5: LinkUp
rate: 20 Gb/sec (4X DDR)
link_layer: InfiniBand

Infiniband device 'mlx4_0' port 2 status:
default gid: fe80:0000:0000:0000:0002:c903:004d:dd46
base lid: 0x0
sm lid: 0x0
state: 1: DOWN
phys state: 2: Polling
rate: 10 Gb/sec (4X)
link_layer: InfiniBand


host2:$ sudo ibstatus
Infiniband device 'mthca0' port 1 status:
default gid: fe80:0000:0000:0000:0008:f104:0398:14cd
base lid: 0x2
sm lid: 0x1
state: 4: ACTIVE
phys state: 5: LinkUp
rate: 20 Gb/sec (4X DDR)
link_layer: InfiniBand

Infiniband device 'mthca0' port 2 status:
default gid: fe80:0000:0000:0000:0008:f104:0398:14ce
base lid: 0x0
sm lid: 0x0
state: 1: DOWN
phys state: 2: Polling
rate: 10 Gb/sec (4X)
link_layer: InfiniBand



I've connected just one port between the two hosts.
The port is detected properly as 20 Gb/s (4X DDR), but I'm unable to reach
speeds over 5 Gbit/s:

host1:$ sudo LD_PRELOAD=/usr/lib/x86_64-linux-gnu/rsocket/librspreload.so NPtcp -h 172.17.0.2
Send and receive buffers are 131072 and 131072 bytes
(A bug in Linux doubles the requested buffer sizes)
Now starting the main loop
  0:       1 bytes  17008 times -->      1.24 Mbps in       6.13 usec
  1:       2 bytes  16306 times -->      2.02 Mbps in       7.56 usec
  2:       3 bytes  13223 times -->      3.10 Mbps in       7.38 usec
  3:       4 bytes   9037 times -->      4.21 Mbps in       7.25 usec
  4:       6 bytes  10345 times -->      6.49 Mbps in       7.05 usec
  5:       8 bytes   7093 times -->      7.77 Mbps in       7.85 usec
  6:      12 bytes   7957 times -->     17.08 Mbps in       5.36 usec
  7:      13 bytes   7772 times -->     14.75 Mbps in       6.73 usec
  8:      16 bytes   6861 times -->     16.11 Mbps in       7.58 usec
  9:      19 bytes   7424 times -->     18.91 Mbps in       7.67 usec
 10:      21 bytes   8237 times -->     17.69 Mbps in       9.06 usec
 11:      24 bytes   7361 times -->     19.72 Mbps in       9.28 usec
 12:      27 bytes   7628 times -->     24.14 Mbps in       8.53 usec
 13:      29 bytes   5207 times -->     29.81 Mbps in       7.42 usec
 14:      32 bytes   6504 times -->     29.42 Mbps in       8.30 usec
 15:      35 bytes   6401 times -->     39.08 Mbps in       6.83 usec
 16:      45 bytes   8362 times -->     45.19 Mbps in       7.60 usec
 17:      48 bytes   8774 times -->     46.10 Mbps in       7.94 usec
 18:      51 bytes   8654 times -->     55.19 Mbps in       7.05 usec
 19:      61 bytes   5562 times -->     57.42 Mbps in       8.10 usec
 20:      64 bytes   6068 times -->     72.31 Mbps in       6.75 usec
 21:      67 bytes   7636 times -->     42.93 Mbps in      11.91 usec
 22:      93 bytes   4512 times -->     55.84 Mbps in      12.71 usec
 23:      96 bytes   5246 times -->     60.13 Mbps in      12.18 usec
 24:      99 bytes   5558 times -->     59.49 Mbps in      12.70 usec
 25:     125 bytes   2864 times -->     75.25 Mbps in      12.67 usec
 26:     128 bytes   3913 times -->     75.78 Mbps in      12.89 usec
 27:     131 bytes   3940 times -->     74.77 Mbps in      13.37 usec
 28:     189 bytes   3883 times -->    113.42 Mbps in      12.71 usec
 29:     192 bytes   5243 times -->    109.85 Mbps in      13.33 usec
 30:     195 bytes   5038 times -->    115.66 Mbps in      12.86 usec
 31:     253 bytes   2710 times -->    146.61 Mbps in      13.17 usec
 32:     256 bytes   3782 times -->    142.77 Mbps in      13.68 usec
 33:     259 bytes   3683 times -->    144.75 Mbps in      13.65 usec
 34:     381 bytes   3733 times -->    201.64 Mbps in      14.42 usec
 35:     384 bytes   4624 times -->    204.22 Mbps in      14.35 usec
 36:     387 bytes   4665 times -->    204.65 Mbps in      14.43 usec
 37:     509 bytes   2364 times -->    265.12 Mbps in      14.65 usec
 38:     512 bytes   3406 times -->    267.89 Mbps in      14.58 usec
 39:     515 bytes   3442 times -->    266.90 Mbps in      14.72 usec
 40:     765 bytes   3429 times -->    381.51 Mbps in      15.30 usec
 41:     768 bytes   4357 times -->    384.85 Mbps in      15.23 usec
 42:     771 bytes   4387 times -->    386.35 Mbps in      15.23 usec
 43:    1021 bytes   2214 times -->    495.38 Mbps in      15.72 usec
 44:    1024 bytes   3176 times -->    499.56 Mbps in      15.64 usec
 45:    1027 bytes   3203 times -->    497.19 Mbps in      15.76 usec
 46:    1533 bytes   3188 times -->    692.19 Mbps in      16.90 usec
 47:    1536 bytes   3945 times -->    688.52 Mbps in      17.02 usec
 48:    1539 bytes   3920 times -->    693.85 Mbps in      16.92 usec
 49:    2045 bytes   1981 times -->    858.05 Mbps in      18.18 usec
 50:    2048 bytes   2748 times -->    862.22 Mbps in      18.12 usec
 51:    2051 bytes   2761 times -->    832.50 Mbps in      18.80 usec
 52:    3069 bytes   2666 times -->   1174.72 Mbps in      19.93 usec
 53:    3072 bytes   3344 times -->   1183.58 Mbps in      19.80 usec
 54:    3075 bytes   3368 times -->   1177.98 Mbps in      19.92 usec
 55:    4093 bytes   1678 times -->   1495.79 Mbps in      20.88 usec
 56:    4096 bytes   2394 times -->   1486.91 Mbps in      21.02 usec
 57:    4099 bytes   2380 times -->   1490.11 Mbps in      20.99 usec
 58:    6141 bytes   2385 times -->   2417.56 Mbps in      19.38 usec
 59:    6144 bytes   3439 times -->   2491.24 Mbps in      18.82 usec
 60:    6147 bytes   3543 times -->   2393.71 Mbps in      19.59 usec
 61:    8189 bytes   1703 times -->   2486.93 Mbps in      25.12 usec
 62:    8192 bytes   1990 times -->   2501.61 Mbps in      24.98 usec
 63:    8195 bytes   2001 times -->   2470.25 Mbps in      25.31 usec
 64:   12285 bytes   1976 times -->   3335.91 Mbps in      28.10 usec
 65:   12288 bytes   2372 times -->   3346.71 Mbps in      28.01 usec
 66:   12291 bytes   2380 times -->   3325.57 Mbps in      28.20 usec
 67:   16381 bytes   1183 times -->   3404.87 Mbps in      36.71 usec
 68:   16384 bytes   1362 times -->   3396.27 Mbps in      36.81 usec
 69:   16387 bytes   1358 times -->   3338.60 Mbps in      37.45 usec
 70:   24573 bytes   1335 times -->   3952.93 Mbps in      47.43 usec
 71:   24576 bytes   1405 times -->   3870.35 Mbps in      48.45 usec
 72:   24579 bytes   1376 times -->   3947.46 Mbps in      47.50 usec
 73:   32765 bytes    701 times -->   3708.77 Mbps in      67.40 usec
 74:   32768 bytes    741 times -->   3670.93 Mbps in      68.10 usec
 75:   32771 bytes    734 times -->   3713.07 Mbps in      67.34 usec
 76:   49149 bytes    742 times -->   4269.21 Mbps in      87.83 usec
 77:   49152 bytes    759 times -->   4213.58 Mbps in      89.00 usec
 78:   49155 bytes    749 times -->   4261.68 Mbps in      88.00 usec
 79:   65533 bytes    378 times -->   4397.40 Mbps in     113.70 usec
 80:   65536 bytes    439 times -->   4495.83 Mbps in     111.21 usec
 81:   65539 bytes    449 times -->   4373.61 Mbps in     114.33 usec
 82:   98301 bytes    437 times -->   4581.69 Mbps in     163.69 usec
 83:   98304 bytes    407 times -->   4643.01 Mbps in     161.53 usec
 84:   98307 bytes    412 times -->   4574.63 Mbps in     163.95 usec
 85:  131069 bytes    203 times -->   4663.35 Mbps in     214.43 usec
 86:  131072 bytes    233 times -->   4643.97 Mbps in     215.33 usec
 87:  131075 bytes    232 times -->   4663.00 Mbps in     214.46 usec
 88:  196605 bytes    233 times -->   4820.71 Mbps in     311.15 usec
 89:  196608 bytes    214 times -->   4838.05 Mbps in     310.04 usec
 90:  196611 bytes    215 times -->   4833.56 Mbps in     310.34 usec
 91:  262141 bytes    107 times -->   4946.10 Mbps in     404.35 usec
 92:  262144 bytes    123 times -->   4955.13 Mbps in     403.62 usec
 93:  262147 bytes    123 times -->   4940.46 Mbps in     404.83 usec
 94:  393213 bytes    123 times -->   5061.76 Mbps in     592.67 usec
 95:  393216 bytes    112 times -->   5053.05 Mbps in     593.70 usec
 96:  393219 bytes    112 times -->   5022.92 Mbps in     597.27 usec
 97:  524285 bytes     55 times -->   5125.96 Mbps in     780.34 usec
 98:  524288 bytes     64 times -->   5117.60 Mbps in     781.62 usec
 99:  524291 bytes     63 times -->   5122.30 Mbps in     780.90 usec
100:  786429 bytes     64 times -->   5189.59 Mbps in    1156.16 usec
101:  786432 bytes     57 times -->   5186.53 Mbps in    1156.84 usec
102:  786435 bytes     57 times -->   5183.52 Mbps in    1157.52 usec
103: 1048573 bytes     28 times -->   5217.00 Mbps in    1533.44 usec
104: 1048576 bytes     32 times -->   5198.91 Mbps in    1538.78 usec
105: 1048579 bytes     32 times -->   5218.60 Mbps in    1532.98 usec
106: 1572861 bytes     32 times -->   5242.06 Mbps in    2289.17 usec
107: 1572864 bytes     29 times -->   5242.86 Mbps in    2288.83 usec
108: 1572867 bytes     29 times -->   5249.47 Mbps in    2285.95 usec
109: 2097149 bytes     14 times -->   5252.47 Mbps in    3046.18 usec
110: 2097152 bytes     16 times -->   5260.67 Mbps in    3041.44 usec
111: 2097155 bytes     16 times -->   5255.55 Mbps in    3044.40 usec
112: 3145725 bytes     16 times -->   5255.34 Mbps in    4566.78 usec
113: 3145728 bytes     14 times -->   5259.21 Mbps in    4563.43 usec
114: 3145731 bytes     14 times -->   5263.82 Mbps in    4559.43 usec
115: 4194301 bytes      7 times -->   5256.99 Mbps in    6087.13 usec
116: 4194304 bytes      8 times -->   5265.97 Mbps in    6076.75 usec
117: 4194307 bytes      8 times -->   5257.70 Mbps in    6086.32 usec
118: 6291453 bytes      8 times -->   5242.18 Mbps in    9156.50 usec
119: 6291456 bytes      7 times -->   5238.10 Mbps in    9163.64 usec
120: 6291459 bytes      7 times -->   5223.28 Mbps in    9189.64 usec
121: 8388605 bytes      3 times -->   5192.27 Mbps in   12326.00 usec
122: 8388608 bytes      4 times -->   5206.80 Mbps in   12291.61 usec
123: 8388611 bytes      4 times -->   5197.97 Mbps in   12312.50 usec


host1:$ sudo LD_PRELOAD=/usr/lib/x86_64-linux-gnu/rsocket/librspreload.so iperf -c 172.17.0.2
------------------------------------------------------------
Client connecting to 172.17.0.2, TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  3] local 172.17.0.1 port 36085 connected with 172.17.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  7.82 GBytes  6.72 Gbits/sec



I'm also trying to set the IPoIB mode to connected, but without success:

host1:$ sudo echo connected > /sys/class/net/ib0/mode
host1:$ sudo cat /sys/class/net/ib0/mode
datagram
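
A side note on the commands above: with "sudo echo connected > /sys/...", the
redirection is opened by the caller's unprivileged shell rather than by sudo,
so the write never reaches /sys as root. A sketch of the usual workaround
(this also assumes the kernel's IPoIB connected-mode support is built in):

$ echo connected | sudo tee /sys/class/net/ib0/mode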


Any advice?

* RE: Slow performance with librspreload.so
From: Hefty, Sean @ 2013-08-28 15:50 UTC (permalink / raw)
  To: 'Gandalf Corvotempesta',
	'linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org'

> i've connected just one port between two hosts.
> Ports is detected properly as 20Gb/s  (4x DDR) but i'm unable to reach
> speed over 5Gbit/s:

It's possible that this is falling back to using normal TCP sockets.

Can you run the rstream test program to verify that you can get faster than 5 Gbps?

rstream without any options will use rsockets directly.  If you use the -T s option, it will use standard TCP sockets.  You can use LD_PRELOAD with -T s to verify that the preload brings your performance to the same level as using rsockets directly.
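
For reference, the invocations would look something like this (the server side
is started first, with no address; the addresses and preload path are the ones
used elsewhere in this thread, and -T s goes on both ends when comparing
against standard sockets):

server$ rstream
client$ rstream -s 172.17.0.2                  # rsockets directly
client$ rstream -s 172.17.0.2 -T s             # standard TCP sockets
client$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/rsocket/librspreload.so \
            rstream -s 172.17.0.2 -T s         # TCP sockets + preload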

- Sean

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-28 16:19 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/28 Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:
> Can you run the rstream test program to verify that you can get faster than 5 Gbps?
>
> rstream without any options will use rsockets directly.  If you use the -T s option, it will use standard TCP sockets.  You can use LD_PRELOAD with -T s to verify that the preload brings your performance to the same level as using rsockets directly.

5Gb/s with rstream:

$ sudo ./rstream -s 172.17.0.2
name      bytes   xfers   iters   total       time     Gb/sec    usec/xfer
64_lat    64      1       100k    12m         0.70s      0.15       3.52
4k_lat    4k      1       10k     78m         0.29s      2.23      14.69
64k_lat   64k     1       1k      125m        0.21s      4.94     106.07
1m_lat    1m      1       100     200m        0.30s      5.61    1495.89
64_bw     64      100k    1       12m         0.25s      0.42       1.23
4k_bw     4k      10k     1       78m         0.13s      5.17       6.34
64k_bw    64k     1k      1       125m        0.19s      5.58      94.03
1m_bw     1m      100     1       200m        0.30s      5.64    1486.53

* RE: Slow performance with librspreload.so
From: Hefty, Sean @ 2013-08-28 17:16 UTC (permalink / raw)
  To: 'Gandalf Corvotempesta'
  Cc: 'linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org'

> > Can you run the rstream test program to verify that you can get faster
> > than 5 Gbps?
> >
> > rstream without any options will use rsockets directly.  If you use the
> > -T s option, it will use standard TCP sockets.  You can use LD_PRELOAD
> > with -T s to verify that the preload brings your performance to the same
> > level as using rsockets directly.
> 
> 5Gb/s with rstream:

Can you explain your environment more?  The performance seems low.

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-28 18:24 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/28 Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:
> Can you explain your environment more?  The performance seems low.

Ubuntu 13.04 Server on both nodes.

node1:

$ cat /proc/cpuinfo | grep 'model name'
model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz

$ free -m
             total       used       free     shared    buffers     cached
Mem:         16022        966      15056          0         95        534
-/+ buffers/cache:        336      15686
Swap:        16353          0      16353


node2:

$ cat /proc/cpuinfo | grep 'model name'
model name : Intel(R) Xeon(R) CPU            3065  @ 2.33GHz
model name : Intel(R) Xeon(R) CPU            3065  @ 2.33GHz

$ free -m
             total       used       free     shared    buffers     cached
Mem:          2001        718       1282          0         53        516
-/+ buffers/cache:        148       1853
Swap:         2044          0       2044

* RE: Slow performance with librspreload.so
From: Hefty, Sean @ 2013-08-28 19:20 UTC (permalink / raw)
  To: 'Gandalf Corvotempesta'
  Cc: 'linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org'

> 2013/8/28 Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:
> > Can you explain your environment more?  The performance seems low.
> 
> Ubuntu 13.04 Server on both nodes.
> 
> node1:
> 
> $ cat /proc/cpuinfo | grep 'model name'
> model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz


> $ cat /proc/cpuinfo | grep 'model name'
> model name : Intel(R) Xeon(R) CPU            3065  @ 2.33GHz

Can you run rstream using the loopback address?

* RE: Slow performance with librspreload.so
From: Hefty, Sean @ 2013-08-28 20:05 UTC (permalink / raw)
  To: 'Gandalf Corvotempesta'
  Cc: 'linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org'

> Ubuntu 13.04 Server on both nodes.
> 
> node1:
> 
> $ cat /proc/cpuinfo | grep 'model name'
> model name : Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz

If you can provide your PCIe information and the results from running the perftest tools (rdma_bw), that could help as well.
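
For example, something along these lines (rdma_bw comes from the perftest
package; it is started with no arguments on one node and pointed at that node
from the other, and lspci identifies the HCA to inspect):

$ lspci | grep -i infiniband
node1$ rdma_bw
node2$ rdma_bw 172.17.0.1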

* Fwd: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-29 13:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

---------- Forwarded message ----------
From: Gandalf Corvotempesta <gandalf.corvotempesta-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Date: 2013/8/29
Subject: Re: Slow performance with librspreload.so
To: "Hefty, Sean" <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>


2013/8/28 Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:
> If you can provide your PCIe information and the results from running the perftest tools (rdma_bw), that could help as well.

node1 (172.17.0.1 is ip configured on ib0):

$ sudo ./rstream -s 172.17.0.1
name      bytes   xfers   iters   total       time     Gb/sec    usec/xfer
64_lat    64      1       100k    12m         0.26s      0.40       1.28
4k_lat    4k      1       10k     78m         0.17s      3.96       8.28
64k_lat   64k     1       1k      125m        0.11s      9.86      53.19
1m_lat    1m      1       100     200m        0.14s     12.34     679.73
64_bw     64      100k    1       12m         0.06s      1.75       0.29
4k_bw     4k      10k     1       78m         0.06s     11.79       2.78
64k_bw    64k     1k      1       125m        0.09s     12.20      42.97
1m_bw     1m      100     1       200m        0.13s     12.78     656.55

$ lspci | grep -i infiniband
04:00.0 InfiniBand: Mellanox Technologies MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE] (rev a0)


node2 (172.17.0.2 is ip configured on ib0):
$ sudo ./rstream -s 172.17.0.2
name      bytes   xfers   iters   total       time     Gb/sec    usec/xfer
64_lat    64      1       100k    12m         1.10s      0.09       5.49
4k_lat    4k      1       10k     78m         0.43s      1.53      21.49
64k_lat   64k     1       1k      125m        0.29s      3.64     143.99
1m_lat    1m      1       100     200m        0.37s      4.53    1852.70
64_bw     64      100k    1       12m         0.42s      0.24       2.12
4k_bw     4k      10k     1       78m         0.16s      4.16       7.87
64k_bw    64k     1k      1       125m        0.23s      4.49     116.69
1m_bw     1m      100     1       200m        0.36s      4.63    1813.52

$ lspci | grep -i infiniband
02:00.0 InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex (Tavor compatibility mode) (rev 20)
(this is a Voltaire 400Ex-D card)

Same result by using 127.0.0.1 on both hosts, obviously.

I'm unable to run rdma_bw because of the different CPU speeds, and my
version doesn't have the flag to ignore that check.
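
(As an aside, and only an assumption about this particular perftest build:
the newer ib_send_bw/ib_write_bw tools accept -F to skip the CPU-frequency
sanity check, e.g. "ib_send_bw -F" on one node and "ib_send_bw -F 172.17.0.2"
on the other.)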

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-29 13:03 UTC (permalink / raw)
  To: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/29 Gandalf Corvotempesta <gandalf.corvotempesta-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>:
> node1 (172.17.0.1 is ip configured on ib0):
>
> $ sudo ./rstream -s 172.17.0.1
> name      bytes   xfers   iters   total       time     Gb/sec    usec/xfer
> 64_lat    64      1       100k    12m         0.26s      0.40       1.28
> 4k_lat    4k      1       10k     78m         0.17s      3.96       8.28
> 64k_lat   64k     1       1k      125m        0.11s      9.86      53.19
> 1m_lat    1m      1       100     200m        0.14s     12.34     679.73
> 64_bw     64      100k    1       12m         0.06s      1.75       0.29
> 4k_bw     4k      10k     1       78m         0.06s     11.79       2.78
> 64k_bw    64k     1k      1       125m        0.09s     12.20      42.97
> 1m_bw     1m      100     1       200m        0.13s     12.78     656.55

With standard sockets:

$ sudo ./rstream -s 172.17.0.1 -T s
name      bytes   xfers   iters   total       time     Gb/sec    usec/xfer
64_lat    64      1       100k    12m         1.07s      0.10       5.36
4k_lat    4k      1       10k     78m         0.13s      4.89       6.70
64k_lat   64k     1       1k      125m        0.06s     18.38      28.52
1m_lat    1m      1       100     200m        0.06s     25.90     323.89
64_bw     64      100k    1       12m         0.98s      0.10       4.91
4k_bw     4k      10k     1       78m         0.12s      5.29       6.20
64k_bw    64k     1k      1       125m        0.04s     27.04      19.39
1m_bw     1m      100     1       200m        0.05s     31.52     266.14

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-29 18:57 UTC (permalink / raw)
  To: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/29 Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:
> 12 Gbps on a 20 Gb link actually seems reasonable to me.  I only see around 25 Gbps on a 40 Gb link, with raw perftest performance coming in at about 26 Gbps.

Ok.
I think I've connected the HBA to the wrong PCI-Express slot.
I have a Dell R200 that has 3 PCI-Express slots, but one of them is only x4;
I've probably connected the card to that one.

Tomorrow I'll try to connect the HBA to the x8 slot.
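
To confirm which slot the card ends up in, the negotiated PCIe link width can
be read back from lspci (04:00.0 is the ConnectX address shown earlier in this
thread; LnkCap is what the card supports, LnkSta what was actually negotiated):

$ sudo lspci -s 04:00.0 -vv | grep -i 'lnkcap\|lnksta'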

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-30  8:19 UTC (permalink / raw)
  To: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/29 Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:
> 12 Gbps on a 20 Gb link actually seems reasonable to me.  I only see around 25 Gbps on a 40 Gb link, with raw perftest performance coming in at about 26 Gbps.

Is this an rstream limit or an IB limit? I've read somewhere that DDR
should transfer at 16 Gbps.

By the way, moving the HBA to the second slot brought me to 12 Gbps on
both hosts.
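
For what it's worth, the 16 Gbps figure is just the DDR signalling rate minus
the 8b/10b line-coding overhead: a 4X DDR link signals at 20 Gb/s, and
20 * 8/10 = 16 Gb/s of usable data rate, before any transport or PCIe overhead.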

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-30  8:23 UTC (permalink / raw)
  To: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/30 Gandalf Corvotempesta <gandalf.corvotempesta-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>:
> By the way, moving the HBA on the second slot, brought me to 12Gbps on
> both hosts.

This is great:

$ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 172.17.0.2
------------------------------------------------------------
Client connecting to 172.17.0.2, TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  3] local 172.17.0.1 port 34108 connected with 172.17.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  12.2 GBytes  10.5 Gbits/sec
$ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 172.17.0.2 -P 2
------------------------------------------------------------
Client connecting to 172.17.0.2, TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 172.17.0.1 port 55323 connected with 172.17.0.2 port 5001
[  3] local 172.17.0.1 port 36579 connected with 172.17.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  7.46 GBytes  6.41 Gbits/sec
[  3]  0.0-10.0 sec  7.46 GBytes  6.41 Gbits/sec
[SUM]  0.0-10.0 sec  14.9 GBytes  12.8 Gbits/sec


With 2 parallel connections I'm able to reach the "rate" speed with iperf,
the same speed achieved with rstream.
Is iperf affected by the IPoIB MTU size when used with librspreload.so?

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-30 10:29 UTC (permalink / raw)
  To: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/30 Gandalf Corvotempesta <gandalf.corvotempesta-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>:
> Is iperf affected by IPoIB MTU size when used with librspreload.so ?

Another strange issue:

$ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 172.17.0.2
------------------------------------------------------------
Client connecting to 172.17.0.2, TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  3] local 172.17.0.1 port 57926 connected with 172.17.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  12.2 GBytes  10.4 Gbits/sec

$ iperf -c 172.17.0.2
------------------------------------------------------------
Client connecting to 172.17.0.2, TCP port 5001
TCP window size:  648 KByte (default)
------------------------------------------------------------
[  3] local 172.17.0.1 port 58113 connected with 172.17.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  14.5 GBytes  12.5 Gbits/sec



rsocket slower than IPoIB?

* RE: Slow performance with librspreload.so
From: Hefty, Sean @ 2013-08-30 15:51 UTC (permalink / raw)
  To: Gandalf Corvotempesta, linux-rdma-u79uwXL29TY76Z2rM5mHXA

> With 2 parallel connections I'm able to reach the "rate" speed with iperf,
> the same speed achieved with rstream.
> Is iperf affected by the IPoIB MTU size when used with librspreload.so?

Not directly.  The ipoib mtu is usually set based on the mtu of the IB link.  The latter does affect rsocket performance.  However if the ipoib mtu is changed separately from the IB link mtu, it will not affect rsockets.
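
The two values can be checked side by side -- the IPoIB MTU from the netdev
and the IB link MTU from the verbs device, for example:

$ ip link show ib0 | grep -o 'mtu [0-9]*'
$ ibv_devinfo | grep -i mtu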

- Sean

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-30 16:26 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/30 Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:
> Not directly.  The ipoib mtu is usually set based on the mtu of the IB link.  The latter does affect rsocket performance.  However if the ipoib mtu is changed separately from the IB link mtu, it will not affect rsockets.

Actually I'm going faster with IPoIB than with rsockets.
How can I change the MTU of the IB link?

* RE: Slow performance with librspreload.so
From: Hefty, Sean @ 2013-08-30 17:38 UTC (permalink / raw)
  To: Gandalf Corvotempesta, linux-rdma-u79uwXL29TY76Z2rM5mHXA

> Another strange issue:
> 
> $ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 172.17.0.2
> ------------------------------------------------------------
> Client connecting to 172.17.0.2, TCP port 5001
> TCP window size:  128 KByte (default)

Increasing the window size may improve the results.  E.g. on my systems I go from 17.7 Gbps at 128 KB to 24.3 Gbps for 512 KB.
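
For example, with iperf 2 the window can be requested explicitly on both ends
(the size here is illustrative):

server$ iperf -s -w 512K
client$ LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 172.17.0.2 -w 512K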

> ------------------------------------------------------------
> [  3] local 172.17.0.1 port 57926 connected with 172.17.0.2 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  12.2 GBytes  10.4 Gbits/sec
> 
> $ iperf -c 172.17.0.2
> ------------------------------------------------------------
> Client connecting to 172.17.0.2, TCP port 5001
> TCP window size:  648 KByte (default)
> ------------------------------------------------------------
> [  3] local 172.17.0.1 port 58113 connected with 172.17.0.2 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  14.5 GBytes  12.5 Gbits/sec
> 
> rsocket slower than IPoIB ?

This is surprising to me - just getting 12.5 Gbps out of ipoib is surprising.  Does iperf use sendfile()?

My results with iperf (version 2.0.5) over ipoib (default configurations) vary considerably based on the TCP window size.  (Note that this is a 40 Gbps link.)  Results summarized:

TCP window size: 27.9 KByte (default)
 [  3]  0.0-10.0 sec  12.8 GBytes  11.0 Gbits/sec

TCP window size:  416 KByte (WARNING: requested  500 KByte)
[  3]  0.0-10.0 sec  8.19 GBytes  7.03 Gbits/sec

TCP window size:  250 KByte (WARNING: requested  125 KByte)
 [  3]  0.0-10.0 sec  4.99 GBytes  4.29 Gbits/sec

I'm guessing that there are some settings I can change to increase the ipoib performance on my systems.  Using rspreload, I get:

LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 192.168.0.103
TCP window size:  512 KByte (default)
[  3]  0.0-10.0 sec  28.3 GBytes  24.3 Gbits/sec

It seems that ipoib bandwidth should be close to rsockets, similar to what you see.  I also don't understand the effect that the TCP window size is having on the results.  The smallest window gives the best bandwidth for ipoib?!

- Sean

* Re: Slow performance with librspreload.so
From: Atchley, Scott @ 2013-08-30 18:08 UTC (permalink / raw)
  To: Hefty, Sean; +Cc: Gandalf Corvotempesta, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On Aug 30, 2013, at 1:38 PM, "Hefty, Sean" <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote:

>> Another strange issue:
>> 
>> $ sudo LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 172.17.0.2
>> ------------------------------------------------------------
>> Client connecting to 172.17.0.2, TCP port 5001
>> TCP window size:  128 KByte (default)
> 
> Increasing the window size may improve the results.  E.g. on my systems I go from 17.7 Gbps at 128 KB to 24.3 Gbps for 512 KB.
> 
>> ------------------------------------------------------------
>> [  3] local 172.17.0.1 port 57926 connected with 172.17.0.2 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-10.0 sec  12.2 GBytes  10.4 Gbits/sec
>> 
>> $ iperf -c 172.17.0.2
>> ------------------------------------------------------------
>> Client connecting to 172.17.0.2, TCP port 5001
>> TCP window size:  648 KByte (default)
>> ------------------------------------------------------------
>> [  3] local 172.17.0.1 port 58113 connected with 172.17.0.2 port 5001
>> [ ID] Interval       Transfer     Bandwidth
>> [  3]  0.0-10.0 sec  14.5 GBytes  12.5 Gbits/sec
>> 
>> rsocket slower than IPoIB ?
> 
> This is surprising to me - just getting 12.5 Gbps out of ipoib is surprising.  Does iperf use sendfile()?

I have a pair of nodes connected by QDR via a switch. Using normal IPoIB, a single Netperf can reach 18.4 Gb/s if I bind to the same core that the IRQ handler is bound to. With four concurrent Netperfs, I can reach 23 Gb/s. This is in datagram mode. Connected mode is slower.
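
A rough sketch of that kind of pinning (interface names, IRQ numbers and the
core are illustrative):

$ grep mlx4 /proc/interrupts               # find the HCA's IRQ numbers
$ cat /proc/irq/<N>/smp_affinity_list      # see which core services IRQ <N>
$ taskset -c <core> netperf -H 172.17.0.2  # run netperf pinned to that core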

I have not tried rsockets on these nodes.

Scott


> 
> My results with iperf (version 2.0.5) over ipoib (default configurations) vary considerably based on the TCP window size.  (Note that this is a 40 Gbps link.)  Results summarized:
> 
> TCP window size: 27.9 KByte (default)
> [  3]  0.0-10.0 sec  12.8 GBytes  11.0 Gbits/sec
> 
> TCP window size:  416 KByte (WARNING: requested  500 KByte)
> [  3]  0.0-10.0 sec  8.19 GBytes  7.03 Gbits/sec
> 
> TCP window size:  250 KByte (WARNING: requested  125 KByte)
> [  3]  0.0-10.0 sec  4.99 GBytes  4.29 Gbits/sec
> 
> I'm guessing that there are some settings I can change to increase the ipoib performance on my systems.  Using rspreload, I get:
> 
> LD_PRELOAD=/usr/local/lib/rsocket/librspreload.so iperf -c 192.168.0.103
> TCP window size:  512 KByte (default)
> [  3]  0.0-10.0 sec  28.3 GBytes  24.3 Gbits/sec
> 
> It seems that ipoib bandwidth should be close to rsockets, similar to what you see.  I also don't understand the effect that the TCP window size is having on the results.  The smallest window gives the best bandwidth for ipoib?!
> 
> - Sean

* RE: Slow performance with librspreload.so
From: Rupert Dance @ 2013-08-30 18:19 UTC (permalink / raw)
  To: 'Gandalf Corvotempesta', 'Hefty, Sean'
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

One way to set or check mtu is with the ibportstate utility:

Usage: ibportstate [options] <dest dr_path|lid|guid> <portnum> [<op>]
Supported ops: enable, disable, reset, speed, width, query, down, arm,
active, vls, mtu, lid, smlid, lmc
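
For instance, querying the current port settings (2 being the LID of the mthca
port in this thread, 1 the port number) dumps the port's PortInfo fields:

$ sudo ibportstate 2 1 query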

-----Original Message-----
From: linux-rdma-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
[mailto:linux-rdma-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Gandalf Corvotempesta
Sent: Friday, August 30, 2013 12:27 PM
To: Hefty, Sean
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: Slow performance with librspreload.so

2013/8/30 Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>:
> Not directly.  The ipoib mtu is usually set based on the mtu of the IB
> link.  The latter does affect rsocket performance.  However if the ipoib
> mtu is changed separately from the IB link mtu, it will not affect rsockets.

Actually I'm going faster with IPoIB than with rsockets.
How can I change the MTU of the IB link?

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-31  9:20 UTC (permalink / raw)
  To: Rupert Dance; +Cc: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/30 Rupert Dance <rsdance-rzwDkyJvnYokmLvzuZlaBw@public.gmane.org>:
> One way to set or check mtu is with the ibportstate utility:
>
> Usage: ibportstate [options] <dest dr_path|lid|guid> <portnum> [<op>]
> Supported ops: enable, disable, reset, speed, width, query, down, arm,
> active, vls, mtu, lid, smlid, lmc

I've tried but max MTU is 2048 on one device:

$ sudo ibv_devinfo
hca_id: mthca0
transport: InfiniBand (0)
fw_ver: 4.7.600
node_guid: 0008:f104:0398:14cc
sys_image_guid: 0008:f104:0398:14cf
vendor_id: 0x08f1
vendor_part_id: 25208
hw_ver: 0xA0
board_id: VLT0040010001
phys_port_cnt: 2
port: 1
state: PORT_ACTIVE (4)
max_mtu: 2048 (4)
active_mtu: 2048 (4)
sm_lid: 1
port_lid: 2
port_lmc: 0x00
link_layer: InfiniBand

Any workaround? Maybe a firmware update?

* RE: Slow performance with librspreload.so
From: Rupert Dance @ 2013-08-31 11:34 UTC (permalink / raw)
  To: 'Gandalf Corvotempesta'
  Cc: 'Hefty, Sean', linux-rdma-u79uwXL29TY76Z2rM5mHXA

The Vendor ID indicates that this is a Voltaire card, which probably means it
is an older card. Some of the early Mellanox-based cards did not support
anything bigger than 2048.

  00-08-F1   (hex)		Voltaire
  0008F1     (base 16)		Voltaire
  				9 Hamenofim st.
				Herzelia  46725
				ISRAEL

Checking for FW updates cannot hurt, but you may well be restricted to 2048.

-----Original Message-----
From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org] 
Sent: Saturday, August 31, 2013 5:21 AM
To: Rupert Dance
Cc: Hefty, Sean; linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: Slow performance with librspreload.so

2013/8/30 Rupert Dance <rsdance-rzwDkyJvnYokmLvzuZlaBw@public.gmane.org>:
> One way to set or check mtu is with the ibportstate utility:
>
> Usage: ibportstate [options] <dest dr_path|lid|guid> <portnum> [<op>] 
> Supported ops: enable, disable, reset, speed, width, query, down, arm, 
> active, vls, mtu, lid, smlid, lmc

I've tried but max MTU is 2048 on one device:

$ sudo ibv_devinfo
hca_id: mthca0
transport: InfiniBand (0)
fw_ver: 4.7.600
node_guid: 0008:f104:0398:14cc
sys_image_guid: 0008:f104:0398:14cf
vendor_id: 0x08f1
vendor_part_id: 25208
hw_ver: 0xA0
board_id: VLT0040010001
phys_port_cnt: 2
port: 1
state: PORT_ACTIVE (4)
max_mtu: 2048 (4)
active_mtu: 2048 (4)
sm_lid: 1
port_lid: 2
port_lmc: 0x00
link_layer: InfiniBand

any workaround? Maybe a firmware update ?



* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-08-31 19:51 UTC (permalink / raw)
  To: Rupert Dance; +Cc: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/8/31 Rupert Dance <rsdance-rzwDkyJvnYokmLvzuZlaBw@public.gmane.org>:
> The Vendor ID indicates that this is a Voltaire card which probably means it
> is an older card. Some of the early Mellanox based cards did not support
> anything bigger than 2048.

Yes, it's an older card used just for this test.
By the way, would increasing the MTU to 4096 give me more performance?

* RE: Slow performance with librspreload.so
From: Rupert Dance @ 2013-09-01 13:28 UTC (permalink / raw)
  To: 'Gandalf Corvotempesta'
  Cc: 'Hefty, Sean', linux-rdma-u79uwXL29TY76Z2rM5mHXA

My guess is that it will not make a huge difference and that the solution
lies elsewhere.

-----Original Message-----
From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org] 
Sent: Saturday, August 31, 2013 3:51 PM
To: Rupert Dance
Cc: Hefty, Sean; linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: Slow performance with librspreload.so

2013/8/31 Rupert Dance <rsdance-rzwDkyJvnYokmLvzuZlaBw@public.gmane.org>:
> The Vendor ID indicates that this is a Voltaire card which probably 
> means it is an older card. Some of the early Mellanox based cards did 
> not support anything bigger than 2048.

Yes, it's an older card used just for this test.
By the way, increasing MTU to 4096 will give me more performance?



* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-09-01 17:41 UTC (permalink / raw)
  To: Rupert Dance; +Cc: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/9/1 Rupert Dance <rsdance-rzwDkyJvnYokmLvzuZlaBw@public.gmane.org>:
> My guess is that it will not make a huge difference and that the solution
> lies elsewhere.

What is strange to me is that rsocket is slower than IPoIB and limited
to roughly 10 Gbit, while with IPoIB I'm able to reach 12.5 Gbit.

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-09-03  9:17 UTC (permalink / raw)
  To: Rupert Dance; +Cc: Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/9/1 Gandalf Corvotempesta <gandalf.corvotempesta-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>:
> What is strange to me is that rsocket is slower than IPoIB and limited
> to 10Gbit more or less. With IPoIB i'm able to reach 12.5 Gbit

qperf is giving the same strange speed:

FROM NODE1 to NODE2:
$ sudo qperf -ub 77.95.175.106 ud_lat ud_bw
ud_lat:
    latency  =  12.5 us
ud_bw:
    send_bw  =  12.5 Gb/sec
    recv_bw  =  12.5 Gb/sec


FROM NODE1 TO NODE2, slower and with more latency than remote host!
$ sudo qperf -ub 172.17.0.1 ud_lat ud_bw
ud_lat:
    latency  =  13.8 us
ud_bw:
    send_bw  =  11.9 Gb/sec
    recv_bw  =  11.9 Gb/sec


How can I check if this is due to a hardware bottleneck? CPU and RAM are good.

* Re: Slow performance with librspreload.so
From: Hal Rosenstock @ 2013-09-03 12:06 UTC (permalink / raw)
  To: Gandalf Corvotempesta
  Cc: Rupert Dance, Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 8/31/2013 3:51 PM, Gandalf Corvotempesta wrote:
> By the way, increasing MTU to 4096 will give me more performance?

With mthca, due to quirk, optimal performance is achieved at 1K MTU.
OpenSM can reduce the MTU in returned PathRecords to 1K when one end of
the path is mthca and actual path MTU is > 1K. This is controlled by
enable_quirks config parameter which defaults to FALSE (don't do this).
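
A sketch of how that would be turned on (the config file path is the typical
one and may differ per distro):

# in opensm's config file, e.g. /etc/opensm/opensm.conf
enable_quirks TRUE

followed by a restart of opensm so it re-reads its configuration.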

-- Hal

* Re: Slow performance with librspreload.so
From: Gandalf Corvotempesta @ 2013-09-03 12:21 UTC (permalink / raw)
  To: Hal Rosenstock
  Cc: Rupert Dance, Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/9/3 Hal Rosenstock <hal-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>:
> With mthca, due to quirk, optimal performance is achieved at 1K MTU.
> OpenSM can reduce the MTU in returned PathRecords to 1K when one end of
> the path is mthca and actual path MTU is > 1K. This is controlled by
> enable_quirks config parameter which defaults to FALSE (don't do this).

I'll try.

Actually these are my results, from node1 to node2

$ sudo qperf -ub  172.17.0.2 rc_bi_bw rc_lat rc_bw rc_rdma_read_lat
rc_rdma_read_bw rc_rdma_write_lat rc_rdma_write_bw tcp_lat tcp_bw
rc_bi_bw:
    bw  =  20.5 Gb/sec
rc_lat:
    latency  =  15.4 us
rc_bw:
    bw  =  13.7 Gb/sec
rc_rdma_read_lat:
    latency  =  12.9 us
rc_rdma_read_bw:
    bw  =  11.5 Gb/sec
rc_rdma_write_lat:
    latency  =  15.2 us
rc_rdma_write_bw:
    bw  =  13.7 Gb/sec
tcp_lat:
    latency  =  48.8 us
tcp_bw:
    bw  =  12.5 Gb/sec

I don't know if they are good for a DDR fabric.

* Re: Slow performance with librspreload.so
       [not found]                                                               ` <CAJH6TXhSoxFshtuE0YqdVShQKSgX_wNwxekjy1+ZcGHR0vjC9w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-09-16 16:18                                                                 ` Gandalf Corvotempesta
  0 siblings, 0 replies; 27+ messages in thread
From: Gandalf Corvotempesta @ 2013-09-16 16:18 UTC (permalink / raw)
  To: Hal Rosenstock
  Cc: Rupert Dance, Hefty, Sean, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/9/3 Gandalf Corvotempesta <gandalf.corvotempesta-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>:
> $ sudo qperf -ub  172.17.0.2 rc_bi_bw rc_lat rc_bw rc_rdma_read_lat
> rc_rdma_read_bw rc_rdma_write_lat rc_rdma_write_bw tcp_lat tcp_bw
> rc_bi_bw:
>     bw  =  20.5 Gb/sec
> rc_lat:
>     latency  =  15.4 us
> rc_bw:
>     bw  =  13.7 Gb/sec
> rc_rdma_read_lat:
>     latency  =  12.9 us
> rc_rdma_read_bw:
>     bw  =  11.5 Gb/sec
> rc_rdma_write_lat:
>     latency  =  15.2 us
> rc_rdma_write_bw:
>     bw  =  13.7 Gb/sec
> tcp_lat:
>     latency  =  48.8 us
> tcp_bw:
>     bw  =  12.5 Gb/sec
>
> I don't know if they are good for a DDR fabric.

Just to clarify: why am I getting the same bandwidth with
librspreload.so as with plain IPoIB?
Is there something I should check?
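
One thing worth ruling out is that the preload is silently falling back
to plain sockets. A quick check, assuming the librdmacm example programs
(rstream etc.) are installed, is to run rsockets natively, with no
LD_PRELOAD involved, and compare the numbers:

# on node2 (server side)
$ rstream

# on node1 (client side, talks rsockets directly)
$ rstream -s 172.17.0.2

If rstream tops out at the same ~12-13 Gb/sec, the limit is in
rsockets/verbs on this hardware rather than in the preload; if it goes
much higher, the preloaded NPtcp run is probably not using rsockets at all.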

end of thread, other threads:[~2013-09-16 16:18 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-08-28 15:20 Slow performance with librspreload.so Gandalf Corvotempesta
     [not found] ` <CAJH6TXgf2LeMH+1L290w_KZ5tTN7NWpQxntF58Z506G3h_qKVw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-28 15:50   ` Hefty, Sean
     [not found]     ` <1828884A29C6694DAF28B7E6B8A8237388CA937E-P5GAC/sN6hkd3b2yrw5b5LfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2013-08-28 16:19       ` Gandalf Corvotempesta
     [not found]         ` <CAJH6TXjEx+41G_7wvQybMXzb60tu-ha2d2Bu_J_erNDPJRbQFw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-28 17:16           ` Hefty, Sean
     [not found]             ` <1828884A29C6694DAF28B7E6B8A8237388CA96AD-P5GAC/sN6hkd3b2yrw5b5LfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2013-08-28 18:24               ` Gandalf Corvotempesta
     [not found]                 ` <CAJH6TXhAuSDytS5O1cJMg3iatq+STkwhPUG2zmexJ5tmt3Foqg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-28 19:20                   ` Hefty, Sean
2013-08-28 20:05                   ` Hefty, Sean
     [not found]                     ` <CAJH6TXgA0ghKX1P8UUAMFKY9o0xBJ0j4-kFa_M4a4ecdzoD0HA@mail.gmail.com>
     [not found]                       ` <CAJH6TXgA0ghKX1P8UUAMFKY9o0xBJ0j4-kFa_M4a4ecdzoD0HA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-29 13:03                         ` Fwd: " Gandalf Corvotempesta
2013-08-29 13:03                         ` Gandalf Corvotempesta
     [not found]                       ` <1828884A29C6694DAF28B7E6B8A8237388CA9C6B@ORSMSX109.amr.corp.intel.com>
     [not found]                         ` <CAJH6TXiYLKt3b1UFsZt7uFwDbWcDFnHNnS8CTO24Gt-2zn+Qiw@mail.gmail.com>
     [not found]                           ` <1828884A29C6694DAF28B7E6B8A8237388CA9D8A@ORSMSX109.amr.corp.intel.com>
     [not found]                             ` <1828884A29C6694DAF28B7E6B8A8237388CA9D8A-P5GAC/sN6hkd3b2yrw5b5LfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2013-08-29 18:57                               ` Gandalf Corvotempesta
2013-08-30  8:19                               ` Gandalf Corvotempesta
     [not found]                                 ` <CAJH6TXhWWMBbopDLZY2+rrNOm2m5gcmObj7Sr16u2qrNW_NHgw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-30  8:23                                   ` Gandalf Corvotempesta
     [not found]                                     ` <CAJH6TXgR=wVGyyHrpTKkBw-5M=A9-tGzjpYcV6NWpk8uKvFn8Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-30 10:29                                       ` Gandalf Corvotempesta
     [not found]                                         ` <CAJH6TXidOVJDTokOwxjCT9RRTOO6E_xbyG_K++YVkFx8NbhMTA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-30 17:38                                           ` Hefty, Sean
     [not found]                                             ` <1828884A29C6694DAF28B7E6B8A8237388CAA1B9-P5GAC/sN6hkd3b2yrw5b5LfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2013-08-30 18:08                                               ` Atchley, Scott
2013-08-30 15:51                                       ` Hefty, Sean
     [not found]                                         ` <CAJH6TXgu4L8gnqQX1fKZ=ioZDxUMnj=s3h0qkYh1_35VWpMJ1g@mail.gmail.com>
     [not found]                                         ` <1828884A29C6694DAF28B7E6B8A8237388CAA11B-P5GAC/sN6hkd3b2yrw5b5LfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2013-08-30 16:26                                           ` Gandalf Corvotempesta
     [not found]                                         ` <CAJH6TXgvViraH4SEcYydCerGyM6kK61eoiaENCy6PSf_1ocSVA@mail.gmail.com>
     [not found]                                           ` <CAJH6TXgvViraH4SEcYydCerGyM6kK61eoiaENCy6PSf_1ocSVA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-30 18:19                                             ` Rupert Dance
2013-08-31  9:20                                               ` Gandalf Corvotempesta
     [not found]                                                 ` <CAJH6TXgu4L8gnqQX1fKZ=ioZDxUMnj=s3h0qkYh1_35VWpMJ1g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-08-31 11:34                                                   ` Rupert Dance
2013-08-31 19:51                                                     ` Gandalf Corvotempesta
     [not found]                                                       ` <CAJH6TXiOToa2-EOj6Hz-rkVHt3tCSh4jnLbKZs69GhT7dFeH0A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-09-01 13:28                                                         ` Rupert Dance
2013-09-01 17:41                                                           ` Gandalf Corvotempesta
     [not found]                                                             ` <CAJH6TXj53RZmzqA94CfPJjXBetK7us5v=cFyoMmjVUNQ-oeqpQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-09-03  9:17                                                               ` Gandalf Corvotempesta
2013-09-03 12:06                                                         ` Hal Rosenstock
     [not found]                                                           ` <5225D0DD.7060803-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2013-09-03 12:21                                                             ` Gandalf Corvotempesta
     [not found]                                                               ` <CAJH6TXhSoxFshtuE0YqdVShQKSgX_wNwxekjy1+ZcGHR0vjC9w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-09-16 16:18                                                                 ` Gandalf Corvotempesta
