* [ath9k-devel] Very large datagram loss
From: Stephen Donecker @ 2012-06-14 2:37 UTC
To: ath9k-devel
Hi,
I am experiencing very high datagram loss between two identical cards
communicating over a single spatial stream in adhoc mode. With
minstrel_ht rate control the tx bitrate settles at MCS7 (150 Mbit/s),
and I get the following iperf results.
# iperf -c 192.168.11.2 -p 7777 -u -b 150m -t 10 -i 1
------------------------------------------------------------
Client connecting to 192.168.11.2, UDP port 7777
Sending 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.11.1 port 43287 connected with 192.168.11.2 port 7777
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 16.1 MBytes 135 Mbits/sec
[ 3] 1.0- 2.0 sec 16.3 MBytes 137 Mbits/sec
[ 3] 2.0- 3.0 sec 16.3 MBytes 137 Mbits/sec
[ 3] 3.0- 4.0 sec 15.9 MBytes 133 Mbits/sec
[ 3] 4.0- 5.0 sec 16.4 MBytes 138 Mbits/sec
[ 3] 5.0- 6.0 sec 16.0 MBytes 134 Mbits/sec
[ 3] 6.0- 7.0 sec 16.2 MBytes 136 Mbits/sec
[ 3] 7.0- 8.0 sec 16.0 MBytes 135 Mbits/sec
[ 3] 8.0- 9.0 sec 16.2 MBytes 136 Mbits/sec
[ 3] 0.0-10.0 sec 161 MBytes 135 Mbits/sec
[ 3] Sent 115053 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 92.0 MBytes 77.0 Mbits/sec 0.262 ms 49459/115052 (43%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
When I set minstrel_ht to fixed_rate MCS7, I get the following iperf results.
# iperf -c 192.168.11.2 -p 7777 -u -b 150m -t 10 -i 1
------------------------------------------------------------
Client connecting to 192.168.11.2, UDP port 7777
Sending 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.11.1 port 55776 connected with 192.168.11.2 port 7777
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 17.4 MBytes 146 Mbits/sec
[ 3] 1.0- 2.0 sec 17.4 MBytes 146 Mbits/sec
[ 3] 2.0- 3.0 sec 17.6 MBytes 147 Mbits/sec
[ 3] 3.0- 4.0 sec 17.4 MBytes 146 Mbits/sec
[ 3] 4.0- 5.0 sec 17.5 MBytes 147 Mbits/sec
[ 3] 5.0- 6.0 sec 17.6 MBytes 148 Mbits/sec
[ 3] 6.0- 7.0 sec 17.4 MBytes 146 Mbits/sec
[ 3] 7.0- 8.0 sec 17.4 MBytes 146 Mbits/sec
[ 3] 8.0- 9.0 sec 17.8 MBytes 149 Mbits/sec
[ 3] 0.0-10.0 sec 175 MBytes 147 Mbits/sec
[ 3] Sent 124930 datagrams
[ 3] Server Report:
[ 3] 0.0-10.2 sec 10.2 MBytes 8.39 Mbits/sec 4.527 ms 117609/124920 (94%)
[ 3] 0.0-10.2 sec 1 datagrams received out-of-order
In either case, the overall throughput and datagram loss percentages are
terrible. I believe the overall datagram loss should be no more than a
few percent.
Any ideas how I can determine where the packets are getting lost?
Thanks,
-Stephen
* [ath9k-devel] Very large datagram loss
From: Adrian Chadd @ 2012-06-16 21:09 UTC
To: ath9k-devel
Hi,
You should first report what hardware, which channel, what configuration, etc.
Next, you'll likely want to look at the TX and RX sides of things.
TX - check short/long retries, what the minstrel_ht rate control
debugging says. It should give you throughput information per rate
that it tries. Look at short/long retry counts.
RX - look at CRC errors.
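A minimal sketch of where those counters live, assuming a kernel built
with CONFIG_MAC80211_DEBUGFS and CONFIG_ATH9K_DEBUGFS and debugfs
mounted (the phy0/wlan0 names will vary per system):

  # per-rate minstrel_ht stats: success/attempt counts for every rate tried
  cat /sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/*/rc_stats
  # ath9k per-queue TX counters (queued/completed, retries, underruns)
  cat /sys/kernel/debug/ieee80211/phy0/ath9k/xmit
  # ath9k RX counters, including CRC errors
  cat /sys/kernel/debug/ieee80211/phy0/ath9k/recv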
Adrian
* [ath9k-devel] Very large datagram loss
From: Felix Fietkau @ 2012-06-16 21:28 UTC
To: ath9k-devel
On 2012-06-14 4:37 AM, Stephen Donecker wrote:
> Hi,
>
> I am experiencing very high datagram loss between two identical cards
> communicating over a single spatial stream in adhoc mode. With
> minstrel_ht rate control the tx bitrate settles at MCS7 (150 Mbit/s),
> and I get the following iperf results.
>
> # iperf -c 192.168.11.2 -p 7777 -u -b 150m -t 10 -i 1
> ------------------------------------------------------------
> Client connecting to 192.168.11.2, UDP port 7777
> Sending 1470 byte datagrams
> UDP buffer size: 160 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.11.1 port 43287 connected with 192.168.11.2 port 7777
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0- 1.0 sec 16.1 MBytes 135 Mbits/sec
> [ 3] 1.0- 2.0 sec 16.3 MBytes 137 Mbits/sec
> [ 3] 2.0- 3.0 sec 16.3 MBytes 137 Mbits/sec
> [ 3] 3.0- 4.0 sec 15.9 MBytes 133 Mbits/sec
> [ 3] 4.0- 5.0 sec 16.4 MBytes 138 Mbits/sec
> [ 3] 5.0- 6.0 sec 16.0 MBytes 134 Mbits/sec
> [ 3] 6.0- 7.0 sec 16.2 MBytes 136 Mbits/sec
> [ 3] 7.0- 8.0 sec 16.0 MBytes 135 Mbits/sec
> [ 3] 8.0- 9.0 sec 16.2 MBytes 136 Mbits/sec
> [ 3] 0.0-10.0 sec 161 MBytes 135 Mbits/sec
> [ 3] Sent 115053 datagrams
> [ 3] Server Report:
> [ 3] 0.0-10.0 sec 92.0 MBytes 77.0 Mbits/sec 0.262 ms 49459/115052 (43%)
> [ 3] 0.0-10.0 sec 1 datagrams received out-of-order
>
> When I set minstrel_ht to fixed_rate MCS7, I get the following iperf results.
>
> # iperf -c 192.168.11.2 -p 7777 -u -b 150m -t 10 -i 1
> ------------------------------------------------------------
> Client connecting to 192.168.11.2, UDP port 7777
> Sending 1470 byte datagrams
> UDP buffer size: 160 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.11.1 port 55776 connected with 192.168.11.2 port 7777
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0- 1.0 sec 17.4 MBytes 146 Mbits/sec
> [ 3] 1.0- 2.0 sec 17.4 MBytes 146 Mbits/sec
> [ 3] 2.0- 3.0 sec 17.6 MBytes 147 Mbits/sec
> [ 3] 3.0- 4.0 sec 17.4 MBytes 146 Mbits/sec
> [ 3] 4.0- 5.0 sec 17.5 MBytes 147 Mbits/sec
> [ 3] 5.0- 6.0 sec 17.6 MBytes 148 Mbits/sec
> [ 3] 6.0- 7.0 sec 17.4 MBytes 146 Mbits/sec
> [ 3] 7.0- 8.0 sec 17.4 MBytes 146 Mbits/sec
> [ 3] 8.0- 9.0 sec 17.8 MBytes 149 Mbits/sec
> [ 3] 0.0-10.0 sec 175 MBytes 147 Mbits/sec
> [ 3] Sent 124930 datagrams
> [ 3] Server Report:
> [ 3] 0.0-10.2 sec 10.2 MBytes 8.39 Mbits/sec 4.527 ms 117609/124920 (94%)
> [ 3] 0.0-10.2 sec 1 datagrams received out-of-order
>
> In either case, the overall throughput and datagram loss percentages
> are terrible. I believe the overall datagram loss should be no more
> than a few percent.
Your expectations seem quite wrong. A PHY rate of 150 Mbit/s will not
get you anywhere near 150 Mbit/s of UDP throughput; there is a lot of
802.11-related overhead in between.
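As a rough back-of-envelope illustration (the timing figures below are
assumed round numbers, not measurements): a 1470-byte datagram is about
1540 bytes on the air once UDP/IP/802.11 headers are added, so at
150 Mbit/s:

  data airtime: 1540 B * 8 / 150e6                    ~  82 us
  fixed per-frame cost (preamble, DIFS, backoff,
  block-ack; assumed)                                 ~ 120 us
  ceiling if every frame pays the fixed cost:
  12320 bits / 202 us                                 ~  61 Mbit/s

A-MPDU aggregation exists precisely to amortize that fixed cost over
many frames, which is why UDP goodput on a 150 Mbit/s PHY tops out
well below 150 Mbit/s even in the best case.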
To check some stats on tx rate and retransmissions, take a look at
/sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/*/rc_stats
(without fixed-rate).
One reason why the fixed-rate iperf looks so much worse is that it
disables aggregation.
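A quick way to see whether aggregation is actually happening (same
rc_stats path as above; phy, netdev, and station names vary):

  grep 'A-MPDU' /sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/*/rc_stats

rc_stats ends with an "Average A-MPDU length" line; a value near 1 means
frames are effectively going out unaggregated.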
- Felix
* [ath9k-devel] Very large datagram loss
From: Stephen Donecker @ 2012-06-18 22:54 UTC
To: ath9k-devel
Felix,
Thanks for your reply.
> Your expectations seem quite wrong. A PHY rate of 150 Mbit/s will not
> get you anywhere near 150 Mbit/s of UDP throughput; there is a lot of
> 802.11-related overhead in between.
My tests above show that I was only getting 77.0 Mbit/s of UDP
throughput at a PHY rate of 150 Mbit/s. I don't think it is unreasonable
to expect better performance than that.
My setup:
Ubiquiti RouterStations (AR7161 @ 680 MHz)
Ubiquiti SR71-15 (AR9220/AR9280, 802.11an)
OpenWrt r32347
* [ath9k-devel] Very large datagram loss
From: Stephen Donecker @ 2012-06-19 4:07 UTC
To: ath9k-devel
Adrian,
Thanks for your reply.
> You should first report what hardware, which channel, what configuration, etc.
Ubiquiti RouterStation (Atheros AR7161 @ 680 MHz)
Ubiquiti SR71-15 (Atheros AR9220/AR9280), 802.11n 2T2R
OpenWrt r32347
mode adhoc
channel 157
frequency 5785MHz
channel type HT40+
rate control minstrel_ht
dual-stream MCS15 (300 Mbit/s)
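For reference, a minimal iw sketch that would bring up such a link by
hand (assumed commands with a placeholder SSID; on OpenWrt this is
normally configured through /etc/config/wireless instead, and the HT40+
argument needs a reasonably recent iw):

  iw phy phy0 interface add adhoc0 type ibss
  ip link set adhoc0 up
  iw dev adhoc0 ibss join <ssid> 5785 HT40+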
> Next, you'll likely want to look at the TX and RX sides of things.
I have been doing some extensive testing and discovered that a lot of
packets are being dropped on the tx side, with few if any drops on the
rx side. With a PHY rate of 300 Mbit/s I get about 100 Mbit/s of UDP
throughput.
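One way to localize loss like that (an assumed method, not something
from this thread): snapshot the interface counters on both ends around
an iperf run and see which side's drop counters move:

  # run on sender and receiver, before and after the test
  ip -s link show dev wlan0   # compare TX dropped vs RX errors/dropped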
> TX - check short/long retries, what the minstrel_ht rate control
> debugging says. It should give you throughput information per rate
> that it tries. Look at short/long retry counts.
> RX - look at CRC errors.
>
long_retry_limit 4
short_retry_limit 7
type          rate    throughput  ewma prob  this prob  this succ/attempt    success   attempts
HT20/LGI      MCS0         6.6       99.9      100.0         0(  0)              989        990
HT20/LGI      MCS1        13.0       99.9      100.0         0(  0)              989        991
HT20/LGI      MCS2        19.1       99.9      100.0         0(  0)             1026       1027
HT20/LGI      MCS3        25.0       99.9      100.0         0(  0)             1060       1065
HT20/LGI      MCS4        36.2       99.6      100.0         0(  0)             1036       1040
HT20/LGI      MCS5        46.5       99.9      100.0         0(  0)              959        962
HT20/LGI      MCS6        51.2       99.9      100.0         0(  0)              969        973
HT20/LGI      MCS7        56.9       99.7      100.0         0(  0)              971        977
HT20/LGI      MCS8        13.0       99.9      100.0         0(  0)              969        974
HT20/LGI      MCS9        25.0       99.9      100.0         0(  0)              974        977
HT20/LGI      MCS10       36.3       99.9      100.0         0(  0)             1007       1014
HT20/LGI      MCS11       46.5       99.9      100.0         0(  0)             1017       1020
HT20/LGI      MCS12       66.2       99.9      100.0         0(  0)             1058       1062
HT20/LGI      MCS13       81.2       99.9      100.0         0(  0)              973        977
HT20/LGI      MCS14       90.0       99.9      100.0         0(  0)              966        968
HT20/LGI      MCS15       86.8       89.4      100.0         0(  0)              969        976
HT40/LGI      MCS0        13.5       99.9      100.0         0(  0)              983        986
HT40/LGI      MCS1        26.0       99.9      100.0         0(  0)              996       1000
HT40/LGI      MCS2        37.4       99.9      100.0         0(  0)             1014       1018
HT40/LGI      MCS3        48.2       99.9      100.0         0(  0)             1026       1033
HT40/LGI      MCS4        68.0       99.9      100.0         0(  0)             1059       1062
HT40/LGI      MCS5        84.0      100.0      100.0         0(  0)              975        975
HT40/LGI      MCS6        93.4       99.9      100.0         0(  0)              970        974
HT40/LGI      MCS7       101.0       99.9      100.0         0(  0)              976        978
HT40/LGI      MCS8        25.4       97.4      100.0         0(  0)              963        971
HT40/LGI      MCS9        48.2       99.9      100.0         0(  0)              975        982
HT40/LGI      MCS10       68.0       99.9      100.0         0(  0)              978        985
HT40/LGI      MCS11       84.0      100.0      100.0         0(  0)             1014       1014
HT40/LGI      MCS12      114.9       99.9      100.0         0(  0)             1049       1051
HT40/LGI      MCS13      133.3       99.9      100.0         0(  0)              970        972
HT40/LGI      MCS14      148.0       99.2      100.0         0(  0)              983        987
HT40/LGI    t MCS15      158.7       99.9      100.0         0(  0)             8419       8533
HT40/SGI      MCS0        14.9       99.9      100.0         0(  0)              973        977
HT40/SGI      MCS1        28.7       99.9      100.0         0(  0)              993        995
HT40/SGI      MCS2        41.1       99.9      100.0         0(  0)             1003       1008
HT40/SGI      MCS3        52.9       99.9      100.0         0(  0)             1013       1018
HT40/SGI      MCS4        74.0       99.9      100.0         1(  1)             1027       1032
HT40/SGI      MCS5        90.9      100.0      100.0         0(  0)              978        978
HT40/SGI      MCS6       101.0       99.9      100.0         0(  0)              973        974
HT40/SGI    P MCS7       108.6       99.9      100.0         0(  0)              974        976
HT40/SGI      MCS8        28.7       99.9      100.0         0(  0)              978        983
HT40/SGI      MCS9        52.9       99.9      100.0         0(  0)              978        984
HT40/SGI      MCS10       74.0       99.9      100.0         0(  0)              994        998
HT40/SGI      MCS11       90.8       99.9      100.0         0(  0)             1017       1019
HT40/SGI      MCS12      123.4       99.9      100.0         0(  0)             1056       1066
HT40/SGI      MCS13      140.3       99.6      100.0         0(  0)              991        999
HT40/SGI      MCS14      158.7       99.9      100.0         0(  0)            18269      18402
HT40/SGI    T MCS15      165.2       99.1       98.7       553(560)         10865638   11039921
Total packet count:: ideal 10890621 lookaround 46906
Average A-MPDU length: 4.8
I am not sure where to find the short and long retry counts.
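(On kernels of this vintage, mac80211 may also expose per-station retry
counters next to rc_stats; whether these files exist depends on the
kernel build, so treat the paths as an assumption:

  cat /sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/*/tx_retry_count
  cat /sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/*/tx_retry_failed
)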
In conclusion, I discovered that during TX the CPU load is almost 100%.
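A quick way to confirm a CPU-bound transmitter on a board like this,
using only generic tools (nothing here is ath9k-specific):

  top -d 1              # watch %idle and softirq load while iperf runs
  cat /proc/interrupts  # check whether the ath9k interrupt count is racing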
Thanks again,
-Stephen
* [ath9k-devel] Very large datagram loss
From: abhinav narain @ 2012-08-06 16:15 UTC
To: ath9k-devel
Can someone please tell me what this throughput represents? Is it at
the PHY layer?
> type        rate    throughput  ewma prob  this prob  this succ/attempt  success  attempts
> HT20/LGI    MCS0         6.6       99.9      100.0         0(  0)            989       990
> HT20/LGI    MCS1        13.0       99.9      100.0         0(  0)            989       991
>
thanks,
abhinav