* TCP communication for raw image transmission
@ 2012-01-02 15:21 Jean-Michel Hautbois
  2012-01-02 15:59 ` Eric Dumazet
  2012-01-04  9:57 ` David Laight
  0 siblings, 2 replies; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-02 15:21 UTC (permalink / raw)
  To: netdev

Hi all,

I am currently working on a way to send raw images (~400x300 YUV
4:2:0) at a relatively high speed (raw rather than compressed images,
due to specific codec issues).
This would represent about 57 Mbit/s on a 100 Mbps Ethernet link.
This link is direct, nothing else will pass through it.

My question is about performance and efficiency. Here is what I
am thinking about :
- I have the image on one computer at a known memory address
- I want to transmit this picture using TCP packets with the rule
"1 row = 1 packet", maybe "2 rows = 1 packet"
- All these packets have to go over an Ethernet link and will be
received on another computer starting at a known memory address
Something like this :
http://imageshack.us/photo/my-images/15/tcpmemory.png/

I was thinking about using a TCP socket (of course) with the sendfile
syscall on the sender side and the splice syscall on the receiver
side, but I don't know if this is the best (and fastest) way to do it ?
I would like to avoid memory copies as much as possible, as the
systems on which it will run are embedded ARM computers.
TCP is chosen in order to get ordered packets, but maybe I am wrong ?
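
To make this concrete, here is a rough, untested sketch of what I have
in mind on the sender side (the frame is assumed to already sit in a
user buffer; since sendfile() needs a file descriptor as its source,
going from plain memory to the socket would mean vmsplice() into a
pipe and then splice() from the pipe into the TCP socket) :

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>

/* Rough sketch: push one frame from a user buffer into a connected
 * TCP socket without an extra copy, via vmsplice() + splice().
 * Minimal error handling; the buffer must not be reused before the
 * data has actually left the pipe and the NIC. */
static int send_frame(int sock, const char *img, size_t img_size)
{
	int pipefd[2];
	size_t off = 0;

	if (pipe(pipefd) < 0)
		return -1;

	while (off < img_size) {
		struct iovec iov = {
			.iov_base = (void *)(img + off),
			.iov_len  = img_size - off,
		};
		/* map the user pages into the pipe (no copy) */
		ssize_t n = vmsplice(pipefd[1], &iov, 1, 0);
		if (n <= 0)
			break;
		/* move the pipe contents into the socket */
		for (ssize_t left = n; left > 0; ) {
			ssize_t m = splice(pipefd[0], NULL, sock, NULL,
					   left, SPLICE_F_MORE);
			if (m <= 0)
				goto out;
			left -= m;
		}
		off += n;
	}
out:
	close(pipefd[0]);
	close(pipefd[1]);
	return off == img_size ? 0 : -1;
}

On the receiver I could splice() from the socket into a pipe, but
landing the data at a known memory address would still need a read()
into that buffer, so I am not sure how much copying I really save
there.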

Thanks for any clues you can give !
Regards,
JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 15:21 TCP communication for raw image transmission Jean-Michel Hautbois
@ 2012-01-02 15:59 ` Eric Dumazet
  2012-01-02 16:08   ` Jean-Michel Hautbois
  2012-01-04  9:57 ` David Laight
  1 sibling, 1 reply; 29+ messages in thread
From: Eric Dumazet @ 2012-01-02 15:59 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: netdev

Le lundi 02 janvier 2012 à 16:21 +0100, Jean-Michel Hautbois a écrit :
> Hi all,
> 
> I am currently working on a way to send raw images (~400x300 YUV
> 4:2:0) at a relatively high speed (raw rather than compressed images,
> due to specific codec issues).
> This would represent about 57 Mbit/s on a 100 Mbps Ethernet link.
> This link is direct, nothing else will pass through it.
> 
> My question is about performance and efficiency. Here is what I
> am thinking about :
> - I have the image on one computer at a known memory address
> - I want to transmit this picture using TCP packets with the rule
> "1 row = 1 packet", maybe "2 rows = 1 packet"
> - All these packets have to go over an Ethernet link and will be
> received on another computer starting at a known memory address
> Something like this :
> http://imageshack.us/photo/my-images/15/tcpmemory.png/
> 
> I was thinking about using a TCP socket (of course) with the sendfile
> syscall on the sender side and the splice syscall on the receiver
> side, but I don't know if this is the best (and fastest) way to do it ?
> I would like to avoid memory copies as much as possible, as the
> systems on which it will run are embedded ARM computers.
> TCP is chosen in order to get ordered packets, but maybe I am wrong ?
> 
> Thanks for any clues you can give !
> Regards,
> JM
> --

Do you really need the full image, or are some missing lines allowed ?

What is the length in bytes of a line ?

A TCP stream and your bandwidth requirements would work only if you
can guarantee not a single packet is lost.

Using UDP (RTP) would allow you to respect any time limits at the
sender, while allowing some drops on the network.

If you use a direct link, packets won't be re-ordered.

Is your ARM platform so slow it cannot afford copying 10 Mbytes per
second into skbs ?

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 15:59 ` Eric Dumazet
@ 2012-01-02 16:08   ` Jean-Michel Hautbois
  2012-01-02 16:29     ` Eric Dumazet
  0 siblings, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-02 16:08 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

2012/1/2 Eric Dumazet <eric.dumazet@gmail.com>:
> Le lundi 02 janvier 2012 à 16:21 +0100, Jean-Michel Hautbois a écrit :
>> Hi all,
>>
>> I am currently working on a way to send raw images (~400x300 YUV
>> 4:2:0) at a relatively high speed (raw rather than compressed images,
>> due to specific codec issues).
>> This would represent about 57 Mbit/s on a 100 Mbps Ethernet link.
>> This link is direct, nothing else will pass through it.
>>
>> My question is about performance and efficiency. Here is what I
>> am thinking about :
>> - I have the image on one computer at a known memory address
>> - I want to transmit this picture using TCP packets with the rule
>> "1 row = 1 packet", maybe "2 rows = 1 packet"
>> - All these packets have to go over an Ethernet link and will be
>> received on another computer starting at a known memory address
>> Something like this :
>> http://imageshack.us/photo/my-images/15/tcpmemory.png/
>>
>> I was thinking about using a TCP socket (of course) with the sendfile
>> syscall on the sender side and the splice syscall on the receiver
>> side, but I don't know if this is the best (and fastest) way to do it ?
>> I would like to avoid memory copies as much as possible, as the
>> systems on which it will run are embedded ARM computers.
>> TCP is chosen in order to get ordered packets, but maybe I am wrong ?
>>
>> Thanks for any clues you can give !
>> Regards,
>> JM
>> --
>
> Do you really need full image, or are some missing lines allowed ?
>
> What is the length in bytes for a line ?

I can't accept any missing lines, and the line length in bytes is
still to be decided, but I think it would be something like 672 bytes.
This is why I would put two lines per packet.

> TCP stream and your bandwidth requirements would work only if you
> guarantee not a single packet loss.
>
> Using UDP (RTP) would allow you to respect any time limits at the
> sender, yet allowing some drops on the network.

Yes, and again, I don't want any missing lines :).

> If you use a direct link, packets won't be re-ordered.
>
> Is your ARM platform so slow it cannot afford copying 10 Mbytes per
> second in skb ?

I don't really know what it can do, but it also has to do encoding
(even if it is a SoC with dedicated devices, memory still has to be
used).
The main problem is that I don't have the two boards with me today,
and even if I can prototype on x86, I can't draw conclusions until I
test on the real HW.
This is also why I want to prototype my best bet on the first try :).
And this is mainly a networking/memory problem.

Thanks for answering.
JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 16:08   ` Jean-Michel Hautbois
@ 2012-01-02 16:29     ` Eric Dumazet
  2012-01-02 16:40       ` Jean-Michel Hautbois
  0 siblings, 1 reply; 29+ messages in thread
From: Eric Dumazet @ 2012-01-02 16:29 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: netdev

Le lundi 02 janvier 2012 à 17:08 +0100, Jean-Michel Hautbois a écrit :

> Yes, and again, I don't want any missing lines :).
> 

Go for tcp :)


> I don't really know what it can do, but it has to do encoding (even if
> it is a SoC, with dedicated devices, memory has to be used).
> The main problem I have, is that I don't have the two boards with me
> today, and even if I can prototype on x86, I can't conclude as far as
> I don't test on the real HW.

That's going to be tough. You could at least use one board and test
whether it can send 100 Mbit/s using netperf (to an x86 target), and
check CPU usage.

> This is also why I want to prototype my best bet in the first try :).
> And this is mainly a networking/memory problem.

If the 'no packet loss happens in the network' assumption holds, then
a plain sendfile() [ or splice() ] will work.

Limiting each TCP frame to 1 or 2 lines of your image comes down to
limiting the MSS on your TCP session (setsockopt( ... TCP_MAXSEG ))
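
For example (untested snippet ; TCP_MAXSEG has to be set before
connect(), and the kernel may still round or clamp the value) :

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int open_low_mss_socket(void)
{
	int sock = socket(AF_INET, SOCK_STREAM, 0);
	int mss = 1344;	/* e.g. two 672-byte lines per segment */

	/* must be done before connect(); the kernel may round/clamp it */
	setsockopt(sock, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss));
	return sock;
}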

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 16:29     ` Eric Dumazet
@ 2012-01-02 16:40       ` Jean-Michel Hautbois
  2012-01-02 16:52         ` Eric Dumazet
  0 siblings, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-02 16:40 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

>> I don't really know what it can do, but it has to do encoding (even if
>> it is a SoC, with dedicated devices, memory has to be used).
>> The main problem I have, is that I don't have the two boards with me
>> today, and even if I can prototype on x86, I can't conclude as far as
>> I don't test on the real HW.
>
> That's going to be tough. You could at least use one board and test
> whether it can send 100 Mbit/s using netperf (to an x86 target), and
> check CPU usage.

Mmmh, using netperf you would like to know what the client (my ARM
board) can do ?
How would you test it ? I can have an ARM board on one side, and the
x86 on the other...

>> This is also why I want to prototype my best bet in the first try :).
>> And this is mainly a networking/memory problem.
>
> If the 'no packet loss happens in the network' assumption holds, then
> a plain sendfile() [ or splice() ] will work.
>
> Limiting each TCP frame to 1 or 2 lines of your image comes down to
> limiting the MSS on your TCP session (setsockopt( ... TCP_MAXSEG ))

Yes, the idea of limiting a TCP frame to 2 lines is to avoid
fragmenting. So, with an MTU of 1500 and MSS=1460, I can put 2 lines
of <= 730 bytes each. I will probably have ~700 bytes per line.

JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 16:40       ` Jean-Michel Hautbois
@ 2012-01-02 16:52         ` Eric Dumazet
  2012-01-02 16:52           ` Eric Dumazet
  2012-01-03 22:35           ` Rick Jones
  0 siblings, 2 replies; 29+ messages in thread
From: Eric Dumazet @ 2012-01-02 16:52 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: netdev

Le lundi 02 janvier 2012 à 17:40 +0100, Jean-Michel Hautbois a écrit :

> Mmmh, using netperf you would like to know what the client (my ARM
> board) can do ?
> How would you test it ? I can have an ARM board on one side, and the
> x86 on the other...
> 

x86> netserver &
arm> netperf -H <arm_ip_address> -l 60 -t TCP_STREAM

1) check CPU usage on <arm> while the test is running
(for example : vmstat 1 )
2) check the bandwidth of the test run


> Yes, the idea of limiting tcp frame to 2 lines is to avoid
> fragmenting. So, with a MTU at 1500, MSS=1460, I can put 2 lines <=
> 730bytes. I will probably have ~700bytes.

There is no fragmentation with TCP.

You should just let the TCP stack use the best MSS (MTU - 40 or so),
to lower CPU usage on both sender and receiver.

Also, check whether you can use a big MTU on your dedicated Ethernet
link to fit 4 lines per TCP frame.
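
If the NICs and driver on both ends accept it, that is only a matter
of raising the interface MTU (e.g. "ip link set dev eth0 mtu 3000"),
or programmatically with something like this untested sketch :

#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int set_mtu(const char *ifname, int mtu)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0); /* any socket works for the ioctl */
	struct ifreq ifr;
	int ret;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_mtu = mtu;	/* e.g. 3000 : ~4 lines of 672 bytes plus headers */
	ret = ioctl(fd, SIOCSIFMTU, &ifr);	/* needs CAP_NET_ADMIN */
	close(fd);
	return ret;
}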

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 16:52         ` Eric Dumazet
@ 2012-01-02 16:52           ` Eric Dumazet
  2012-01-02 17:04             ` Jean-Michel Hautbois
  2012-01-02 17:20             ` Jean-Michel Hautbois
  2012-01-03 22:35           ` Rick Jones
  1 sibling, 2 replies; 29+ messages in thread
From: Eric Dumazet @ 2012-01-02 16:52 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: netdev

Le lundi 02 janvier 2012 à 17:52 +0100, Eric Dumazet a écrit :
> Le lundi 02 janvier 2012 à 17:40 +0100, Jean-Michel Hautbois a écrit :
> 
> > Mmmh, using netperf you would like to know what the client (my ARM
> > board) can do ?
> > How would you test it ? I can have an ARM board on one side, and the
> > x86 on the other...
> > 
> 
> x86> netserver &
> arm> netperf -H <arm_ip_address> -l 60 -t TCP_STREAM
> 

well it should be :

arm> netperf -H <x86_ip_address> -l 60 -t TCP_STREAM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 16:52           ` Eric Dumazet
@ 2012-01-02 17:04             ` Jean-Michel Hautbois
  2012-01-02 17:20             ` Jean-Michel Hautbois
  1 sibling, 0 replies; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-02 17:04 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

2012/1/2 Eric Dumazet <eric.dumazet@gmail.com>:
> Le lundi 02 janvier 2012 à 17:52 +0100, Eric Dumazet a écrit :
>> Le lundi 02 janvier 2012 à 17:40 +0100, Jean-Michel Hautbois a écrit :
>>
>> > Mmmh, using netperf you would like to know what the client (my ARM
>> > board) can do ?
>> > How would you test it ? I can have an ARM board on one side, and the
>> > x86 on the other...
>> >
>>
>> x86> netserver &
>> arm> netperf -H <arm_ip_address> -l 60 -t TCP_STREAM
>>
>
> well it should be :
>
> arm> netperf -H <x86_ip_address> -l 60 -t TCP_STREAM
>

OK, I will now have to wait for the netperf servers to be up... :|.

JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 16:52           ` Eric Dumazet
  2012-01-02 17:04             ` Jean-Michel Hautbois
@ 2012-01-02 17:20             ` Jean-Michel Hautbois
  2012-01-02 17:41               ` Eric Dumazet
  1 sibling, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-02 17:20 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

2012/1/2 Eric Dumazet <eric.dumazet@gmail.com>:
> Le lundi 02 janvier 2012 à 17:52 +0100, Eric Dumazet a écrit :
>> Le lundi 02 janvier 2012 à 17:40 +0100, Jean-Michel Hautbois a écrit :
>>
>> > Mmmh, using netperf you would like to know what the client (my ARM
>> > board) can do ?
>> > How would you test it ? I can have an ARM board on one side, and the
>> > x86 on the other...
>> >
>>
>> x86> netserver &
>> arm> netperf -H <arm_ip_address> -l 60 -t TCP_STREAM
>>
>
> well it should be :
>
> arm> netperf -H <x86_ip_address> -l 60 -t TCP_STREAM

Here we go...

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.01      21.79

It is not very good, AFAIK.
CPU usage is between 17 and 35%.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 17:20             ` Jean-Michel Hautbois
@ 2012-01-02 17:41               ` Eric Dumazet
  2012-01-02 18:00                 ` Jean-Michel Hautbois
  0 siblings, 1 reply; 29+ messages in thread
From: Eric Dumazet @ 2012-01-02 17:41 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: netdev

Le lundi 02 janvier 2012 à 18:20 +0100, Jean-Michel Hautbois a écrit :

> Here we go...
> 
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
> 
>  87380  16384  16384    60.01      21.79
> 
> It is not very good, AFAIK.
> CPU usage is between 17 and 35%.

Ouch...

Better to find out what is happening before even starting to code
anything...

Check out "netstat -s" on both sender/receiver.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 17:41               ` Eric Dumazet
@ 2012-01-02 18:00                 ` Jean-Michel Hautbois
  2012-01-02 18:02                   ` Jean-Michel Hautbois
  0 siblings, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-02 18:00 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

2012/1/2 Eric Dumazet <eric.dumazet@gmail.com>:
> Le lundi 02 janvier 2012 à 18:20 +0100, Jean-Michel Hautbois a écrit :
>
>> Here we go...
>>
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>
>>  87380  16384  16384    60.01      21.79
>>
>> It is not very good, AFAIK.
>> CPU usage is between 17 and 35%.
>
> Ouch...
>
> Better find out what is happening before even starting coding
> anything...
>
> checkout "netstat -s" on both sender/receiver

OK, here are the results :
netserver is launched on the x86 side, and the ARM board and x86
machine are connected over Ethernet via a switch. There is no other
traffic on this network.

On ARM side :
netperf -H 192.168.1.10 -l 60 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.1.10 (192.168.1.10) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.02      22.01
/ # netstat -s
Ip:
    67326 total packets received
    4 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    67322 incoming packets delivered
    133712 requests sent out
Icmp:
    0 ICMP messages received
    0 input ICMP message failed.
    ICMP input histogram:
    0 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
Tcp:
    4 active connections openings
    0 passive connection openings
    0 failed connection attempts
    0 connection resets received
    0 connections established
    67167 segments received
    133712 segments send out
    0 segments retransmited
    0 bad segments received.
    0 resets sent
Udp:
    0 packets received
    0 packets to unknown port received.
    0 packet receive errors
    0 packets sent
    0 receive buffer errors
    0 send buffer errors
UdpLite:
TcpExt:
    2 TCP sockets finished time wait in fast timer
    2 delayed acks sent
    18 packets directly queued to recvmsg prequeue.
    3 packet headers predicted
    127 acknowledgments not containing data payload received
    67026 predicted acknowledgments
IpExt:
    InBcastPkts: 155
    InOctets: 3511540
    OutOctets: 199739352
    InBcastOctets: 16072

On the x86 side :
netperf -H 192.168.1.10 -l 60 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.1.10 (192.168.1.10) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    60.02      22.01
/ # netstat -s
Ip:
    67326 total packets received
    4 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    67322 incoming packets delivered
    133712 requests sent out
Icmp:
    0 ICMP messages received
    0 input ICMP message failed.
    ICMP input histogram:
    0 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
Tcp:
    4 active connections openings
    0 passive connection openings
    0 failed connection attempts
    0 connection resets received
    0 connections established
    67167 segments received
    133712 segments send out
    0 segments retransmited
    0 bad segments received.
    0 resets sent
Udp:
    0 packets received
    0 packets to unknown port received.
    0 packet receive errors
    0 packets sent
    0 receive buffer errors
    0 send buffer errors
UdpLite:
TcpExt:
    2 TCP sockets finished time wait in fast timer
    2 delayed acks sent
    18 packets directly queued to recvmsg prequeue.
    3 packet headers predicted
    127 acknowledgments not containing data payload received
    67026 predicted acknowledgments
IpExt:
    InBcastPkts: 155
    InOctets: 3511540
    OutOctets: 199739352
    InBcastOctets: 16072


JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 18:00                 ` Jean-Michel Hautbois
@ 2012-01-02 18:02                   ` Jean-Michel Hautbois
  2012-01-03  7:36                     ` Jean-Michel Hautbois
  0 siblings, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-02 18:02 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

2012/1/2 Jean-Michel Hautbois <jhautbois@gmail.com>:
> 2012/1/2 Eric Dumazet <eric.dumazet@gmail.com>:
>> Le lundi 02 janvier 2012 à 18:20 +0100, Jean-Michel Hautbois a écrit :
>>
>>> Here we go...
>>>
>>> Recv   Send    Send
>>> Socket Socket  Message  Elapsed
>>> Size   Size    Size     Time     Throughput
>>> bytes  bytes   bytes    secs.    10^6bits/sec
>>>
>>>  87380  16384  16384    60.01      21.79
>>>
>>> It is not very good, AFAIK.
>>> CPU usage is between 17 and 35%.
>>
>> Ouch...
>>
>> Better find out what is happening before even starting coding
>> anything...
>>
>> checkout "netstat -s" on both sender/receiver
>
> OK, here are the results :
> netserver is launched on the x86 side and the ARM and x86 are
> connected through ethernet network via a switch. There is nobody on
> this network.
>
> On ARM side :
> netperf -H 192.168.1.10 -l 60 -t TCP_STREAM
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.1.10 (192.168.1.10) port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    60.02      22.01
> / # netstat -s
> Ip:
>    67326 total packets received
>    4 with invalid addresses
>    0 forwarded
>    0 incoming packets discarded
>    67322 incoming packets delivered
>    133712 requests sent out
> Icmp:
>    0 ICMP messages received
>    0 input ICMP message failed.
>    ICMP input histogram:
>    0 ICMP messages sent
>    0 ICMP messages failed
>    ICMP output histogram:
> Tcp:
>    4 active connections openings
>    0 passive connection openings
>    0 failed connection attempts
>    0 connection resets received
>    0 connections established
>    67167 segments received
>    133712 segments send out
>    0 segments retransmited
>    0 bad segments received.
>    0 resets sent
> Udp:
>    0 packets received
>    0 packets to unknown port received.
>    0 packet receive errors
>    0 packets sent
>    0 receive buffer errors
>    0 send buffer errors
> UdpLite:
> TcpExt:
>    2 TCP sockets finished time wait in fast timer
>    2 delayed acks sent
>    18 packets directly queued to recvmsg prequeue.
>    3 packet headers predicted
>    127 acknowledgments not containing data payload received
>    67026 predicted acknowledgments
> IpExt:
>    InBcastPkts: 155
>    InOctets: 3511540
>    OutOctets: 199739352
>    InBcastOctets: 16072
>
> On the x86 side :
> netperf -H 192.168.1.10 -l 60 -t TCP_STREAM
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.1.10 (192.168.1.10) port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    60.02      22.01
> / # netstat -s
> Ip:
>    67326 total packets received
>    4 with invalid addresses
>    0 forwarded
>    0 incoming packets discarded
>    67322 incoming packets delivered
>    133712 requests sent out
> Icmp:
>    0 ICMP messages received
>    0 input ICMP message failed.
>    ICMP input histogram:
>    0 ICMP messages sent
>    0 ICMP messages failed
>    ICMP output histogram:
> Tcp:
>    4 active connections openings
>    0 passive connection openings
>    0 failed connection attempts
>    0 connection resets received
>    0 connections established
>    67167 segments received
>    133712 segments send out
>    0 segments retransmited
>    0 bad segments received.
>    0 resets sent
> Udp:
>    0 packets received
>    0 packets to unknown port received.
>    0 packet receive errors
>    0 packets sent
>    0 receive buffer errors
>    0 send buffer errors
> UdpLite:
> TcpExt:
>    2 TCP sockets finished time wait in fast timer
>    2 delayed acks sent
>    18 packets directly queued to recvmsg prequeue.
>    3 packet headers predicted
>    127 acknowledgments not containing data payload received
>    67026 predicted acknowledgments
> IpExt:
>    InBcastPkts: 155
>    InOctets: 3511540
>    OutOctets: 199739352
>    InBcastOctets: 16072
>
>
Sorry, bad copy/paste...
Here is the x86 side :
netstat -s
Ip:
    134157 total packets received
    14 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    134079 incoming packets delivered
    67446 requests sent out
    84 outgoing packets dropped
    2 dropped because of missing route
Icmp:
    86 ICMP messages received
    0 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 86
    92 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 92
IcmpMsg:
        InType3: 86
        OutType3: 92
Tcp:
    0 active connections openings
    4 passive connection openings
    0 failed connection attempts
    0 connection resets received
    0 connections established
    133712 segments received
    67167 segments send out
    0 segments retransmited
    0 bad segments received.
    0 resets sent
Udp:
    294 packets received
    92 packets to unknown port received.
    0 packet receive errors
    187 packets sent
    0 receive buffer errors
    0 send buffer errors
UdpLite:
TcpExt:
    1 delayed acks sent
    133689 packets directly queued to recvmsg prequeue.
    192786040 bytes directly received in process context from prequeue
    0 packet headers predicted
    133689 packets header predicted and directly queued to user
    14 acknowledgments not containing data payload received
IpExt:
    InBcastPkts: 140
    OutBcastPkts: 51
    InOctets: 199778186
    OutOctets: 3519013
    InBcastOctets: 14912
    OutBcastOctets: 6639

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 18:02                   ` Jean-Michel Hautbois
@ 2012-01-03  7:36                     ` Jean-Michel Hautbois
  2012-01-03  8:06                       ` Eric Dumazet
  0 siblings, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-03  7:36 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

2012/1/2 Jean-Michel Hautbois <jhautbois@gmail.com>:
> 2012/1/2 Jean-Michel Hautbois <jhautbois@gmail.com>:
>> 2012/1/2 Eric Dumazet <eric.dumazet@gmail.com>:
>>> Le lundi 02 janvier 2012 à 18:20 +0100, Jean-Michel Hautbois a écrit :
>>>
>>>> Here we go...
>>>>
>>>> Recv   Send    Send
>>>> Socket Socket  Message  Elapsed
>>>> Size   Size    Size     Time     Throughput
>>>> bytes  bytes   bytes    secs.    10^6bits/sec
>>>>
>>>>  87380  16384  16384    60.01      21.79
>>>>
>>>> It is not very good, AFAIK.
>>>> CPU usage is between 17 and 35%.
>>>
>>> Ouch...
>>>
>>> Better find out what is happening before even starting coding
>>> anything...
>>>
>>> checkout "netstat -s" on both sender/receiver
>>

After looking at the results, I can't see really bad values or
anything which would explain such behaviour...
Maybe there is some limit in the driver ? My board is a Leopardboard
(DM368).
Any idea ?

JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-03  7:36                     ` Jean-Michel Hautbois
@ 2012-01-03  8:06                       ` Eric Dumazet
  2012-01-05  8:50                         ` Jean-Michel Hautbois
  0 siblings, 1 reply; 29+ messages in thread
From: Eric Dumazet @ 2012-01-03  8:06 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: netdev

Le mardi 03 janvier 2012 à 08:36 +0100, Jean-Michel Hautbois a écrit :

> After looking to the results, I can't see really bad values or
> anything which would explain such a behaviour...

No TCP problems, so your board seems to be slow.

> Maybe is there some limits in the driver ? My board is a leopardboard (dm368).
> Any idea ?

Just to get an idea of the inter-frame timings, could you post the
first ~200 frames captured on the x86 side ?

tcpdump -p -n -s 0 -i eth0 host arm_ip -c 200

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-02 16:52         ` Eric Dumazet
  2012-01-02 16:52           ` Eric Dumazet
@ 2012-01-03 22:35           ` Rick Jones
  2012-01-05  9:13             ` Jean-Michel Hautbois
  1 sibling, 1 reply; 29+ messages in thread
From: Rick Jones @ 2012-01-03 22:35 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: Eric Dumazet, netdev

On 01/02/2012 08:52 AM, Eric Dumazet wrote:
> Le lundi 02 janvier 2012 à 17:40 +0100, Jean-Michel Hautbois a écrit :
>
>> Mmmh, using netperf you would like to know what the client (my ARM
>> board) can do ?
>> How would you test it ? I can have an ARM board on one side, and the
>> x86 on the other...
>>
>
> x86>  netserver&
> arm>  netperf -H<arm_ip_address>  -l 60 -t TCP_STREAM
>
> 1) check cpu usage on<arm>  while test is running
> (for example : vmstat 1 )
> 2) check bandwith of test run

The "&" at the end of the netserver command is (should be) redundant - 
netserver will by default daemonize itself.

I would suggest amending the netperf command line to something more like:

netperf -H <x86IP> -c -l 60 -t TCP_STREAM -- -m <dataofoneline> -D

Which will cause netperf to make "send" calls of <dataofoneline> bytes 
(substitute dataoftwolines if you like).

The -D is to disable Nagle, which you will need to do if you really 
really want to have only one line of the picture per TCP segment.  Even 
then though, the streaming nature of TCP means you don't "really" know 
that it will go out that way onto the network.  Otherwise, leave it off 
and TCP will attempt to aggregate the sends into maximum segment sized 
segments.
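
(In the application itself, disabling Nagle would simply be the 
following - untested, and "sock" here is assumed to be your connected 
data socket:)

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void disable_nagle(int sock)
{
	int one = 1;

	/* let each send() of one line go out immediately instead of
	 * being coalesced by Nagle */
	setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}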

The -c is to get netperf to report CPU utilization for the local/netperf 
side.

As Eric suggests, you really should just hand as much data to TCP at 
one time as you reasonably can within the constraints of the rest of 
your application.

Skipping ahead a little, on the netperf side at least you might be well 
served to do:

netstat -s > before
netperf ....
netstat -s > after
beforeafter before after > delta

and then run "before" and "after" through "beforeafter" from 
ftp://ftp.cup.hp.com/dist/networking/tools or something similar - I 
think someone (here on netdev?) created a more robust version at some 
point.  Alas, beforeafter will sometimes get confused if a new statistic 
appears in after that was not present in before - the utility was 
initially developed on HP-UX, the netstat of which does not have the 
"only display non-zero values" mindset.

One other thing - your 100 Mbit/s link - is that full duplex or half? 
If half you may need to be concerned with "capture effect."  Briefly, it 
was a CSMA/CD behaviour which emerged once systems were fast enough to 
saturate the Ethernet link - the transmitting station would "capture" 
the wire (half-duplex) and the receiving station, which was still trying 
to transmit ACKs would end-up backing-off (at the PHY/data link).  Then 
the transmitter would stop transmitting, having run-out of window, but 
the receiver's NIC would still be backed-off and so the link would go 
idle.  The workaround was to either have a rather large or rather small 
(8KB, IIRC but best do some web searchings - this topic goes back to the 
1990s) TCP window.
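
(If you go the small-window route, the application can pin it with 
something like the untested snippet below, applied to the receiving 
socket before connect()/listen() so the window is negotiated 
accordingly:)

#include <sys/socket.h>

void set_small_window(int sock)
{
	int rcvbuf = 8 * 1024;	/* deliberately small receive window */

	/* must be set before connect()/listen() to take effect on the
	 * advertised window */
	setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
}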

happy benchmarking,

rick jones

^ permalink raw reply	[flat|nested] 29+ messages in thread

* RE: TCP communication for raw image transmission
  2012-01-02 15:21 TCP communication for raw image transmission Jean-Michel Hautbois
  2012-01-02 15:59 ` Eric Dumazet
@ 2012-01-04  9:57 ` David Laight
  2012-01-04 11:16   ` Jean-Michel Hautbois
  1 sibling, 1 reply; 29+ messages in thread
From: David Laight @ 2012-01-04  9:57 UTC (permalink / raw)
  To: Jean-Michel Hautbois, netdev

 
> I am currently working on a way to send raw images (~400x300 YUV
> 4:2:0) at a relatively high speed (raw rather than compressed images,
> due to specific codec issues).
> This would represent about 57 Mbit/s on a 100 Mbps Ethernet link.
> This link is direct, nothing else will pass through it.
...
> I was thinking about using TCP socket (of course) ...

I'm not sure you get any benefit from using TCP (over UDP).
If any packets are lost (which on a dedicated network they
shouldn't be) then you probably can't stand the retransmit
delays - especially if the losses are systemic and repeated,
in which case you'll occasionally end up with very slow
throughput (when not enough packets are sent for the
out-of-sequence receives to request retransmissions).

The TCP stack also has a series of 'features' that will
reduce the attainable throughput for this kind of traffic:
1) Nagle - easily turned off.
2) Slow start - only a global flag
3) Delayed acks - make 'slow start' more of a problem.

The code paths for UDP (sender and receiver) will be
much shorter, although you'll probably need to ensure
there is adequate buffering.
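
As a rough, untested illustration only - the sender then boils down
to one sendto() per line (or two), with an enlarged send buffer and a
small sequence number in front of each datagram so the receiver can
spot drops; all the names below are made up:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* illustrative only: one datagram per image line, prefixed by a
 * sequence number so the receiver can place each line and detect
 * missing ones; line_bytes must fit in a single datagram */
static void send_frame_udp(int sock, const struct sockaddr_in *dst,
			   const uint8_t *img, int lines, int line_bytes)
{
	int sndbuf = 512 * 1024;
	uint8_t pkt[2 + 1472];	/* fits in one 1500-byte MTU datagram */

	setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

	for (int line = 0; line < lines; line++) {
		uint16_t seq = htons((uint16_t)line);

		memcpy(pkt, &seq, sizeof(seq));
		memcpy(pkt + 2, img + (size_t)line * line_bytes, line_bytes);
		sendto(sock, pkt, 2 + line_bytes, 0,
		       (const struct sockaddr *)dst, sizeof(*dst));
	}
}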

Also, it should be fairly easy to directly send the UDP
packets from within the kernel, completely bypassing the
normal UDP+IP parts of the network stack!
I know some people have done that completely in hardware
on some FPGA systems with slow CPUs.

	David

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-04  9:57 ` David Laight
@ 2012-01-04 11:16   ` Jean-Michel Hautbois
  0 siblings, 0 replies; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-04 11:16 UTC (permalink / raw)
  To: David Laight; +Cc: netdev

2012/1/4 David Laight <David.Laight@aculab.com>:
>
>> I am currently working on a way to send raw images (~400x300 YUV
>> 4:2:0) at a relatively high speed (raw rather than compressed images,
>> due to specific codec issues).
>> This would represent about 57 Mbit/s on a 100 Mbps Ethernet link.
>> This link is direct, nothing else will pass through it.
> ...
>> I was thinking about using TCP socket (of course) ...
>
> I'm not sure you get any benefit from using TCP (over UDP).
> If any packets are lost (which on a dedicated network they
> shouldn't be) then you probably can't stand the retransmit
> delays - especially if the losses are systemic and repeated,
> in which case you'll occasionally end up with very slow
> throughput (when not enough packets are sent for the
> out-of-sequence receives to request retransmissions).

UDP is interesting for the fast path (you mention it later), but it
has some drawbacks, one of them being packet loss.
Today it is a dedicated link, but maybe I will also use a switch, and
then the problem would exist.
I can't say (I will try tomorrow) whether this would really be more
efficient on my board, BTW.
If this is a HW issue, then UDP or TCP will be limited in the same way.

> The TCP stack also has a series of 'features' that will
> reduce the attainable throughput for this kind of traffic:
> 1) Nagle - easily turned off.
> 2) Slow start - only a global flag
> 3) Delayed acks - make 'slow start' more of a problem.
>
> The code paths for UDP (sender and receiver) will be
> much shorter, although you'll probably need to ensure
> there is adequate buffering.
>
> Also, it should be fairly easy to directly send the UDP
> packets from within the kernel, completely bypassing the
> normal UDP+IP parts of the network stack!
> I know some people have done that completely in hardware
> on some FPGA systems with slow cpus.

You mean using a layer 2 protocol with a specific ethertype and raw
sockets, for instance ?
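
Something like this rough, untested sketch, I guess (0x88B5 is a
made-up value from the "local experimental" ethertype range) :

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>

#define IMG_ETHERTYPE 0x88B5	/* made-up, local experimental range */

int open_raw_image_socket(const char *ifname)
{
	/* needs CAP_NET_RAW */
	int sock = socket(AF_PACKET, SOCK_RAW, htons(IMG_ETHERTYPE));
	struct sockaddr_ll addr;

	memset(&addr, 0, sizeof(addr));
	addr.sll_family   = AF_PACKET;
	addr.sll_protocol = htons(IMG_ETHERTYPE);
	addr.sll_ifindex  = if_nametoindex(ifname);

	/* bind() so we only see our ethertype; with SOCK_RAW, sendto()
	 * then takes a complete Ethernet frame (dst MAC, src MAC,
	 * ethertype, payload) built by hand */
	bind(sock, (struct sockaddr *)&addr, sizeof(addr));
	return sock;
}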

JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-03  8:06                       ` Eric Dumazet
@ 2012-01-05  8:50                         ` Jean-Michel Hautbois
  0 siblings, 0 replies; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-05  8:50 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

2012/1/3 Eric Dumazet <eric.dumazet@gmail.com>:
> Le mardi 03 janvier 2012 à 08:36 +0100, Jean-Michel Hautbois a écrit :
>
>> After looking to the results, I can't see really bad values or
>> anything which would explain such a behaviour...
>
> no tcp problems, so your board seems to be slow
>
>> Maybe is there some limits in the driver ? My board is a leopardboard (dm368).
>> Any idea ?
>
> Just to have any idea of the inter frame timings, could you post the
> ~200 first frames taken on x86 ?
>
> tcpdump -p -n -s 0 -i eth0 host arm_ip -c 200

Here it is :
09:49:11.163207 IP 192.168.0.2.40234 > 192.168.0.1.12865: Flags [S],
seq 2206455606, win 5840, options [mss 1460,sackOK,TS val 4294951315
ecr 0,nop,wscale 1], length 0
09:49:11.163232 IP 192.168.0.1.12865 > 192.168.0.2.40234: Flags [S.],
seq 411452088, ack 2206455607, win 14480, options [mss 1460,sackOK,TS
val 156359 ecr 4294951315,nop,wscale 6], length 0
09:49:11.163827 IP 192.168.0.2.40234 > 192.168.0.1.12865: Flags [.],
ack 1, win 2920, options [nop,nop,TS val 4294951315 ecr 156359],
length 0
09:49:11.164443 IP 192.168.0.2.40234 > 192.168.0.1.12865: Flags [P.],
seq 1:257, ack 1, win 2920, options [nop,nop,TS val 4294951315 ecr
156359], length 256
09:49:11.164466 IP 192.168.0.1.12865 > 192.168.0.2.40234: Flags [.],
ack 257, win 243, options [nop,nop,TS val 156359 ecr 4294951315],
length 0
09:49:11.164494 IP 192.168.0.1.12865 > 192.168.0.2.40234: Flags [P.],
seq 1:257, ack 257, win 243, options [nop,nop,TS val 156359 ecr
4294951315], length 256
09:49:11.165126 IP 192.168.0.2.40234 > 192.168.0.1.12865: Flags [.],
ack 257, win 3456, options [nop,nop,TS val 4294951315 ecr 156359],
length 0
09:49:11.179437 IP 192.168.0.2.40234 > 192.168.0.1.12865: Flags [P.],
seq 257:513, ack 257, win 3456, options [nop,nop,TS val 4294951317 ecr
156359], length 256
09:49:11.218998 IP 192.168.0.1.12865 > 192.168.0.2.40234: Flags [.],
ack 513, win 260, options [nop,nop,TS val 156376 ecr 4294951317],
length 0
09:49:11.227409 IP 192.168.0.1.12865 > 192.168.0.2.40234: Flags [P.],
seq 257:513, ack 513, win 260, options [nop,nop,TS val 156378 ecr
4294951317], length 256
09:49:11.228895 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [S],
seq 2200327184, win 5840, options [mss 1460,sackOK,TS val 4294951322
ecr 0,nop,wscale 1], length 0
09:49:11.228911 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [S.],
seq 933609285, ack 2200327185, win 14480, options [mss 1460,sackOK,TS
val 156378 ecr 4294951322,nop,wscale 6], length 0
09:49:11.229513 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
ack 1, win 2920, options [nop,nop,TS val 4294951322 ecr 156378],
length 0
09:49:11.229915 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 1:1449, ack 1, win 2920, options [nop,nop,TS val 4294951322 ecr
156378], length 1448
09:49:11.229956 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 1449, win 272, options [nop,nop,TS val 156379 ecr 4294951322],
length 0
09:49:11.230168 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 1449:4345, ack 1, win 2920, options [nop,nop,TS val 4294951322 ecr
156378], length 2896
09:49:11.230203 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 4345, win 317, options [nop,nop,TS val 156379 ecr 4294951322],
length 0
09:49:11.230763 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 4345:5793, ack 1, win 2920, options [nop,nop,TS val 4294951322 ecr
156379], length 1448
09:49:11.230793 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 5793, win 362, options [nop,nop,TS val 156379 ecr 4294951322],
length 0
09:49:11.231015 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 5793:8689, ack 1, win 2920, options [nop,nop,TS val 4294951322 ecr
156379], length 2896
09:49:11.231045 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 8689, win 408, options [nop,nop,TS val 156379 ecr 4294951322],
length 0
09:49:11.231265 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 8689:11585, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156379], length 2896
09:49:11.231292 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 11585, win 453, options [nop,nop,TS val 156379 ecr 4294951322],
length 0
09:49:11.231514 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 11585:14481, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156379], length 2896
09:49:11.231554 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 14481, win 498, options [nop,nop,TS val 156379 ecr 4294951322],
length 0
09:49:11.232060 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 14481:16385, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156379], length 1904
09:49:11.232089 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 16385, win 543, options [nop,nop,TS val 156379 ecr 4294951322],
length 0
09:49:11.232586 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 16385:19281, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156379], length 2896
09:49:11.232617 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 19281, win 589, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.232838 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 19281:22177, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156379], length 2896
09:49:11.232866 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 22177, win 634, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.233113 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 22177:23625, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156379], length 1448
09:49:11.233140 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 23625, win 679, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.233362 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 23625:25073, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156379], length 1448
09:49:11.233388 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 25073, win 724, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.233612 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 25073:27969, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156379], length 2896
09:49:11.233641 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 27969, win 770, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.234202 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 27969:29417, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 1448
09:49:11.234225 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 29417, win 815, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.234455 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 29417:32769, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 3352
09:49:11.234483 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 32769, win 860, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.235042 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 32769:34217, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 1448
09:49:11.235067 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 34217, win 905, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.235293 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 34217:37113, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 2896
09:49:11.235318 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 37113, win 951, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.235551 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 37113:40009, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 2896
09:49:11.235577 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 40009, win 996, options [nop,nop,TS val 156380 ecr 4294951322],
length 0
09:49:11.235791 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 40009:41457, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 1448
09:49:11.235818 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 41457, win 1002, options [nop,nop,TS val 156381 ecr 4294951322],
length 0
09:49:11.236041 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 41457:42905, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 1448
09:49:11.236067 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 42905, win 1002, options [nop,nop,TS val 156381 ecr 4294951322],
length 0
09:49:11.236312 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 42905:44353, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 1448
09:49:11.236336 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 44353, win 1002, options [nop,nop,TS val 156381 ecr 4294951322],
length 0
09:49:11.236562 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 44353:47249, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 2896
09:49:11.236588 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 47249, win 1002, options [nop,nop,TS val 156381 ecr 4294951322],
length 0
09:49:11.236812 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 47249:49153, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156380], length 1904
09:49:11.236837 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 49153, win 1002, options [nop,nop,TS val 156381 ecr 4294951322],
length 0
09:49:11.237508 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 49153:50601, ack 1, win 2920, options [nop,nop,TS val 4294951322
ecr 156381], length 1448
09:49:11.237761 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 50601:53497, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156381], length 2896
09:49:11.237796 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 53497, win 1002, options [nop,nop,TS val 156381 ecr 4294951322],
length 0
09:49:11.238012 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 53497:56393, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156381], length 2896
09:49:11.238053 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 56393, win 1002, options [nop,nop,TS val 156381 ecr 4294951323],
length 0
09:49:11.238262 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 56393:59289, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156381], length 2896
09:49:11.238299 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 59289, win 1002, options [nop,nop,TS val 156381 ecr 4294951323],
length 0
09:49:11.238509 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 59289:60737, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156381], length 1448
09:49:11.238759 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 60737:62185, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156381], length 1448
09:49:11.238798 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 62185, win 1002, options [nop,nop,TS val 156381 ecr 4294951323],
length 0
09:49:11.239010 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 62185:65537, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156381], length 3352
09:49:11.239053 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 65537, win 1002, options [nop,nop,TS val 156382 ecr 4294951323],
length 0
09:49:11.239757 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 65537:66985, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 1448
09:49:11.240007 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 66985:69881, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 2896
09:49:11.240040 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 69881, win 1002, options [nop,nop,TS val 156382 ecr 4294951323],
length 0
09:49:11.240257 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 69881:72777, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 2896
09:49:11.240286 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 72777, win 1002, options [nop,nop,TS val 156382 ecr 4294951323],
length 0
09:49:11.240507 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 72777:75673, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 2896
09:49:11.240534 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 75673, win 1002, options [nop,nop,TS val 156382 ecr 4294951323],
length 0
09:49:11.240756 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 75673:78569, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 2896
09:49:11.240783 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 78569, win 1002, options [nop,nop,TS val 156382 ecr 4294951323],
length 0
09:49:11.241027 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 78569:80017, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 1448
09:49:11.241277 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 80017:81921, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 1904
09:49:11.241306 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 81921, win 1002, options [nop,nop,TS val 156382 ecr 4294951323],
length 0
09:49:11.241801 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 81921:83369, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 1448
09:49:11.242053 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 83369:86265, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 2896
09:49:11.242081 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 86265, win 1002, options [nop,nop,TS val 156382 ecr 4294951323],
length 0
09:49:11.242303 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 86265:89161, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 2896
09:49:11.242340 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 89161, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.242553 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 89161:92057, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 2896
09:49:11.242578 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 92057, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.242802 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 92057:94953, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 2896
09:49:11.242828 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 94953, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.243055 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 94953:96401, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 1448
09:49:11.243385 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 96401:98305, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156382], length 1904
09:49:11.243409 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 98305, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.243963 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 98305:99753, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 1448
09:49:11.244215 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 99753:102649, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 2896
09:49:11.244238 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 102649, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.244465 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 102649:105545, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 2896
09:49:11.244491 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 105545, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.244715 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 105545:108441, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 2896
09:49:11.244740 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 108441, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.244964 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 108441:111337, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 2896
09:49:11.244990 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 111337, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.245236 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 111337:112785, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 1448
09:49:11.245486 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 112785:114689, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 1904
09:49:11.245510 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 114689, win 1002, options [nop,nop,TS val 156383 ecr 4294951323],
length 0
09:49:11.246059 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 114689:116137, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 1448
09:49:11.246310 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 116137:119033, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 2896
09:49:11.246334 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 119033, win 1002, options [nop,nop,TS val 156384 ecr 4294951323],
length 0
09:49:11.246560 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 119033:121929, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 2896
09:49:11.246586 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 121929, win 1002, options [nop,nop,TS val 156384 ecr 4294951323],
length 0
09:49:11.246810 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 121929:124825, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 2896
09:49:11.246835 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 124825, win 1002, options [nop,nop,TS val 156384 ecr 4294951323],
length 0
09:49:11.247059 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 124825:126273, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 1448
09:49:11.247307 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 126273:127721, ack 1, win 2920, options [nop,nop,TS val 4294951323
ecr 156383], length 1448
09:49:11.247330 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 127721, win 1002, options [nop,nop,TS val 156384 ecr 4294951323],
length 0
09:49:11.247572 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 127721:129169, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156383], length 1448
09:49:11.247821 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 129169:131073, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156383], length 1904
09:49:11.247844 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 131073, win 1002, options [nop,nop,TS val 156384 ecr 4294951324],
length 0
09:49:11.248434 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 131073:132521, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156384], length 1448
09:49:11.248685 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 132521:135417, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156384], length 2896
09:49:11.248699 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 135417, win 1002, options [nop,nop,TS val 156384 ecr 4294951324],
length 0
09:49:11.248935 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 135417:138313, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156384], length 2896
09:49:11.248964 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 138313, win 1002, options [nop,nop,TS val 156384 ecr 4294951324],
length 0
09:49:11.249184 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 138313:141209, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156384], length 2896
09:49:11.249212 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 141209, win 1002, options [nop,nop,TS val 156385 ecr 4294951324],
length 0
09:49:11.249433 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 141209:142657, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156384], length 1448
09:49:11.249683 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 142657:145553, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156384], length 2896
09:49:11.249707 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 145553, win 1002, options [nop,nop,TS val 156385 ecr 4294951324],
length 0
09:49:11.249933 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 145553:147457, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156384], length 1904
09:49:11.249959 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 147457, win 1002, options [nop,nop,TS val 156385 ecr 4294951324],
length 0
09:49:11.250570 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 147457:148905, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 1448
09:49:11.250821 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 148905:151801, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 2896
09:49:11.250844 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 151801, win 1002, options [nop,nop,TS val 156385 ecr 4294951324],
length 0
09:49:11.251071 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 151801:154697, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 2896
09:49:11.251096 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 154697, win 1002, options [nop,nop,TS val 156385 ecr 4294951324],
length 0
09:49:11.251320 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 154697:157593, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 2896
09:49:11.251345 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 157593, win 1002, options [nop,nop,TS val 156385 ecr 4294951324],
length 0
09:49:11.251570 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 157593:160489, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 2896
09:49:11.251597 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 160489, win 1002, options [nop,nop,TS val 156385 ecr 4294951324],
length 0
09:49:11.251842 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 160489:161937, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 1448
09:49:11.252126 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 161937:163841, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 1904
09:49:11.252151 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 163841, win 1002, options [nop,nop,TS val 156385 ecr 4294951324],
length 0
09:49:11.252724 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 163841:165289, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 1448
09:49:11.252975 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 165289:168185, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 2896
09:49:11.252983 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 168185, win 1002, options [nop,nop,TS val 156386 ecr 4294951324],
length 0
09:49:11.253225 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 168185:171081, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 2896
09:49:11.253232 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 171081, win 1002, options [nop,nop,TS val 156386 ecr 4294951324],
length 0
09:49:11.253477 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 171081:173977, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 2896
09:49:11.253489 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 173977, win 1002, options [nop,nop,TS val 156386 ecr 4294951324],
length 0
09:49:11.253726 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 173977:176873, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 2896
09:49:11.253738 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 176873, win 1002, options [nop,nop,TS val 156386 ecr 4294951324],
length 0
09:49:11.253984 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 176873:178321, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 1448
09:49:11.254233 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 178321:180225, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156385], length 1904
09:49:11.254244 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 180225, win 1002, options [nop,nop,TS val 156386 ecr 4294951324],
length 0
09:49:11.254848 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 180225:181673, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156386], length 1448
09:49:11.255101 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 181673:184569, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156386], length 2896
09:49:11.255114 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 184569, win 1002, options [nop,nop,TS val 156386 ecr 4294951324],
length 0
09:49:11.255351 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 184569:187465, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156386], length 2896
09:49:11.255364 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 187465, win 1002, options [nop,nop,TS val 156386 ecr 4294951324],
length 0
09:49:11.255598 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 187465:190361, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156386], length 2896
09:49:11.255630 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 190361, win 1002, options [nop,nop,TS val 156386 ecr 4294951324],
length 0
09:49:11.255848 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 190361:191809, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156386], length 1448
09:49:11.256097 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 191809:194705, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156386], length 2896
09:49:11.256121 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 194705, win 1002, options [nop,nop,TS val 156387 ecr 4294951324],
length 0
09:49:11.256347 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 194705:196609, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156386], length 1904
09:49:11.256374 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 196609, win 1002, options [nop,nop,TS val 156387 ecr 4294951324],
length 0
09:49:11.257052 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 196609:198057, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156387], length 1448
09:49:11.257302 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 198057:199505, ack 1, win 2920, options [nop,nop,TS val 4294951324
ecr 156387], length 1448
09:49:11.257325 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 199505, win 1047, options [nop,nop,TS val 156387 ecr 4294951324],
length 0
09:49:11.257554 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 199505:202401, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 2896
09:49:11.257581 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 202401, win 1070, options [nop,nop,TS val 156387 ecr 4294951325],
length 0
09:49:11.257827 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 202401:203849, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 1448
09:49:11.258078 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 203849:206745, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 2896
09:49:11.258101 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 206745, win 1070, options [nop,nop,TS val 156387 ecr 4294951325],
length 0
09:49:11.258329 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 206745:209641, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 2896
09:49:11.258356 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 209641, win 1070, options [nop,nop,TS val 156387 ecr 4294951325],
length 0
09:49:11.258598 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 209641:211089, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 1448
09:49:11.258849 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 211089:212993, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 1904
09:49:11.258881 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 212993, win 1070, options [nop,nop,TS val 156387 ecr 4294951325],
length 0
09:49:11.259389 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 212993:214441, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 1448
09:49:11.259639 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 214441:217337, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 2896
09:49:11.259670 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 217337, win 1070, options [nop,nop,TS val 156388 ecr 4294951325],
length 0
09:49:11.259878 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 217337:220233, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 2896
09:49:11.259907 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 220233, win 1070, options [nop,nop,TS val 156388 ecr 4294951325],
length 0
09:49:11.260127 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 220233:223129, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 2896
09:49:11.260152 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 223129, win 1070, options [nop,nop,TS val 156388 ecr 4294951325],
length 0
09:49:11.260399 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 223129:224577, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 1448
09:49:11.260648 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 224577:226025, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 1448
09:49:11.260671 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 226025, win 1070, options [nop,nop,TS val 156388 ecr 4294951325],
length 0
09:49:11.260899 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [P.],
seq 226025:229377, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156387], length 3352
09:49:11.260926 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 229377, win 1070, options [nop,nop,TS val 156388 ecr 4294951325],
length 0
09:49:11.261622 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 229377:230825, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156388], length 1448
09:49:11.261873 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 230825:233721, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156388], length 2896
09:49:11.261896 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 233721, win 1070, options [nop,nop,TS val 156388 ecr 4294951325],
length 0
09:49:11.262123 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 233721:236617, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156388], length 2896
09:49:11.262149 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 236617, win 1070, options [nop,nop,TS val 156388 ecr 4294951325],
length 0
09:49:11.262373 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 236617:239513, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156388], length 2896
09:49:11.262398 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 239513, win 1070, options [nop,nop,TS val 156389 ecr 4294951325],
length 0
09:49:11.262622 IP 192.168.0.2.42474 > 192.168.0.1.51529: Flags [.],
seq 239513:242409, ack 1, win 2920, options [nop,nop,TS val 4294951325
ecr 156388], length 2896
09:49:11.262647 IP 192.168.0.1.51529 > 192.168.0.2.42474: Flags [.],
ack 242409, win 1070, options [nop,nop,TS val 156389 ecr 4294951325],
length 0

JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-03 22:35           ` Rick Jones
@ 2012-01-05  9:13             ` Jean-Michel Hautbois
  2012-01-05  9:40               ` Eric Dumazet
  0 siblings, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-05  9:13 UTC (permalink / raw)
  To: Rick Jones; +Cc: Eric Dumazet, netdev

2012/1/3 Rick Jones <rick.jones2@hp.com>:
> On 01/02/2012 08:52 AM, Eric Dumazet wrote:
>>
>> Le lundi 02 janvier 2012 à 17:40 +0100, Jean-Michel Hautbois a écrit :
>>
>>> Mmmh, using netperf you would like to know what the client (my ARM
>>> board) can do ?
>>> How would you test it ? I can have an ARM board on one side, and the
>>> x86 on the other...
>>>
>>
>> x86>  netserver&
>> arm>  netperf -H<arm_ip_address>  -l 60 -t TCP_STREAM
>>
>> 1) check cpu usage on<arm>  while test is running
>> (for example : vmstat 1 )
>> 2) check bandwith of test run
>
>
> The "&" at the end of the netserver command is (should be) redundant -
> netserver will by default daemonize itself.
>
> I would suggest amending the netperf command line to something more like:
>
> netperf -H <x86IP> -c -l 60 -t TCP_STREAM -- -m <dataofoneline> -D

I did it, and here are the results (when plugged directly between x86
and arm, and not through the switch, as before):
/ # netperf -H 192.168.0.1 -c -l 60 -t TCP_STREAM -- -m 1344 -D
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384   1344    60.01        45.43   100.00   -1.00    180.325  -1.000

And without specifying the data size :
/ # netperf -H 192.168.0.1 -c -l 60 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384  16384    60.01        61.94   99.98    -1.00    132.230  -1.000

This is far better than the first tests, but this means my best bet is
to send as much data as possible (here, 16384)...
I will do a benchmark with a little script which will test several
frame sizes (or is there a way to know the theoretically best value?).
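
For what it's worth, a minimal sketch of such a sweep (the target address
and the list of sizes are just examples; -D sets TCP_NODELAY as above):

#!/bin/sh
# run a 60-second TCP_STREAM test for each send size and print the results
for m in 1024 1344 1460 2048 4096 8192 16384 32768; do
    echo "m=$m"
    netperf -H 192.168.0.1 -c -l 60 -t TCP_STREAM -- -m $m -D
done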

JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05  9:13             ` Jean-Michel Hautbois
@ 2012-01-05  9:40               ` Eric Dumazet
  2012-01-05  9:48                 ` Jean-Michel Hautbois
  2012-01-05  9:49                 ` Eric Dumazet
  0 siblings, 2 replies; 29+ messages in thread
From: Eric Dumazet @ 2012-01-05  9:40 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: Rick Jones, netdev

Le jeudi 05 janvier 2012 à 10:13 +0100, Jean-Michel Hautbois a écrit :
> 2012/1/3 Rick Jones <rick.jones2@hp.com>:
> > On 01/02/2012 08:52 AM, Eric Dumazet wrote:
> >>
> >> Le lundi 02 janvier 2012 à 17:40 +0100, Jean-Michel Hautbois a écrit :
> >>
> >>> Mmmh, using netperf you would like to know what the client (my ARM
> >>> board) can do ?
> >>> How would you test it ? I can have an ARM board on one side, and the
> >>> x86 on the other...
> >>>
> >>
> >> x86>  netserver&
> >> arm>  netperf -H<arm_ip_address>  -l 60 -t TCP_STREAM
> >>
> >> 1) check cpu usage on<arm>  while test is running
> >> (for example : vmstat 1 )
> >> 2) check bandwith of test run
> >
> >
> > The "&" at the end of the netserver command is (should be) redundant -
> > netserver will by default daemonize itself.
> >
> > I would suggest amending the netperf command line to something more like:
> >
> > netperf -H <x86IP> -c -l 60 -t TCP_STREAM -- -m <dataofoneline> -D
> 
> I did it, and here are the results (when plugged directly between x86
> and arm, and not through the switch, as before):
> / # netperf -H 192.168.0.1 -c -l 60 -t TCP_STREAM -- -m 1344 -D
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB
> 
>  87380  16384   1344    60.01        45.43   100.00   -1.00    180.325  -1.000
> 
> And without specifying the data size :
> / # netperf -H 192.168.0.1 -c -l 60 -t TCP_STREAM
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.0.1 (192.168.0.1) port 0 AF_INET
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB
> 
>  87380  16384  16384    60.01        61.94   99.98    -1.00    132.230  -1.000
> 
> This is far better than the first tests, but this means my best bet is
> to send as much data as possible (here, 16384)...
> I will do a benchmark with a little script which will test several
> frame sizes (or is there a way to know the theoretically best value?).
> 

Could you test UDP_STREAM as well ?

$ netperf -H 192.168.0.1 -l 10 -t UDP_STREAM
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

1000000   65507   10.00       13398      0     702.12
110592           10.00       13398            702.12

Then, a pktgen test (this sends UDP frames, but from kernel land) might
give you the limit of the NIC...
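
A rough sketch of such a pktgen run, in case it helps (device name,
destination IP/MAC and packet count are placeholders to adapt, see
Documentation/networking/pktgen.txt):

modprobe pktgen
# attach the interface to the first pktgen kernel thread
echo "rem_device_all"  > /proc/net/pktgen/kpktgend_0
echo "add_device eth0" > /proc/net/pktgen/kpktgend_0
# describe the flow: packet size, how many packets, no inter-packet delay
echo "count 100000"              > /proc/net/pktgen/eth0
echo "pkt_size 1400"             > /proc/net/pktgen/eth0
echo "delay 0"                   > /proc/net/pktgen/eth0
echo "dst 192.168.0.1"           > /proc/net/pktgen/eth0
echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth0
# start (blocks until done), then read back pps and Mb/s
echo "start" > /proc/net/pktgen/pgctrl
cat /proc/net/pktgen/eth0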

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05  9:40               ` Eric Dumazet
@ 2012-01-05  9:48                 ` Jean-Michel Hautbois
  2012-01-05  9:55                   ` Eric Dumazet
  2012-01-05  9:49                 ` Eric Dumazet
  1 sibling, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-05  9:48 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Rick Jones, netdev

2012/1/5 Eric Dumazet <eric.dumazet@gmail.com>:
> Le jeudi 05 janvier 2012 à 10:13 +0100, Jean-Michel Hautbois a écrit :
>> 2012/1/3 Rick Jones <rick.jones2@hp.com>:
>> > On 01/02/2012 08:52 AM, Eric Dumazet wrote:
>> >>
>> >> Le lundi 02 janvier 2012 à 17:40 +0100, Jean-Michel Hautbois a écrit :
>> >>
>> >>> Mmmh, using netperf you would like to know what the client (my ARM
>> >>> board) can do ?
>> >>> How would you test it ? I can have an ARM board on one side, and the
>> >>> x86 on the other...
>> >>>
>> >>
>> >> x86>  netserver&
>> >> arm>  netperf -H<arm_ip_address>  -l 60 -t TCP_STREAM
>> >>
>> >> 1) check cpu usage on<arm>  while test is running
>> >> (for example : vmstat 1 )
>> >> 2) check bandwith of test run
>> >
>> >
>> > The "&" at the end of the netserver command is (should be) redundant -
>> > netserver will by default daemonize itself.
>> >
>> > I would suggest amending the netperf command line to something more like:
>> >
>> > netperf -H <x86IP> -c -l 60 -t TCP_STREAM -- -m <dataofoneline> -D
>>
>> I did it, and here are the results (when plugged directly between x86
>> and arm, and not through the switch, as before):
>> / # netperf -H 192.168.0.1 -c -l 60 -t TCP_STREAM -- -m 1344 -D
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
>> Recv   Send    Send                          Utilization       Service Demand
>> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
>> Size   Size    Size     Time     Throughput  local    remote   local   remote
>> bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB
>>
>>  87380  16384   1344    60.01        45.43   100.00   -1.00    180.325  -1.000
>>
>> And without specifying the data size :
>> / # netperf -H 192.168.0.1 -c -l 60 -t TCP_STREAM
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 192.168.0.1 (192.168.0.1) port 0 AF_INET
>> Recv   Send    Send                          Utilization       Service Demand
>> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
>> Size   Size    Size     Time     Throughput  local    remote   local   remote
>> bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB
>>
>>  87380  16384  16384    60.01        61.94   99.98    -1.00    132.230  -1.000
>>
>> This is far better than the first tests, but this means my best bet is
>> to send as much data as possible (here, 16384)...
>> I will do a benchmark with a little script which will test several
>> frame sizes (or is there a way to know the theoretically best value?).
>>

Here is the result of this little benchmark :
m=1024
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384   1024    60.01        36.44   99.97    -1.00    224.722  -1.000

m=1344
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384   1344    60.01        45.95   100.00   -1.00    178.279  -1.000

m=1460
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384   1460    60.01        27.92   99.98    -1.00    293.382  -1.000

m=2048
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384   2048    60.01        38.15   99.98    -1.00    214.705  -1.000

m=4096
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384   4096    60.01        55.32   99.98    -1.00    148.070  -1.000

m=8192
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384   8192    60.01        62.06   100.00   -1.00    132.000  -1.000

m=16384
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384  16384    60.01        63.07   99.98    -1.00    129.865  -1.000

m=32768
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET : nodelay
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % U      us/KB   us/KB

 87380  16384  32768    60.01        67.46   100.00   -1.00    121.428  -1.000

m is the message size, and it clearly has an effect on throughput...

> Could you test UDP_STREAM as well ?
>
> $ netperf -H 192.168.0.1 -l 10 -t UDP_STREAM
> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.0.1 (192.168.0.1) port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
>
> 1000000   65507   10.00       13398      0     702.12
> 110592           10.00       13398            702.12

It does not seem to work :
netperf -H 192.168.0.1 -l 10 -t UDP_STREAM
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

106496   65507   10.01        1836      0       0.00
419227124           0.00      536870912              0.00

Adding a "-c" gives the same :
netperf -H 192.168.0.1 -c -l 10 -t UDP_STREAM
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB

106496   65507   10.01        1835      0        0.0     96.10    50.250
536870912           2.25      1073741824               0.0     96.10    -1.000

Using ifconfig before and after and comparing the TX values I get 98Mbps...
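
For reference, the same measurement can be scripted from the interface
byte counters (assuming the interface is eth0 and a 10 second window):

t0=$(cat /sys/class/net/eth0/statistics/tx_bytes); sleep 10
t1=$(cat /sys/class/net/eth0/statistics/tx_bytes)
# transmitted bits per second over the window
echo $(( (t1 - t0) * 8 / 10 )) bits/s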

> Then, a pktgen test (this sends UDP frames, but from kernel land) might
> give you the limit of the NIC...

I need to recompile my kernel, as I don't have CONFIG_NET_PKTGEN enabled
in this one...
JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05  9:40               ` Eric Dumazet
  2012-01-05  9:48                 ` Jean-Michel Hautbois
@ 2012-01-05  9:49                 ` Eric Dumazet
  1 sibling, 0 replies; 29+ messages in thread
From: Eric Dumazet @ 2012-01-05  9:49 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: Rick Jones, netdev

Le jeudi 05 janvier 2012 à 10:40 +0100, Eric Dumazet a écrit :

> Could you test UDP_STREAM as well ?
> 
> $ netperf -H 192.168.0.1 -l 10 -t UDP_STREAM
> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.0.1 (192.168.0.1) port 0 AF_INET
> Socket  Message  Elapsed      Messages                
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
> 
> 1000000   65507   10.00       13398      0     702.12
> 110592           10.00       13398            702.12
> 

Ah, I had a special BQL setup, which explains why I didn't reach line rate.

After doing :

$ echo max >/sys/class/net/eth3/queues/tx-0/byte_queue_limits/limit_max

I get line rate :

$ grep . /sys/class/net/eth3/queues/tx-0/byte_queue_limits/*
/sys/class/net/eth3/queues/tx-0/byte_queue_limits/hold_time:1000
/sys/class/net/eth3/queues/tx-0/byte_queue_limits/inflight:154294
/sys/class/net/eth3/queues/tx-0/byte_queue_limits/limit:152852
/sys/class/net/eth3/queues/tx-0/byte_queue_limits/limit_max:1879048192
/sys/class/net/eth3/queues/tx-0/byte_queue_limits/limit_min:0


perf record netperf -H 192.168.0.1 -l 10 -t UDP_STREAM -- -m 1400
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

1000000    1400   10.00      853120      0     955.49
110592           10.00      852799            955.13

[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.155 MB perf.data (~6779 samples) ]

$ perf report --stdio
# ========
# captured on: Thu Jan  5 10:45:29 2012
# hostname : svivoipvnx001
# os release : 3.2.0-rc7-15182-g3b9b886-dirty
# perf version : 3.2.rc7.272.ga1b347.dirty
# arch : i686
# nrcpus online : 8
# nrcpus avail : 8
# cpudesc : Intel(R) Xeon(R) CPU E5450 @ 3.00GHz
# cpuid : GenuineIntel,6,23,6
# total memory : 4147856 kB
# cmdline : /usr/bin/perf record netperf -H 192.168.0.1 -l 10 -t UDP_STREAM -- -m 1400 
# event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 = 0x0, excl_usr = 0, excl_kern = 0, id = { 65, 66
# HEADER_CPU_TOPOLOGY info available, use -I to display
# ========
#
# Events: 3K cycles
#
# Overhead  Command      Shared Object                                Symbol
# ........  .......  .................  ....................................
#
    54.87%  netperf  [kernel.kallsyms]  [k] __copy_from_user_ll
     5.93%  netperf  [kernel.kallsyms]  [k] __ip_make_skb
     5.00%  netperf  [kernel.kallsyms]  [k] __ip_append_data.clone.64
     2.23%  netperf  [kernel.kallsyms]  [k] __alloc_skb
     2.06%  netperf  [kernel.kallsyms]  [k] sysenter_past_esp
     1.94%  netperf  [ip_tables]        [k] ipt_do_table
     1.78%  netperf  [kernel.kallsyms]  [k] udp_sendmsg
     1.58%  netperf  [kernel.kallsyms]  [k] __ip_route_output_key
     1.48%  netperf  [kernel.kallsyms]  [k] sock_alloc_send_pskb
     1.10%  netperf  [kernel.kallsyms]  [k] udp_send_skb
     1.07%  netperf  [kernel.kallsyms]  [k] ip_finish_output
     1.04%  netperf  [kernel.kallsyms]  [k] __kmalloc_track_caller
     1.03%  netperf  [kernel.kallsyms]  [k] memcpy
     0.97%  netperf  [kernel.kallsyms]  [k] local_bh_enable
     0.88%  netperf  [kernel.kallsyms]  [k] __slab_alloc.clone.58
     0.88%  netperf  [kernel.kallsyms]  [k] __ip_local_out
     0.82%  netperf  [unknown]          [.] 0xffffe424
     0.80%  netperf  [kernel.kallsyms]  [k] kmem_cache_alloc
     0.73%  netperf  [kernel.kallsyms]  [k] nf_iterate
     0.71%  netperf  [kernel.kallsyms]  [k] pfifo_fast_enqueue
     0.70%  netperf  [kernel.kallsyms]  [k] _raw_spin_lock
     0.69%  netperf  [kernel.kallsyms]  [k] _copy_from_user
     0.58%  netperf  [kernel.kallsyms]  [k] dev_queue_xmit
     0.54%  netperf  [kernel.kallsyms]  [k] ipv4_mtu
     0.50%  netperf  [kernel.kallsyms]  [k] ipv4_validate_peer
     0.46%  netperf  [kernel.kallsyms]  [k] ksize
     0.46%  netperf  [kernel.kallsyms]  [k] sys_sendto
     0.44%  netperf  [kernel.kallsyms]  [k] nf_hook_slow
     0.39%  netperf  netperf            [.] send_udp_stream
     0.38%  netperf  [kernel.kallsyms]  [k] sock_sendmsg

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05  9:48                 ` Jean-Michel Hautbois
@ 2012-01-05  9:55                   ` Eric Dumazet
  2012-01-05  9:57                     ` Jean-Michel Hautbois
  2012-01-05 18:32                     ` Rick Jones
  0 siblings, 2 replies; 29+ messages in thread
From: Eric Dumazet @ 2012-01-05  9:55 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: Rick Jones, netdev

Le jeudi 05 janvier 2012 à 10:48 +0100, Jean-Michel Hautbois a écrit :

> It does not seem to work :
> netperf -H 192.168.0.1 -l 10 -t UDP_STREAM
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.0.1 (192.168.0.1) port 0 AF_INET
> Socket  Message  Elapsed      Messages
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
> 
> 106496   65507   10.01        1836      0       0.00
> 419227124           0.00      536870912              0.00
> 

That's because netperf -t UDP_STREAM sends big UDP datagrams by default,
which must be fragmented and then reassembled at the destination. Maybe some
fragments are lost.

Try :

netperf -H 192.168.0.1 -l 10 -t UDP_STREAM -- -m 1500
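
With a 1500 byte MTU, the largest UDP payload that avoids IP fragmentation
entirely is 1500 - 20 (IPv4 header) - 8 (UDP header) = 1472 bytes, so a
strictly non-fragmenting run would be:

netperf -H 192.168.0.1 -l 10 -t UDP_STREAM -- -m 1472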

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05  9:55                   ` Eric Dumazet
@ 2012-01-05  9:57                     ` Jean-Michel Hautbois
  2012-01-05 10:04                       ` Eric Dumazet
  2012-01-05 18:32                     ` Rick Jones
  1 sibling, 1 reply; 29+ messages in thread
From: Jean-Michel Hautbois @ 2012-01-05  9:57 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Rick Jones, netdev

2012/1/5 Eric Dumazet <eric.dumazet@gmail.com>:
> Le jeudi 05 janvier 2012 à 10:48 +0100, Jean-Michel Hautbois a écrit :
>
>> It does not seem to work :
>> netperf -H 192.168.0.1 -l 10 -t UDP_STREAM
>> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 192.168.0.1 (192.168.0.1) port 0 AF_INET
>> Socket  Message  Elapsed      Messages
>> Size    Size     Time         Okay Errors   Throughput
>> bytes   bytes    secs            #      #   10^6bits/sec
>>
>> 106496   65507   10.01        1836      0       0.00
>> 419227124           0.00      536870912              0.00
>>
>
> That's because netperf -t UDP_STREAM sends big UDP datagrams by default,
> which must be fragmented and then reassembled at the destination. Maybe some
> fragments are lost.
>
> Try :
>
> netperf -H 192.168.0.1 -l 10 -t UDP_STREAM -- -m 1500
>
>

/ # netperf -H 192.168.0.1 -c -l 10 -t UDP_STREAM -- -m 1500
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB

106496    1500   10.00       54477      0        0.0     65.34    100.000
1073741824           2.25            0               0.0     65.34    -1.000

/ # netperf -H 192.168.0.1 -c -l 10 -t UDP_STREAM -- -m 1400
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.0.1 (192.168.0.1) port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB

106496    1400   10.00       73591      0        0.0     82.38    100.000
-2147483648           2.25      1610612736               0.0     82.38    -1.000

I will recompile my kernel in order to use perf as well, in order to
get profiling in the TCP case...
JM

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05  9:57                     ` Jean-Michel Hautbois
@ 2012-01-05 10:04                       ` Eric Dumazet
  2012-01-05 18:41                         ` Rick Jones
  0 siblings, 1 reply; 29+ messages in thread
From: Eric Dumazet @ 2012-01-05 10:04 UTC (permalink / raw)
  To: Jean-Michel Hautbois; +Cc: Rick Jones, netdev

Le jeudi 05 janvier 2012 à 10:57 +0100, Jean-Michel Hautbois a écrit :
>  # netperf -H 192.168.0.1 -c -l 10 -t UDP_STREAM -- -m 1500
> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 192.168.0.1 (192.168.0.1) port 0 AF_INET
> Socket  Message  Elapsed      Messages                   CPU      Service
> Size    Size     Time         Okay Errors   Throughput   Util     Demand
> bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
> 
> 106496    1500   10.00       54477      0        0.0     65.34    100.000
> 1073741824           2.25            0               0.0     65.34    -1.000
> 

Hmm... this sounds like you have half duplex somewhere ?

ethtool eth0   (on both machines)

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05  9:55                   ` Eric Dumazet
  2012-01-05  9:57                     ` Jean-Michel Hautbois
@ 2012-01-05 18:32                     ` Rick Jones
  1 sibling, 0 replies; 29+ messages in thread
From: Rick Jones @ 2012-01-05 18:32 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Jean-Michel Hautbois, netdev

On 01/05/2012 01:55 AM, Eric Dumazet wrote:
> Le jeudi 05 janvier 2012 à 10:48 +0100, Jean-Michel Hautbois a écrit :
>
>> It does not seem to work :
>> netperf -H 192.168.0.1 -l 10 -t UDP_STREAM
>> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 192.168.0.1 (192.168.0.1) port 0 AF_INET
>> Socket  Message  Elapsed      Messages
>> Size    Size     Time         Okay Errors   Throughput
>> bytes   bytes    secs            #      #   10^6bits/sec
>>
>> 106496   65507   10.01        1836      0       0.00
>> 419227124           0.00      536870912              0.00
>>
>
> That's because netperf -t UDP_STREAM sends big UDP datagrams by default,
> which must be fragmented and then reassembled at the destination. Maybe some
> fragments are lost.

There is also the question of the bogus values on the second line. 
There are some "fixed in top-of-trunk" issues in that area with respect 
to the migrated UDP_STREAM results printing.  The top of trunk also has 
a netperf control message size change, which means you have to make sure 
to update both sides to the top-of-trunk.  (Netperf has never 
"supported" mixing versions though doing so has often "worked.")

rick jones

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05 10:04                       ` Eric Dumazet
@ 2012-01-05 18:41                         ` Rick Jones
  2012-01-05 18:53                           ` Eric Dumazet
  0 siblings, 1 reply; 29+ messages in thread
From: Rick Jones @ 2012-01-05 18:41 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Jean-Michel Hautbois, netdev

On 01/05/2012 02:04 AM, Eric Dumazet wrote:
> Le jeudi 05 janvier 2012 à 10:57 +0100, Jean-Michel Hautbois a écrit :
>>   # netperf -H 192.168.0.1 -c -l 10 -t UDP_STREAM -- -m 1500
>> MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 192.168.0.1 (192.168.0.1) port 0 AF_INET
>> Socket  Message  Elapsed      Messages                   CPU      Service
>> Size    Size     Time         Okay Errors   Throughput   Util     Demand
>> bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
>>
>> 106496    1500   10.00       54477      0        0.0     65.34    100.000
>> 1073741824           2.25            0               0.0     65.34    -1.000
>>
>
> Hmm... this sounds like you have half duplex somewhere ?

Why?  A netperf UDP_STREAM test is "purely" unidirectional (*).  I 
suspect the numbers look funny thanks to the 32-bit compilation bugs 
(format statement issues).

Apart from updating to the top of trunk bits from 
http://www.netperf.org/svn/netperf2/trunk , another way to run the test 
with the existing bits would be to get the explicit "omni" output going 
with something akin to:

netperf -H 192.168.0.1 -t omni -c -l 10 -- -T UDP -m 1500

raj@tardy:~/netperf2_trunk$ src/netperf -t omni -H localhost -c -l 10 -- -T udp -m 1500
OMNI Send TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain () port 0 AF_INET
Local       Local       Local Elapsed Throughput Throughput  Local  Local  Remote Remote Local   Remote  Service
Send Socket Send Socket Send  Time               Units       CPU    CPU    CPU    CPU    Service Service Demand
Size        Size        Size  (sec)                          Util   Util   Util   Util   Demand  Demand  Units
Final       Final                                            %      Method %      Method
1254744     1254744     1500  10.00   12869.82   10^6bits/s  42.05  S      -1.00  U      1.071   -1.000  usec/KB

Or to avoid the wraps, use the keyval output format:

raj@tardy:~/netperf2_trunk$ src/netperf -t omni -H localhost -c -l 10 -- -T udp -m 1500 -k
OMNI Send TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain () port 0 AF_INET
LSS_SIZE_END=1390392
LSS_SIZE_END=1390392
LOCAL_SEND_SIZE=1500
ELAPSED_TIME=10.00
THROUGHPUT=12640.22
THROUGHPUT_UNITS=10^6bits/s
LOCAL_CPU_UTIL=41.19
LOCAL_CPU_METHOD=S
REMOTE_CPU_UTIL=-1.00
REMOTE_CPU_METHOD=U
LOCAL_SD=1.068
REMOTE_SD=-1.000
SD_UNITS=usec/KB

One could also whittle the output down by using the omni output 
selectors - 
http://www.netperf.org/svn/netperf2/trunk/doc/netperf.html#Omni-Output-Selection
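
For example, to get just a few fields (selector names as they appear in the
keyval listing above; this assumes a recent, omni-capable netperf):

netperf -H 192.168.0.1 -t omni -c -l 10 -- -T UDP -m 1500 \
        -o THROUGHPUT,THROUGHPUT_UNITS,LOCAL_CPU_UTIL,LOCAL_SD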

rick jones

* well, there may be a stray, delayed TCP ACKnowledgement on the control 
connection while the UDP traffic is flowing, but even that I believe 
will be from netperf to netserver - ie in the direction the UDP_STREAM 
traffic is going

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05 18:41                         ` Rick Jones
@ 2012-01-05 18:53                           ` Eric Dumazet
  2012-01-05 19:26                             ` Rick Jones
  0 siblings, 1 reply; 29+ messages in thread
From: Eric Dumazet @ 2012-01-05 18:53 UTC (permalink / raw)
  To: Rick Jones; +Cc: Jean-Michel Hautbois, netdev

Le jeudi 05 janvier 2012 à 10:41 -0800, Rick Jones a écrit :

> Why?  A netperf UDP_STREAM test is "purely" unidirectional (*).  I 
> suspect the numbers look funny thanks to the 32-bit compilation bugs 
> (format statement issues).

Unidirectional, but the receiver must send some status notifications ?

My reasoning (before you explained the compat problem between sender
and receiver) was that since we flooded the link, we were blocking
output from the receiver (because of collisions), and 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: TCP communication for raw image transmission
  2012-01-05 18:53                           ` Eric Dumazet
@ 2012-01-05 19:26                             ` Rick Jones
  0 siblings, 0 replies; 29+ messages in thread
From: Rick Jones @ 2012-01-05 19:26 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Jean-Michel Hautbois, netdev

On 01/05/2012 10:53 AM, Eric Dumazet wrote:
> Le jeudi 05 janvier 2012 à 10:41 -0800, Rick Jones a écrit :
>
>> Why?  A netperf UDP_STREAM test is "purely" unidirectional (*).  I
>> suspect the numbers look funny thanks to the 32-bit compilation bugs
>> (format statement issues).
>
> Unidirectional, but the receiver must send some status notifications ?
>
> My reasoning (before you explained the compat problem between sender
> and receiver) was that since we flooded the link, we were blocking
> output from the receiver (because of collisions), and

Well, the flow goes something like

*) netperf establishes control connection
*) netserver accepts and forks
*) netperf sends initial test request and configuration
*) netserver sends final test setup information
*) netserver starts timer (with padding)
*) netperf starts timer
*) netperf's timer expires, netperf stops sending, starts waiting for 
results from netserver (in a select with a one or two minute timeout)
*) netserver's timer expires, sends its part of the test results
*) netperf comes out of select and receives netservers information
*) netperf displays results

If the netserver's results were blocked by some massive backlog of data 
in queues somewhere, the results will either simply be delayed, or 
netperf's select() will time out and errors will be reported.  It would
not result in netperf reporting results without netserver's input.

happy benchmarking,

rick jones

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread

Thread overview: 29+ messages
2012-01-02 15:21 TCP communication for raw image transmission Jean-Michel Hautbois
2012-01-02 15:59 ` Eric Dumazet
2012-01-02 16:08   ` Jean-Michel Hautbois
2012-01-02 16:29     ` Eric Dumazet
2012-01-02 16:40       ` Jean-Michel Hautbois
2012-01-02 16:52         ` Eric Dumazet
2012-01-02 16:52           ` Eric Dumazet
2012-01-02 17:04             ` Jean-Michel Hautbois
2012-01-02 17:20             ` Jean-Michel Hautbois
2012-01-02 17:41               ` Eric Dumazet
2012-01-02 18:00                 ` Jean-Michel Hautbois
2012-01-02 18:02                   ` Jean-Michel Hautbois
2012-01-03  7:36                     ` Jean-Michel Hautbois
2012-01-03  8:06                       ` Eric Dumazet
2012-01-05  8:50                         ` Jean-Michel Hautbois
2012-01-03 22:35           ` Rick Jones
2012-01-05  9:13             ` Jean-Michel Hautbois
2012-01-05  9:40               ` Eric Dumazet
2012-01-05  9:48                 ` Jean-Michel Hautbois
2012-01-05  9:55                   ` Eric Dumazet
2012-01-05  9:57                     ` Jean-Michel Hautbois
2012-01-05 10:04                       ` Eric Dumazet
2012-01-05 18:41                         ` Rick Jones
2012-01-05 18:53                           ` Eric Dumazet
2012-01-05 19:26                             ` Rick Jones
2012-01-05 18:32                     ` Rick Jones
2012-01-05  9:49                 ` Eric Dumazet
2012-01-04  9:57 ` David Laight
2012-01-04 11:16   ` Jean-Michel Hautbois
