Netdev Archive on lore.kernel.org
* tperf: An initial TSN Performance Utility
@ 2019-12-03 10:00 Jose Abreu
  2019-12-03 10:10 ` Vladimir Oltean
  2019-12-03 14:27 ` Ivan Khoronzhuk
  0 siblings, 2 replies; 6+ messages in thread
From: Jose Abreu @ 2019-12-03 10:00 UTC (permalink / raw)
  To: netdev
  Cc: Joao Pinto, Jesus Sanchez-Palencia, Vinicius Costa Gomes,
	Vladimir Oltean

Hi netdev,

[ I added in Cc the people I know who work on TSN; please add 
anyone interested ]

We are currently using a very basic tool, which we called tperf, for 
monitoring the CBS performance of Synopsys-based NICs. It is based on a 
patchset Jesus submitted back in 2017, so credit goes to him and blame 
on me :)

The current version sends "dummy" AVTP packets and measures the 
bandwidth at both the receiver and the sender. Using this tool in 
conjunction with iperf3, we can check whether the CBS-reserved queues 
behave correctly by reserving the priority traffic for AVTP packets.
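As a rough illustration of the kind of reservation being checked here, the parameters handed to the tc-cbs qdisc can be derived from the stream's reserved bandwidth using the IEEE 802.1Qav credit-based shaper formulas. This is only a sketch with hypothetical numbers, not code from tperf:

```python
# Sketch: derive tc-cbs parameters for a reserved stream.
# Formulas follow IEEE 802.1Qav / the tc-cbs man page; the example
# values below are hypothetical, not taken from tperf.

def cbs_params(idleslope_kbps, port_rate_kbps,
               max_frame_bytes, max_interference_bytes):
    """Return (idleslope, sendslope, hicredit, locredit) as used by
    tc-cbs. Slopes are in kbit/s, credits in bytes."""
    sendslope = idleslope_kbps - port_rate_kbps
    hicredit = max_interference_bytes * idleslope_kbps / port_rate_kbps
    locredit = max_frame_bytes * sendslope / port_rate_kbps
    return idleslope_kbps, sendslope, round(hicredit), round(locredit)

# Example: reserve 20 Mbit/s on a 1 Gbit/s link, 1500-byte frames,
# worst-case 1500-byte interfering frame.
print(cbs_params(20_000, 1_000_000, 1500, 1500))
# -> (20000, -980000, 30, -1470)
```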

You can check out the tool at the following address:
	GitHub: https://github.com/joabreu/tperf

We are open to extending this to more robust scenarios, so that we can 
have a common tool for TSN testing that is both lightweight and 
precise.

Anyone interested in helping?

---
Thanks,
Jose Miguel Abreu

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: tperf: An initial TSN Performance Utility
  2019-12-03 10:00 tperf: An initial TSN Performance Utility Jose Abreu
@ 2019-12-03 10:10 ` Vladimir Oltean
  2019-12-03 10:29   ` Jose Abreu
  2019-12-03 14:27 ` Ivan Khoronzhuk
  1 sibling, 1 reply; 6+ messages in thread
From: Vladimir Oltean @ 2019-12-03 10:10 UTC (permalink / raw)
  To: Jose Abreu
  Cc: netdev, Joao Pinto, Jesus Sanchez-Palencia, Vinicius Costa Gomes,
	Po Liu, Xiaoliang Yang

Hi Jose,

On Tue, 3 Dec 2019 at 12:00, Jose Abreu <Jose.Abreu@synopsys.com> wrote:
>
> Hi netdev,
>
> [ I added in Cc the people I know who work on TSN; please add
> anyone interested ]
>
> We are currently using a very basic tool, which we called tperf, for
> monitoring the CBS performance of Synopsys-based NICs. It is based on a
> patchset Jesus submitted back in 2017, so credit goes to him and blame
> on me :)
>
> The current version sends "dummy" AVTP packets and measures the
> bandwidth at both the receiver and the sender. Using this tool in
> conjunction with iperf3, we can check whether the CBS-reserved queues
> behave correctly by reserving the priority traffic for AVTP packets.
>
> You can check out the tool at the following address:
>         GitHub: https://github.com/joabreu/tperf
>
> We are open to extending this to more robust scenarios, so that we can
> have a common tool for TSN testing that is both lightweight and
> precise.
>
> Anyone interested in helping?
>
> ---
> Thanks,
> Jose Miguel Abreu

Sounds nice, I'm interested in giving this a try on the LS1028A ENETC.

Do you have any more tooling around tperf? Does the talker advertise
the stream in such a way that a switch could reserve bandwidth too?

Do you plan to add this to the kernel tree (e.g.
tools/testing/selftests/tsn or such) or how do you want to maintain
this long-term?

I've added more people from NXP who may also be interested.

Thanks,
-Vladimir


* RE: tperf: An initial TSN Performance Utility
  2019-12-03 10:10 ` Vladimir Oltean
@ 2019-12-03 10:29   ` Jose Abreu
  0 siblings, 0 replies; 6+ messages in thread
From: Jose Abreu @ 2019-12-03 10:29 UTC (permalink / raw)
  To: Vladimir Oltean
  Cc: netdev, Joao Pinto, Jesus Sanchez-Palencia, Vinicius Costa Gomes,
	Po Liu, Xiaoliang Yang

From: Vladimir Oltean <olteanv@gmail.com>
Date: Dec/03/2019, 10:10:54 (UTC+00:00)

> Sounds nice, I'm interested in giving this a try on the LS1028A ENETC.
> 
> Do you have any more tooling around tperf? Does the talker advertise
> the stream in such a way that a switch could reserve bandwidth too?

No, but that would be a great addition! Unfortunately, for now you need 
to use the vlan utility along with tc to configure everything. I know 
almost nothing about tc, so I don't know what would be necessary for 
tperf alone to configure everything ...

> Do you plan to add this to the kernel tree (e.g.
> tools/testing/selftests/tsn or such) or how do you want to maintain
> this long-term?

We will follow whatever path the community thinks suits best ... 
 
> I've added more people from NXP who may also be interested.

Thanks Vladimir.

---
Thanks,
Jose Miguel Abreu


* Re: tperf: An initial TSN Performance Utility
  2019-12-03 10:00 tperf: An initial TSN Performance Utility Jose Abreu
  2019-12-03 10:10 ` Vladimir Oltean
@ 2019-12-03 14:27 ` Ivan Khoronzhuk
  2019-12-03 17:22   ` Vinicius Costa Gomes
  1 sibling, 1 reply; 6+ messages in thread
From: Ivan Khoronzhuk @ 2019-12-03 14:27 UTC (permalink / raw)
  To: Jose Abreu
  Cc: netdev, Joao Pinto, Jesus Sanchez-Palencia, Vinicius Costa Gomes,
	Vladimir Oltean

On Tue, Dec 03, 2019 at 10:00:15AM +0000, Jose Abreu wrote:
>Hi netdev,
>
>[ I added in Cc the people I know who work on TSN; please add
>anyone interested ]
>
>We are currently using a very basic tool, which we called tperf, for
>monitoring the CBS performance of Synopsys-based NICs. It is based on a
>patchset Jesus submitted back in 2017, so credit goes to him and blame
>on me :)
>
>The current version sends "dummy" AVTP packets and measures the
>bandwidth at both the receiver and the sender. Using this tool in
>conjunction with iperf3, we can check whether the CBS-reserved queues
>behave correctly by reserving the priority traffic for AVTP packets.
>
>You can check out the tool at the following address:
>	GitHub: https://github.com/joabreu/tperf
>
>We are open to extending this to more robust scenarios, so that we can
>have a common tool for TSN testing that is both lightweight and
>precise.
>
>Anyone interested in helping?

I also have a tool that already includes similar functionality.

https://github.com/ikhorn/plget

It also dates from around 2016-2017.
Not ideal, but it has helped me a lot over the last few years. It also
worked with XDP, but the libbpf library it uses is old by now and should
be updated. Mostly it was used to measure latencies and to observe, via
hardware timestamps, how packets are put on the line.

I've used it for CBS and TAPRIO scheduler testing, observing the h/w
timestamp of each packet and the closed and open gates, but the target
board must support hardware timestamping for accurate results; that's
why PTP packets were used.

It also includes latency measurements based on both hardware and
software timestamps.

It also includes the AVTP and bandwidth measurements with priorities
that you mentioned. I now have a branch adding runtime measurement with
plots.

-- 
Regards,
Ivan Khoronzhuk


* Re: tperf: An initial TSN Performance Utility
  2019-12-03 14:27 ` Ivan Khoronzhuk
@ 2019-12-03 17:22   ` Vinicius Costa Gomes
  2019-12-03 19:05     ` Ivan Khoronzhuk
  0 siblings, 1 reply; 6+ messages in thread
From: Vinicius Costa Gomes @ 2019-12-03 17:22 UTC (permalink / raw)
  To: Ivan Khoronzhuk, Jose Abreu
  Cc: netdev, Joao Pinto, Jesus Sanchez-Palencia, Vladimir Oltean

Hi,

Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> writes:

> On Tue, Dec 03, 2019 at 10:00:15AM +0000, Jose Abreu wrote:
>>Hi netdev,
>>
>>[ I added in Cc the people I know who work on TSN; please add
>>anyone interested ]
>>
>>We are currently using a very basic tool, which we called tperf, for
>>monitoring the CBS performance of Synopsys-based NICs. It is based on a
>>patchset Jesus submitted back in 2017, so credit goes to him and blame
>>on me :)
>>
>>The current version sends "dummy" AVTP packets and measures the
>>bandwidth at both the receiver and the sender. Using this tool in
>>conjunction with iperf3, we can check whether the CBS-reserved queues
>>behave correctly by reserving the priority traffic for AVTP packets.
>>
>>You can check out the tool at the following address:
>>	GitHub: https://github.com/joabreu/tperf
>>
>>We are open to extending this to more robust scenarios, so that we can
>>have a common tool for TSN testing that is both lightweight and
>>precise.
>>
>>Anyone interested in helping?

@Jose, that's really nice. I will play with it for sure.

>
> I also have a tool that already includes similar functionality.
>
> https://github.com/ikhorn/plget
>
> It also dates from around 2016-2017.
> Not ideal, but it has helped me a lot over the last few years. It also
> worked with XDP, but the libbpf library it uses is old by now and should
> be updated. Mostly it was used to measure latencies and to observe, via
> hardware timestamps, how packets are put on the line.
>
> I've used it for CBS and TAPRIO scheduler testing, observing the h/w
> timestamp of each packet and the closed and open gates, but the target
> board must support hardware timestamping for accurate results; that's
> why PTP packets were used.
>
> It also includes latency measurements based on both hardware and
> software timestamps.

@Ivan, I took a look at plget some time ago. One thing that I missed is
a test method that doesn't use the HW transmit timestamp, as retrieving
the TX timestamp can be quite slow, and makes it hard to measure latency
when sending something like ~10K packet/s.

What I mean is something like this: the transmit side takes a timestamp,
stores it in the packet being sent and sends the packet, the receive
side gets the HW receive timestamp (usually faster) and calculates the
latency based on the HW receive timestamp and the timestamp in the
packet. I know that you should take the propagation delay and other
stuff into account. But on back-to-back systems (at least) these other
delays should be quite small compared to the whole latency.
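The scheme described above can be sketched in a few lines; the helpers and the payload layout here are hypothetical, not an existing tperf or plget API:

```python
# Sketch of the embedded-timestamp latency method described above:
# the talker embeds its (software) TX timestamp in the payload; the
# listener subtracts it from the HW RX timestamp. Helpers and payload
# layout are hypothetical, not tperf/plget code.
import struct

TS_FMT = "!Q"  # one 64-bit nanosecond timestamp, network byte order

def build_payload(tx_ts_ns: int, data: bytes) -> bytes:
    """Talker side: prepend the TX timestamp to the payload."""
    return struct.pack(TS_FMT, tx_ts_ns) + data

def latency_ns(rx_hw_ts_ns: int, payload: bytes) -> int:
    """Listener side: HW RX timestamp minus the embedded TX timestamp.
    Propagation delay is ignored, as discussed above."""
    (tx_ts_ns,) = struct.unpack_from(TS_FMT, payload)
    return rx_hw_ts_ns - tx_ts_ns

pkt = build_payload(1_000_000_000, b"avtp-dummy")
print(latency_ns(1_000_123_456, pkt))  # -> 123456
```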

Do you think adding a test method similar to this would make sense for
plget?


Cheers,
--
Vinicius


* Re: tperf: An initial TSN Performance Utility
  2019-12-03 17:22   ` Vinicius Costa Gomes
@ 2019-12-03 19:05     ` Ivan Khoronzhuk
  0 siblings, 0 replies; 6+ messages in thread
From: Ivan Khoronzhuk @ 2019-12-03 19:05 UTC (permalink / raw)
  To: Vinicius Costa Gomes
  Cc: Jose Abreu, netdev, Joao Pinto, Jesus Sanchez-Palencia, Vladimir Oltean

On Tue, Dec 03, 2019 at 09:22:55AM -0800, Vinicius Costa Gomes wrote:
>Hi,
>
>Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> writes:
>
>> On Tue, Dec 03, 2019 at 10:00:15AM +0000, Jose Abreu wrote:
>>>Hi netdev,
>>>
>>>[ I added in Cc the people I know who work on TSN; please add
>>>anyone interested ]
>>>
>>>We are currently using a very basic tool, which we called tperf, for
>>>monitoring the CBS performance of Synopsys-based NICs. It is based on a
>>>patchset Jesus submitted back in 2017, so credit goes to him and blame
>>>on me :)
>>>
>>>The current version sends "dummy" AVTP packets and measures the
>>>bandwidth at both the receiver and the sender. Using this tool in
>>>conjunction with iperf3, we can check whether the CBS-reserved queues
>>>behave correctly by reserving the priority traffic for AVTP packets.
>>>
>>>You can check out the tool at the following address:
>>>	GitHub: https://github.com/joabreu/tperf
>>>
>>>We are open to extending this to more robust scenarios, so that we can
>>>have a common tool for TSN testing that is both lightweight and
>>>precise.
>>>
>>>Anyone interested in helping?
>
>@Jose, that's really nice. I will play with it for sure.
>
>>
>> I also have a tool that already includes similar functionality.
>>
>> https://github.com/ikhorn/plget
>>
>> It also dates from around 2016-2017.
>> Not ideal, but it has helped me a lot over the last few years. It also
>> worked with XDP, but the libbpf library it uses is old by now and should
>> be updated. Mostly it was used to measure latencies and to observe, via
>> hardware timestamps, how packets are put on the line.
>>
>> I've used it for CBS and TAPRIO scheduler testing, observing the h/w
>> timestamp of each packet and the closed and open gates, but the target
>> board must support hardware timestamping for accurate results; that's
>> why PTP packets were used.
>>
>> It also includes latency measurements based on both hardware and
>> software timestamps.
>
>@Ivan, I took a look at plget some time ago. One thing that I missed is
>a test method that doesn't use the HW transmit timestamp, as retrieving
>the TX timestamp can be quite slow, and makes it hard to measure latency
>when sending something like ~10K packet/s.

Usually I use a lower rate; it also depends on the robustness of the
implementation. On am5/am3/am6 TI SoCs I was able to retrieve hardware
timestamps at a 10K packets/s rate (500B packet size). Everything has
its limitations, and there are different modes for different purposes,
of course. Just measuring bandwidth is quite primitive for testing
shapers. But it's not a problem to add bandwidth measurements based just
on the application interval. Actually that is already added, but with a
priority order: hardware timestamps, falling back to software
timestamps, then application timestamps. With a little modification it
can do more.
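For reference, bandwidth measured purely on the application interval reduces to bytes over elapsed time between two application-side clock readings; a minimal sketch (hypothetical helper, not plget's actual code):

```python
# Sketch: bandwidth over the application interval, i.e. bytes seen
# between two application-side clock readings. Hypothetical helper,
# not plget's actual implementation.

def bandwidth_mbps(total_bytes: int, interval_ns: int) -> float:
    """Average bandwidth over the interval, in Mbit/s."""
    return total_bytes * 8 / (interval_ns / 1e9) / 1e6

# 12,500,000 bytes in one second -> 100 Mbit/s
print(bandwidth_mbps(12_500_000, 1_000_000_000))  # -> 100.0
```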

>
>What I mean is something like this: the transmit side takes a timestamp,
>stores it in the packet being sent and sends the packet, the receive
>side gets the HW receive timestamp (usually faster) and calculates the
>latency based on the HW receive timestamp and the timestamp in the
>packet. I know that you should take the propagation delay and other
>stuff into account. But on back-to-back systems (at least) these other
>delays should be quite small compared to the whole latency.
>
>Do you think adding a test method similar to this would make sense for
>plget?

First of all, I can add anything that is possible to add to plget.
It's quite specific to TSN, and any useful mode can be added.

However, for hardware timestamps there is no way to place the timestamp
in the packet itself, as it only becomes available after the packet was
sent. The method you've described is only possible with software
timestamps, but for that the driver also has to be modified. plget does
not assume driver modifications (except for XDP sockets for now, and
"bad" drivers). But the h/w timestamp could be sent with the next
packet....

If only software timestamps from the user application are used, it's not
a problem to add. But their accuracy is nothing compared with hardware
timestamps (I mean for latency, not bandwidth); only some averaged
values can come somewhat close to reality, and how accurate they are is
hard to say .... Frankly, for most latency/timestamp tests the method
you've described is not needed. With hardware timestamps I can measure
latency on a single board; the second board is needed only for the
connection. For instance, to get a picture of the cycle intervals for
taprio hardware offload, showing me each schedule, I used only the
hardware timestamps of the receiving board, looking at the inter-packet
gap between each received packet at the destination point. I don't even
need the PTP clocks to be synchronized; relative time is enough.
(There is also a problem with syncing them and getting hardware
timestamps for the data flow based on PTP at the same time .... but
that's another use case.)
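The receive-side observation above boils down to differencing consecutive HW RX timestamps; a sketch with hypothetical nanosecond values (not plget code):

```python
# Sketch: with only the listener's HW RX timestamps (relative time is
# enough, no PTP sync needed), the inter-packet gaps expose the taprio
# schedule. Timestamp values below are hypothetical.

def inter_packet_gaps(rx_ts_ns):
    """Gaps between consecutive receive timestamps, in ns."""
    return [b - a for a, b in zip(rx_ts_ns, rx_ts_ns[1:])]

# Three packets back-to-back in an open gate, then a closed-gate pause:
ts = [1_000, 13_000, 25_000, 525_000]
print(inter_packet_gaps(ts))  # -> [12000, 12000, 500000]
```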

Any kind of bandwidth measurement, based on priorities or other socket
flags, that is useful or represents a TSN use case can be added as a
separate mode. Mostly it was based on hardware timestamps, as I debugged
switchdev + CBS, taprio and more, measuring bandwidth and other
parameters. I've tried to keep it simple, but that's hard when you write
it for yourself. It should be simplified.

Nothing against tperf, it's a good idea! But the tool should offer
something specific rather than duplicate functionality; just measuring
bandwidth can already be done with iperf, for instance. Why not modify
iperf a little to get something similar, then? Or at least try that
first...

-- 
Regards,
Ivan Khoronzhuk

