DPDK-dev Archive on lore.kernel.org
* [dpdk-dev] [RFC]  DPDK Trace support
@ 2020-01-13 10:40 Jerin Jacob Kollanukkaran
  2020-01-13 11:00 ` Ray Kinsella
                   ` (2 more replies)
  0 siblings, 3 replies; 24+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2020-01-13 10:40 UTC (permalink / raw)
  To: dev
  Cc: Thomas Monjalon, David Marchand, Ferruh Yigit, Andrew Rybchenko,
	Ajit Khaparde, Qi Zhang, Xiaolong Ye, Jerin Jacob Kollanukkaran,
	Raslan Darawsheh, Maxime Coquelin, Tiwei Bie, Akhil Goyal,
	Jerin Jacob Kollanukkaran, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Bruce Richardson, Aaron Conole, Michael Santana,
	Harry van Haaren, Cristian Dumitrescu, Phil Yang, Joyce Kong,
	Mattias Rönnblom, Jan Viktorin, Gavin Hu,
	Jerin Jacob Kollanukkaran, Gavin Hu, David Christensen,
	Bruce Richardson, Konstantin Ananyev, Ferruh Yigit,
	Anatoly Burakov, Harini Ramakrishnan, Omar Cardona, Anand Rawat,
	Ranjit Menon, Olivier Matz, Andrew Rybchenko, Olivier Matz,
	Gage Eads, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
	Adrien Mazarguil, Nicolas Chautru, Declan Doherty, Akhil Goyal,
	Declan Doherty, Fiona Trahe, Ashish Gupta,
	Jerin Jacob Kollanukkaran, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Shreyansh Jain, Hemant Agrawal,
	Artem V. Andreev, Andrew Rybchenko, Jerin Jacob Kollanukkaran,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Hemant Agrawal, Sachin Saxena, Stephen Hemminger, Ferruh Yigit,
	Chas Williams, Ferruh Yigit, John W. Linville, Xiaolong Ye,
	Qi Zhang, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Ajit Khaparde, Somnath Kotur,
	Jerin Jacob Kollanukkaran, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Jerin Jacob Kollanukkaran,
	Rahul Lakkireddy, John Daley, Hyong Youb Kim, Wei Hu (Xavier),
	Min Hu (Connor), Yisen Zhuang, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Konstantin Ananyev, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Xiao Wang, Qiming Yang, Wenzhuo Lu,
	Rosen Xu, Tomasz Duszynski, Liron Himi, Zyta Szpak, Liron Himi,
	Jerin Jacob Kollanukkaran, Nithin Kumar Dabilpuram,
	Kiran Kumar Kokkilagadda, Matan Azrad, Shahaf Shuler,
	Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, Matan Azrad,
	Stephen Hemminger, K. Y. Srinivasan, Haiyang Zhang, Jan Remes,
	Jan Remes, Heinrich Kuhn, Jan Gutter, Hemant Agrawal,
	Sachin Saxena, Hemant Agrawal, Sachin Saxena, Gagandeep Singh,
	Sachin Saxena, Gagandeep Singh, Akhil Goyal, Rasesh Mody,
	Shahed Shaikh, Rasesh Mody, Shahed Shaikh, Andrew Rybchenko,
	Yong Wang, Maxime Coquelin, Tiwei Bie, Zhihong Wang,
	Maxime Coquelin, Tiwei Bie, Zhihong Wang, Maxime Coquelin,
	Tiwei Bie, Zhihong Wang, Steven Webster, Matt Peters,
	Ferruh Yigit, Keith Wiles, Ferruh Yigit, Bruce Richardson,
	Tetsuya Mukawa, Gaetan Rivet, Jasvinder Singh,
	Cristian Dumitrescu, Jakub Grajciar, Ravi Kumar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Declan Doherty, Pablo de Lara,
	Declan Doherty, Pablo de Lara, John Griffin, Fiona Trahe,
	Deepak Kumar Jain, Pablo de Lara, Tomasz Duszynski,
	Michael Shamis, Liron Himi, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Anoob Joseph, Declan Doherty,
	Gagandeep Singh, Hemant Agrawal, Akhil Goyal, Hemant Agrawal,
	Akhil Goyal, Hemant Agrawal, Declan Doherty, Pablo de Lara,
	Jay Zhou, Pablo de Lara, Ashish Gupta, Fiona Trahe, Lee Daly,
	Sunila Sahu, Jerin Jacob Kollanukkaran, Hemant Agrawal,
	Nipun Gupta, Hemant Agrawal, Nipun Gupta, Harry van Haaren,
	Mattias Rönnblom, Liang Ma, Peter Mccarthy, Rosen Xu,
	Tianfei zhang, Bruce Richardson, Satha Koteswara Rao Kottidi,
	Vamsi Krishna Attunuru, Xiaoyun Li, Jingjing Wu, Olivier Matz,
	Jasvinder Singh, Konstantin Ananyev, Konstantin Ananyev,
	Bernard Iremonger, Vladimir Medvedkin, Bernard Iremonger,
	David Hunt, Reshma Pattan, Cristian Dumitrescu, Jasvinder Singh,
	Reshma Pattan, Cristian Dumitrescu, Konstantin Ananyev,
	Byron Marohn, Sameh Gobriel, Vladimir Medvedkin, Yipeng Wang,
	Sameh Gobriel, Vladimir Medvedkin, Honnappa Nagarahalli,
	Gaetan Rivet, David Hunt, Robert Sanford, Erik Gabriel Carrillo,
	Reshma Pattan, Kevin Laatz, Konstantin Ananyev, Reshma Pattan,
	Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, Declan Doherty,
	Jerin Jacob Kollanukkaran, Maryam Tahhan, Reshma Pattan,
	Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Bruce Richardson, Tomasz Kantecki, Sunil Kumar Kori,
	Pavan Nikhilesh Bhagavatula, John McNamara, Kirill Rybalchenko,
	Bruce Richardson, John McNamara, Harry van Haaren,
	Bruce Richardson, John McNamara, Xiaoyun Li, Prasun Kapoor,
	Keith Wiles, Kadam, Pallavi

Hi All,

I would like to add tracing support for DPDK.
I am planning to add this support in v20.05 release.

This RFC attempts to get feedback from the community on

a) Tracing use cases.
b) Tracing requirements.
c) Implementation choices.
d) Trace format.

Use-cases
---------
- In most cases, the DPDK provider will not have access to the DPDK customer's applications.
To debug/analyze slow-path and fast-path DPDK API usage in the field,
we need integrated trace support in DPDK.

- We need a low-overhead, fast-path, multi-core PMD debugging/analysis
infrastructure in DPDK to fix functional and performance issues in PMDs.

- Post-trace analysis tools can derive various system-wide statistics, such
as CPU idle time, from the timestamps recorded in the trace.


Requirements:
-------------
- Support for Linux, FreeBSD and Windows OS
- Open trace format
- Multi-platform Open source trace viewer
- Very low overhead trace API for DPDK fast-path tracing/debugging.
- Dynamic enable/disable of trace events


To enable trace support in DPDK, the following items need to be worked out:

a) Add the DPDK trace points in the DPDK source code.

- This includes updating DPDK functions such as rte_eth_dev_configure(),
rte_eth_dev_start(), and rte_eth_dev_rx_burst() to emit trace events.
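As a rough illustration of item (a), a trace point could be a thin inline wrapper guarded by a per-event enable flag, so that disabled points cost only a predicted branch. Everything below (names, counters, layout) is a hypothetical sketch, not an existing DPDK API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-event enable flag; a real implementation would let the
 * user toggle this at runtime (the "dynamic enable/disable" requirement). */
static volatile int trace_ethdev_configure_enabled = 1;
static int trace_emitted; /* illustration-only counter of emitted events */

static inline void
dpdk_trace_ethdev_configure(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	if (!trace_ethdev_configure_enabled)
		return; /* disabled trace point: only a predicted branch */
	/* A real emitter would serialize port_id/nb_rxq/nb_txq plus a
	 * timestamp into the trace buffer here. */
	(void)port_id; (void)nb_rxq; (void)nb_txq;
	trace_emitted++;
}

/* A function like rte_eth_dev_configure() would then call the hook: */
static int
eth_dev_configure(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	dpdk_trace_ethdev_configure(port_id, nb_rxq, nb_txq);
	/* ... existing configuration logic ... */
	return 0;
}
```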

b) Choosing a suitable serialization format

- The Common Trace Format (CTF) is an open format and language for describing
trace formats. This enables tool reuse; line-textual (babeltrace) and
graphical (Trace Compass) variants already exist.

CTF should look familiar to C programmers but adds stronger typing.
See "CTF - A Flexible, High-performance Binary Trace Format":

https://diamon.org/ctf/
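For a flavor of the format: CTF events are described in TSDL, CTF's C-like metadata language. The fragment below is purely illustrative of what a DPDK event declaration might look like; the event name, id, and field names are assumptions, and real TSDL metadata would also need typealias declarations for the integer types used:

```
event {
	name = "dpdk:ethdev_configure";
	id = 1;
	stream_id = 0;
	fields := struct {
		uint16_t _port_id;
		uint16_t _nb_rx_q;
		uint16_t _nb_tx_q;
	};
};
```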

c) Writing the on-target serialization code

See the section below (LTTng CTF trace emitter vs. DPDK-specific CTF trace emitter).
 
d) Deciding on and writing the I/O transport mechanics

For performance reasons, the trace buffer should be backed by hugepages and
written out via file I/O.
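A minimal sketch of item (d), under the assumption of one private buffer per lcore carved from hugepage memory, with a separate thread flushing it to a file later; the record layout and names are illustrative only:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TRACE_BUF_SZ 4096 /* real buffer would be hugepage-backed, per lcore */

struct trace_hdr {
	uint64_t timestamp; /* e.g. TSC at emit time */
	uint16_t event_id;  /* index into the CTF event metadata */
};

static uint8_t trace_buf[TRACE_BUF_SZ];
static size_t trace_off;

/* Append one event record (header + payload) to the trace buffer. */
static inline int
trace_write(uint16_t event_id, uint64_t tsc, const void *payload, size_t len)
{
	size_t rec = sizeof(struct trace_hdr) + len;

	if (trace_off + rec > TRACE_BUF_SZ)
		return -1; /* full: real code would wrap (ring) or flush */

	struct trace_hdr h = { .timestamp = tsc, .event_id = event_id };
	memcpy(trace_buf + trace_off, &h, sizeof(h));
	memcpy(trace_buf + trace_off + sizeof(h), payload, len);
	trace_off += rec;
	return 0;
}
```

Keeping the buffer per-lcore avoids any locking or atomics on the fast path; the flush side only needs to observe a consistent write offset.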

e) Writing the PC-side deserializer/parser

Both babeltrace (CLI tool) and Trace Compass (GUI tool) support CTF.
See: 
https://lttng.org/viewers/

f) Writing tools for filtering and presentation.

See item (e)


LTTng CTF trace emitter vs. DPDK-specific CTF trace emitter
-----------------------------------------------------------

I have written a performance evaluation application to measure the overhead
of the LTTng CTF emitter (the fast-path infrastructure used by the
https://lttng.org/ library to emit traces):

https://github.com/jerinjacobk/lttng-overhead
https://github.com/jerinjacobk/lttng-overhead/blob/master/README

I could improve performance by 30% by adding a DPDK-based plugin for
get_clock() and get_cpu(). Here are the performance numbers, with the plugin,
on x86 and the various arm64 boards I have access to:

On high-end x86, it comes to around 236 cycles / ~100 ns @ 2.4 GHz (see the
last line in the log, ZERO_ARG).
On arm64, it varies from 312 cycles to 1100 cycles, depending on the class of CPU.
In short, depending on the CPU's IPC capabilities, the cost would be around
100 ns to 400 ns for a single void trace (a trace without any argument).


[lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
CPU Timer freq is 2600.000000MHz
NOP: cycles=0.194834 ns=0.074936
GET_CLOCK: cycles=47.854658 ns=18.405638
GET_CPU: cycles=30.995892 ns=11.921497
ZERO_ARG: cycles=236.945113 ns=91.132736


We will have only 16.75 ns to process a packet at 59.2 Mpps (40 Gbps), so IMO
the LTTng CTF emitter may not fit the DPDK fast-path purpose, due to the cost
of generic LTTng features.
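For reference, the quoted time budget follows from simple wire arithmetic: a minimal 64-byte Ethernet frame occupies 64 + 20 bytes (preamble plus inter-frame gap) on the wire, so at 40 Gbps each packet slot is about 84 * 8 / 40 ≈ 16.8 ns:

```c
#include <assert.h>
#include <math.h>

/* Per-packet time budget in ns for a given line rate (Gbps) and frame size.
 * 20 bytes of preamble + inter-frame gap are counted per frame. */
static double pkt_budget_ns(double gbps, int frame_bytes)
{
	double wire_bits = (frame_bytes + 20) * 8.0;
	return wire_bits / gbps; /* Gbps == bits per ns */
}
```

pkt_budget_ns(40, 64) gives ~16.8 ns, matching the ~16.75 ns figure above; any trace emitter costing ~100 ns clearly cannot run per packet at this rate.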

One option could be to have a native CTF emitter in EAL/DPDK that emits the
trace into a hugepage buffer. I think it would cost only a handful of cycles
if we limit the features to the requirements above:

The upside of using the LTTng CTF emitter:
a) No need to write a new CTF trace emitter (item (c) above).

The downsides of the LTTng CTF emitter:
a) Performance (see above).
b) Lack of Windows OS support. It appears to have only basic FreeBSD support.
c) A DPDK library dependency on LTTng for tracing.

So it is probably good to have a native CTF emitter in DPDK and reuse the
open-source trace viewer (babeltrace and Trace Compass) and format (CTF)
infrastructure. I think it would be the best of both worlds.

Any thoughts on this subject? Based on the community feedback, I can work on the patch for v20.05.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC]  DPDK Trace support
  2020-01-13 10:40 [dpdk-dev] [RFC] DPDK Trace support Jerin Jacob Kollanukkaran
@ 2020-01-13 11:00 ` Ray Kinsella
  2020-01-13 12:04   ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
  2020-01-18 15:14   ` [dpdk-dev] " dave
  2020-01-13 13:05 ` Bruce Richardson
  2020-01-27 16:12 ` Aaron Conole
  2 siblings, 2 replies; 24+ messages in thread
From: Ray Kinsella @ 2020-01-13 11:00 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, dpdk-dev, dave

Hi Jerin,

Any idea why lttng performance is so poor?
I would have naturally gone there to benefit from the existing toolchain.

Have you looked at the FD.io logging/tracing infrastructure for inspiration?
https://wiki.fd.io/view/VPP/elog

Ray K

On 13/01/2020 10:40, Jerin Jacob Kollanukkaran wrote:
> Hi All,
> 
> I would like to add tracing support for DPDK.
> I am planning to add this support in v20.05 release.
> 
> This RFC attempts to get feedback from the community on
> 
> a) Tracing Use cases.
> b) Tracing Requirements.
> b) Implementation choices.
> c) Trace format.
> 
> Use-cases
> ---------
> - Most of the cases, The DPDK provider will not have access to the DPDK customer applications.
> To debug/analyze the slow path and fast path DPDK API usage from the field,
> we need to have integrated trace support in DPDK.
> 
> - Need a low overhead Fast path multi-core PMD driver debugging/analysis
> infrastructure in DPDK to fix the functional and performance issue(s) of PMD.
> 
> - Post trace analysis tools can provide various status across the system such
> as cpu_idle() using the timestamp added in the trace.
> 
> 
> Requirements:
> -------------
> - Support for Linux, FreeBSD and Windows OS
> - Open trace format
> - Multi-platform Open source trace viewer
> - Absolute low overhead trace API for DPDK fast path tracing/debugging.
> - Dynamic enable/disable of trace events
> 
> 
> To enable trace support in DPDK, following items need to work out: 
> 
> a) Add the DPDK trace points in the DPDK source code.
> 
> - This includes updating DPDK functions such as,
> rte_eth_dev_configure(), rte_eth_dev_start(), rte_eth_dev_rx_burst() to emit the trace.
> 
> b) Choosing suitable serialization-format
> 
> - Common Trace Format, CTF, is an open format and language to describe trace formats.
> This enables tool reuse, of which line-textual (babeltrace) and 
> graphical (TraceCompass) variants already exist.
> 
> CTF should look familiar to C programmers but adds stronger typing. 
> See CTF - A Flexible, High-performance Binary Trace Format.
> 
> https://diamon.org/ctf/
> 
> c) Writing the on-target serialization code,
> 
> See the section below.(Lttng CTF trace emitter vs DPDK specific CTF trace emitter)
>  
> d) Deciding on and writing the I/O transport mechanics,
> 
> For performance reasons, it should be backed by a huge-page and write to file IO.
> 
> e) Writing the PC-side deserializer/parser,
> 
> Both the babletrace(CLI tool) and Trace Compass(GUI tool) support CTF.
> See: 
> https://lttng.org/viewers/
> 
> f) Writing tools for filtering and presentation.
> 
> See item (e)
> 
> 
> Lttng CTF trace emitter vs DPDK specific CTF trace emitter
> ----------------------------------------------------------
> 
> I have written a performance evaluation application to measure the overhead
> of Lttng CTF emitter(The fastpath infrastructure used by https://lttng.org/ library to emit the trace)
> 
> https://github.com/jerinjacobk/lttng-overhead
> https://github.com/jerinjacobk/lttng-overhead/blob/master/README
> 
> I could improve the performance by 30% by adding the "DPDK"
> based plugin for get_clock() and get_cpu(),
> Here are the performance numbers after adding the plugin on 
> x86 and various arm64 board that I have access to,
> 
> On high-end x86, it comes around 236 cycles/~100ns @ 2.4GHz (See the last line in the log(ZERO_ARG)) 
> On arm64, it varies from 312 cycles to 1100 cycles(based on the class of CPU).
> In short, Based on the "IPC capabilities", The cost would be around 100ns to 400ns
> for single void trace(a trace without any argument)
> 
> 
> [lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
> make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
> make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
> EAL: Detected 56 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:01:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> CPU Timer freq is 2600.000000MHz
> NOP: cycles=0.194834 ns=0.074936
> GET_CLOCK: cycles=47.854658 ns=18.405638
> GET_CPU: cycles=30.995892 ns=11.921497
> ZERO_ARG: cycles=236.945113 ns=91.132736
> 
> 
> We will have only 16.75ns to process 59.2 mpps(40Gbps), So IMO, Lttng CTF emitter
> may not fit the DPDK fast path purpose due to the cost associated with generic Lttng features.
> 
> One option could be to have, native CTF emitter in EAL/DPDK to emit the
> trace in a hugepage. I think it would be a handful of cycles if we limit the features
> to the requirements above:
> 
> The upside of using Lttng CTF emitter:
> a) No need to write a new CTF trace emitter(the item (c))
> 
> The downside of Lttng CTF emitter(the item (c))
> a) performance issue(See above)
> b) Lack of Windows OS support. It looks like, it has basic FreeBSD support.
> c) dpdk library dependency to lttng for trace.
> 
> So, Probably it good to have native CTF emitter in DPDK and reuse all
> open-source trace viewer(babeltrace and  TraceCompass) and format(CTF) infrastructure.
> I think, it would be best of both world.
> 
> Any thoughts on this subject? Based on the community feedback, I can work on the patch for v20.05.
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [EXT] Re: [RFC]  DPDK Trace support
  2020-01-13 11:00 ` Ray Kinsella
@ 2020-01-13 12:04   ` " Jerin Jacob Kollanukkaran
  2020-01-18 15:14   ` [dpdk-dev] " dave
  1 sibling, 0 replies; 24+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2020-01-13 12:04 UTC (permalink / raw)
  To: Ray Kinsella, dpdk-dev, dave

> -----Original Message-----
> From: Ray Kinsella <mdr@ashroe.eu>
> Sent: Monday, January 13, 2020 4:30 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dpdk-dev
> <dev@dpdk.org>; dave@barachs.net
> Subject: [EXT] Re: [RFC] [dpdk-dev] DPDK Trace support
> 
> External Email
> 
> ----------------------------------------------------------------------
> Hi Jerin,

Hi Ray,

> 
> Any idea why lttng performance is so poor?

100 ns is the expected number based on LTTng presentations, and that 100 ns is
for high-end x86 machines.
Here is the perf output. The overhead appears to come from the ring buffer
implementation, due to its feature set. Moreover, for a normal Linux
application 100 ns may not be bad; it is just that DPDK needs more.

  45.07%  liblttng-ust.so.0.0.0             [.] lttng_event_reserve
  25.48%  liblttng-ust.so.0.0.0             [.] lttng_event_commit
   6.30%  calibrate                         [.] __event_probe__dpdk___zero_arg
   5.05%  calibrate                         [.] __worker_ZERO_ARG
   4.87%  liblttng-ust-tracepoint.so.0.0.0  [.] tp_rcu_read_lock_bp
   4.79%  liblttng-ust-tracepoint.so.0.0.0  [.] tp_rcu_read_unlock_bp
   4.43%  ld-2.29.so                        [.] _dl_tlsdesc_return
   1.94%  calibrate                         [.] plugin_getcpu
   1.42%  calibrate                         [.] plugin_read64
   0.65%  liblttng-ust-tracepoint.so.0.0.0  [.] tp_rcu_dereference_sym_bp

Note:
- Performance is even worse if we don't use snapshot mode and the DPDK plugin
for get_clock and get_cpu. These numbers are based on the optimizations
provided by LTTng in the framework.

> I would have naturally gone there to benefit from the existing toolchain.

Yes, that's the reason why I started with LTTng. After the integration, testpmd
performance dipped. I then added the following test case to verify the overhead:
https://github.com/jerinjacobk/lttng-overhead
 
> Have you looked at the FD.io logging/tracing infrastructure for inspiration?

Based on my understanding, VPP has a VPP-specific trace format, trace emitter,
and trace viewer. Since LTTng uses CTF, which is open, we could leverage the
open-source viewers and post-processing tracing tools that support CTF. A
high-performance trace emitter appears to be the only piece missing from LTTng
for us.

Of course, we can use the FD.io logging documentation for reference.



> 
> Ray K
> 
> On 13/01/2020 10:40, Jerin Jacob Kollanukkaran wrote:
> > Hi All,
> >
> > I would like to add tracing support for DPDK.
> > I am planning to add this support in v20.05 release.
> >
> > This RFC attempts to get feedback from the community on
> >
> > a) Tracing Use cases.
> > b) Tracing Requirements.
> > b) Implementation choices.
> > c) Trace format.
> >
> > Use-cases
> > ---------
> > - Most of the cases, The DPDK provider will not have access to the DPDK
> customer applications.
> > To debug/analyze the slow path and fast path DPDK API usage from the
> > field, we need to have integrated trace support in DPDK.
> >
> > - Need a low overhead Fast path multi-core PMD driver
> > debugging/analysis infrastructure in DPDK to fix the functional and
> performance issue(s) of PMD.
> >
> > - Post trace analysis tools can provide various status across the
> > system such as cpu_idle() using the timestamp added in the trace.
> >
> >
> > Requirements:
> > -------------
> > - Support for Linux, FreeBSD and Windows OS
> > - Open trace format
> > - Multi-platform Open source trace viewer
> > - Absolute low overhead trace API for DPDK fast path tracing/debugging.
> > - Dynamic enable/disable of trace events
> >
> >
> > To enable trace support in DPDK, following items need to work out:
> >
> > a) Add the DPDK trace points in the DPDK source code.
> >
> > - This includes updating DPDK functions such as,
> > rte_eth_dev_configure(), rte_eth_dev_start(), rte_eth_dev_rx_burst() to emit
> the trace.
> >
> > b) Choosing suitable serialization-format
> >
> > - Common Trace Format, CTF, is an open format and language to describe
> trace formats.
> > This enables tool reuse, of which line-textual (babeltrace) and
> > graphical (TraceCompass) variants already exist.
> >
> > CTF should look familiar to C programmers but adds stronger typing.
> > See CTF - A Flexible, High-performance Binary Trace Format.
> >
> > https://diamon.org/ctf/
> >
> > c) Writing the on-target serialization code,
> >
> > See the section below.(Lttng CTF trace emitter vs DPDK specific CTF
> > trace emitter)
> >
> > d) Deciding on and writing the I/O transport mechanics,
> >
> > For performance reasons, it should be backed by a huge-page and write to file
> IO.
> >
> > e) Writing the PC-side deserializer/parser,
> >
> > Both the babletrace(CLI tool) and Trace Compass(GUI tool) support CTF.
> > See:
> > https://lttng.org/viewers/
> >
> > f) Writing tools for filtering and presentation.
> >
> > See item (e)
> >
> >
> > Lttng CTF trace emitter vs DPDK specific CTF trace emitter
> > ----------------------------------------------------------
> >
> > I have written a performance evaluation application to measure the
> > overhead of Lttng CTF emitter(The fastpath infrastructure used by
> > https://lttng.org/ library to emit the trace)
> >
> > https://github.com/jerinjacobk/lttng-overhead
> > https://github.com/jerinjacobk/lttng-overhead/blob/master/README
> >
> > I could improve the performance by 30% by adding the "DPDK"
> > based plugin for get_clock() and get_cpu(), Here are the performance
> > numbers after adding the plugin on
> > x86 and various arm64 board that I have access to,
> >
> > On high-end x86, it comes around 236 cycles/~100ns @ 2.4GHz (See the
> > last line in the log(ZERO_ARG)) On arm64, it varies from 312 cycles to 1100
> cycles(based on the class of CPU).
> > In short, Based on the "IPC capabilities", The cost would be around
> > 100ns to 400ns for single void trace(a trace without any argument)
> >
> >
> > [lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
> > make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
> > make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
> > EAL: Detected 56 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:01:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device 0000:01:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > CPU Timer freq is 2600.000000MHz
> > NOP: cycles=0.194834 ns=0.074936
> > GET_CLOCK: cycles=47.854658 ns=18.405638
> > GET_CPU: cycles=30.995892 ns=11.921497
> > ZERO_ARG: cycles=236.945113 ns=91.132736
> >
> >
> > We will have only 16.75ns to process 59.2 mpps(40Gbps), So IMO, Lttng
> > CTF emitter may not fit the DPDK fast path purpose due to the cost
> associated with generic Lttng features.
> >
> > One option could be to have, native CTF emitter in EAL/DPDK to emit
> > the trace in a hugepage. I think it would be a handful of cycles if we
> > limit the features to the requirements above:
> >
> > The upside of using Lttng CTF emitter:
> > a) No need to write a new CTF trace emitter(the item (c))
> >
> > The downside of Lttng CTF emitter(the item (c))
> > a) performance issue(See above)
> > b) Lack of Windows OS support. It looks like, it has basic FreeBSD support.
> > c) dpdk library dependency to lttng for trace.
> >
> > So, Probably it good to have native CTF emitter in DPDK and reuse all
> > open-source trace viewer(babeltrace and  TraceCompass) and format(CTF)
> infrastructure.
> > I think, it would be best of both world.
> >
> > Any thoughts on this subject? Based on the community feedback, I can work
> on the patch for v20.05.
> >

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC]  DPDK Trace support
  2020-01-13 10:40 [dpdk-dev] [RFC] DPDK Trace support Jerin Jacob Kollanukkaran
  2020-01-13 11:00 ` Ray Kinsella
@ 2020-01-13 13:05 ` Bruce Richardson
  2020-01-13 14:46   ` Jerin Jacob
  2020-01-27 16:12 ` Aaron Conole
  2 siblings, 1 reply; 24+ messages in thread
From: Bruce Richardson @ 2020-01-13 13:05 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran
  Cc: dev, Thomas Monjalon, David Marchand, Ferruh Yigit,
	Andrew Rybchenko, Ajit Khaparde, Qi Zhang, Xiaolong Ye,
	Raslan Darawsheh, Maxime Coquelin, Tiwei Bie, Akhil Goyal,
	Luca Boccassi, Kevin Traynor, maintainers, John McNamara,
	Marko Kovacevic, Ray Kinsella, Aaron Conole, Michael Santana,
	Harry van Haaren, Cristian Dumitrescu, Phil Yang, Joyce Kong,
	Mattias Rönnblom, Jan Viktorin, Gavin Hu, David Christensen,
	Konstantin Ananyev, Anatoly Burakov, Harini Ramakrishnan,
	Omar Cardona, Anand Rawat, Ranjit Menon, Olivier Matz, Gage Eads,
	Adrien Mazarguil, Nicolas Chautru, Declan Doherty, Fiona Trahe,
	Ashish Gupta, Erik Gabriel Carrillo, Abhinandan Gujjar,
	Shreyansh Jain, Hemant Agrawal, Artem V. Andreev,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Sachin Saxena, Stephen Hemminger, Chas Williams,
	John W. Linville, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier), Min Hu (Connor), Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Gaetan Rivet, Jasvinder Singh, Jakub Grajciar,
	Ruifeng Wang, Anoob Joseph, Fan Zhang, Pablo de Lara,
	John Griffin, Deepak Kumar Jain, Michael Shamis,
	Nagadheeraj Rottela, Srikanth Jampala, Ankur Dwivedi, Jay Zhou,
	Lee Daly, Sunila Sahu, Nipun Gupta, Liang Ma, Peter Mccarthy,
	Tianfei zhang, Satha Koteswara Rao Kottidi, Xiaoyun Li,
	Bernard Iremonger, Vladimir Medvedkin, David Hunt, Reshma Pattan,
	Byron Marohn, Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli,
	Robert Sanford, Kevin Laatz, Maryam Tahhan, Ori Kam,
	Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
	Pavan Nikhilesh Bhagavatula, Kirill Rybalchenko, Kadam, Pallavi

On Mon, Jan 13, 2020 at 10:40:13AM +0000, Jerin Jacob Kollanukkaran wrote:
> Hi All,
> 
> I would like to add tracing support for DPDK.
> I am planning to add this support in v20.05 release.
> 
> This RFC attempts to get feedback from the community on
> 
> a) Tracing Use cases.
> b) Tracing Requirements.
> b) Implementation choices.
> c) Trace format.
> 
> Use-cases
> ---------
> - Most of the cases, The DPDK provider will not have access to the DPDK customer applications.
> To debug/analyze the slow path and fast path DPDK API usage from the field,
> we need to have integrated trace support in DPDK.
> 
> - Need a low overhead Fast path multi-core PMD driver debugging/analysis
> infrastructure in DPDK to fix the functional and performance issue(s) of PMD.
> 
> - Post trace analysis tools can provide various status across the system such
> as cpu_idle() using the timestamp added in the trace.
> 
> 
> Requirements:
> -------------
> - Support for Linux, FreeBSD and Windows OS
> - Open trace format
> - Multi-platform Open source trace viewer
> - Absolute low overhead trace API for DPDK fast path tracing/debugging.
> - Dynamic enable/disable of trace events
> 
> 
> To enable trace support in DPDK, following items need to work out: 
> 
> a) Add the DPDK trace points in the DPDK source code.
> 
> - This includes updating DPDK functions such as,
> rte_eth_dev_configure(), rte_eth_dev_start(), rte_eth_dev_rx_burst() to emit the trace.
> 
> b) Choosing suitable serialization-format
> 
> - Common Trace Format, CTF, is an open format and language to describe trace formats.
> This enables tool reuse, of which line-textual (babeltrace) and 
> graphical (TraceCompass) variants already exist.
> 
> CTF should look familiar to C programmers but adds stronger typing. 
> See CTF - A Flexible, High-performance Binary Trace Format.
> 
> https://diamon.org/ctf/
> 
> c) Writing the on-target serialization code,
> 
> See the section below.(Lttng CTF trace emitter vs DPDK specific CTF trace emitter)
>  
> d) Deciding on and writing the I/O transport mechanics,
> 
> For performance reasons, it should be backed by a huge-page and write to file IO.
> 
> e) Writing the PC-side deserializer/parser,
> 
> Both the babletrace(CLI tool) and Trace Compass(GUI tool) support CTF.
> See: 
> https://lttng.org/viewers/
> 
> f) Writing tools for filtering and presentation.
> 
> See item (e)
> 
> 
> Lttng CTF trace emitter vs DPDK specific CTF trace emitter
> ----------------------------------------------------------
> 
> I have written a performance evaluation application to measure the overhead
> of Lttng CTF emitter(The fastpath infrastructure used by https://lttng.org/ library to emit the trace)
> 
> https://github.com/jerinjacobk/lttng-overhead
> https://github.com/jerinjacobk/lttng-overhead/blob/master/README
> 
> I could improve the performance by 30% by adding the "DPDK"
> based plugin for get_clock() and get_cpu(),
> Here are the performance numbers after adding the plugin on 
> x86 and various arm64 board that I have access to,
> 
> On high-end x86, it comes around 236 cycles/~100ns @ 2.4GHz (See the last line in the log(ZERO_ARG)) 
> On arm64, it varies from 312 cycles to 1100 cycles(based on the class of CPU).
> In short, Based on the "IPC capabilities", The cost would be around 100ns to 400ns
> for single void trace(a trace without any argument)
> 
> 
> [lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
> make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
> make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
> EAL: Detected 56 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:01:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> CPU Timer freq is 2600.000000MHz
> NOP: cycles=0.194834 ns=0.074936
> GET_CLOCK: cycles=47.854658 ns=18.405638
> GET_CPU: cycles=30.995892 ns=11.921497
> ZERO_ARG: cycles=236.945113 ns=91.132736
> 
> 
> We will have only 16.75 ns to process 59.2 Mpps (40 Gbps), so IMO the LTTng CTF emitter
> may not fit the DPDK fast-path purpose due to the cost associated with generic LTTng features.
> 
> One option could be to have a native CTF emitter in EAL/DPDK that emits the
> trace into a hugepage. I think it would cost only a handful of cycles if we limit the features
> to the requirements above.
> 
> The upside of using the LTTng CTF emitter:
> a) No need to write a new CTF trace emitter (item (c)).
> 
> The downsides of the LTTng CTF emitter (item (c)):
> a) Performance overhead (see above).
> b) Lack of Windows OS support; it appears to have only basic FreeBSD support.
> c) A DPDK library dependency on LTTng for tracing.
> 
> So it is probably good to have a native CTF emitter in DPDK and reuse all the
> open-source trace viewer (babeltrace and TraceCompass) and format (CTF) infrastructure.
> I think it would be the best of both worlds.
> 
> Any thoughts on this subject? Based on the community feedback, I can work on the patch for v20.05.

Forgive my ignorance of LTTng, but is there a concept of
enabling/disabling the trace points? If so, the overhead you refer to
is presumably with the trace enabled?

/Bruce

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-13 13:05 ` Bruce Richardson
@ 2020-01-13 14:46   ` Jerin Jacob
  2020-01-13 14:58     ` Bruce Richardson
  0 siblings, 1 reply; 24+ messages in thread
From: Jerin Jacob @ 2020-01-13 14:46 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Jerin Jacob Kollanukkaran, dev, Thomas Monjalon, David Marchand,
	Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Qi Zhang,
	Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin, Tiwei Bie,
	Akhil Goyal, Luca Boccassi, Kevin Traynor, maintainers,
	John McNamara, Marko Kovacevic, Ray Kinsella, Aaron Conole,
	Michael Santana, Harry van Haaren, Cristian Dumitrescu,
	Phil Yang, Joyce Kong, Mattias Rönnblom, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Ranjit Menon,
	Olivier Matz, Gage Eads, Adrien Mazarguil, Nicolas Chautru,
	Declan Doherty, Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Shreyansh Jain, Hemant Agrawal,
	Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Gaetan Rivet, Jasvinder Singh, Jakub Grajciar,
	Ruifeng Wang, Anoob Joseph, Fan Zhang, Pablo de Lara,
	John Griffin, Deepak Kumar Jain, Michael Shamis,
	Nagadheeraj Rottela, Srikanth Jampala, Ankur Dwivedi, Jay Zhou,
	Lee Daly, Sunila Sahu, Nipun Gupta, Liang Ma, Peter Mccarthy,
	Tianfei zhang, Satha Koteswara Rao Kottidi, Xiaoyun Li,
	Bernard Iremonger, Vladimir Medvedkin, David Hunt, Reshma Pattan,
	Byron Marohn, Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli,
	Robert Sanford, Kevin Laatz, Maryam Tahhan, Ori Kam,
	Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
	Pavan Nikhilesh Bhagavatula, Kirill Rybalchenko, Kadam, Pallavi

On Mon, Jan 13, 2020 at 6:36 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
>
> > So it is probably good to have a native CTF emitter in DPDK and reuse all the
> > open-source trace viewer (babeltrace and TraceCompass) and format (CTF) infrastructure.
> > I think it would be the best of both worlds.
> >
> > Any thoughts on this subject? Based on the community feedback, I can work on the patch for v20.05.
>
> Forgive my ignorance of LTTng, but is there a concept of
> enabling/disabling the trace points? If so, the overhead you refer to
> is presumably with the trace enabled?

Yes, this is when the trace is enabled. If the trace is disabled, then it
will cost only a handful of cycles.

> /Bruce

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-13 14:46   ` Jerin Jacob
@ 2020-01-13 14:58     ` Bruce Richardson
  2020-01-13 15:13       ` Jerin Jacob
  0 siblings, 1 reply; 24+ messages in thread
From: Bruce Richardson @ 2020-01-13 14:58 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Jerin Jacob Kollanukkaran, dev, Thomas Monjalon, David Marchand,
	Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Qi Zhang,
	Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin, Tiwei Bie,
	Akhil Goyal, Luca Boccassi, Kevin Traynor, maintainers,
	John McNamara, Marko Kovacevic, Ray Kinsella, Aaron Conole,
	Michael Santana, Harry van Haaren, Cristian Dumitrescu,
	Phil Yang, Joyce Kong, Mattias Rönnblom, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Ranjit Menon,
	Olivier Matz, Gage Eads, Adrien Mazarguil, Nicolas Chautru,
	Declan Doherty, Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Shreyansh Jain, Hemant Agrawal,
	Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Gaetan Rivet, Jasvinder Singh, Jakub Grajciar,
	Ruifeng Wang, Anoob Joseph, Fan Zhang, Pablo de Lara,
	John Griffin, Deepak Kumar Jain, Michael Shamis,
	Nagadheeraj Rottela, Srikanth Jampala, Ankur Dwivedi, Jay Zhou,
	Lee Daly, Sunila Sahu, Nipun Gupta, Liang Ma, Peter Mccarthy,
	Tianfei zhang, Satha Koteswara Rao Kottidi, Xiaoyun Li,
	Bernard Iremonger, Vladimir Medvedkin, David Hunt, Reshma Pattan,
	Byron Marohn, Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli,
	Robert Sanford, Kevin Laatz, Maryam Tahhan, Ori Kam,
	Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
	Pavan Nikhilesh Bhagavatula, Kirill Rybalchenko, Kadam, Pallavi

On Mon, Jan 13, 2020 at 08:16:07PM +0530, Jerin Jacob wrote:
> On Mon, Jan 13, 2020 at 6:36 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> >
> > > So it is probably good to have a native CTF emitter in DPDK and reuse all the
> > > open-source trace viewer (babeltrace and TraceCompass) and format (CTF) infrastructure.
> > > I think it would be the best of both worlds.
> > >
> > > Any thoughts on this subject? Based on the community feedback, I can work on the patch for v20.05.
> >
> > Forgive my ignorance of LTTng, but is there a concept of
> > enabling/disabling the trace points? If so, the overhead you refer to
> > is presumably with the trace enabled?
> 
> Yes, this is when the trace is enabled. If the trace is disabled, then it
> will cost only a handful of cycles.
> 
Two follow-on questions:
1. Is the trace enable/disable dynamic at runtime?
2. Have you investigated how low the "handful of cycles" actually is?

While I think it is important to get the cost of tracing right down to make
it useful, the cost of tracing when it is not being used is even more
critical IMHO.

/Bruce

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-13 14:58     ` Bruce Richardson
@ 2020-01-13 15:13       ` Jerin Jacob
  2020-01-13 16:12         ` Bruce Richardson
  0 siblings, 1 reply; 24+ messages in thread
From: Jerin Jacob @ 2020-01-13 15:13 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Jerin Jacob Kollanukkaran, dev, Thomas Monjalon, David Marchand,
	Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Qi Zhang,
	Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin, Tiwei Bie,
	Akhil Goyal, Luca Boccassi, Kevin Traynor, maintainers,
	John McNamara, Marko Kovacevic, Ray Kinsella, Aaron Conole,
	Michael Santana, Harry van Haaren, Cristian Dumitrescu,
	Phil Yang, Joyce Kong, Mattias Rönnblom, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Ranjit Menon,
	Olivier Matz, Gage Eads, Adrien Mazarguil, Nicolas Chautru,
	Declan Doherty, Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Shreyansh Jain, Hemant Agrawal,
	Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Gaetan Rivet, Jasvinder Singh, Jakub Grajciar,
	Ruifeng Wang, Anoob Joseph, Fan Zhang, Pablo de Lara,
	John Griffin, Deepak Kumar Jain, Michael Shamis,
	Nagadheeraj Rottela, Srikanth Jampala, Ankur Dwivedi, Jay Zhou,
	Lee Daly, Sunila Sahu, Nipun Gupta, Liang Ma, Peter Mccarthy,
	Tianfei zhang, Satha Koteswara Rao Kottidi, Xiaoyun Li,
	Bernard Iremonger, Vladimir Medvedkin, David Hunt, Reshma Pattan,
	Byron Marohn, Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli,
	Robert Sanford, Kevin Laatz, Maryam Tahhan, Ori Kam,
	Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
	Pavan Nikhilesh Bhagavatula, Kirill Rybalchenko, Kadam, Pallavi

On Mon, Jan 13, 2020 at 8:28 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Mon, Jan 13, 2020 at 08:16:07PM +0530, Jerin Jacob wrote:
> > On Mon, Jan 13, 2020 at 6:36 PM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > >
> > > > So it is probably good to have a native CTF emitter in DPDK and reuse all the
> > > > open-source trace viewer (babeltrace and TraceCompass) and format (CTF) infrastructure.
> > > > I think it would be the best of both worlds.
> > > >
> > > > Any thoughts on this subject? Based on the community feedback, I can work on the patch for v20.05.
> > >
> > > Forgive my ignorance of LTTng, but is there a concept of
> > > enabling/disabling the trace points? If so, the overhead you refer to
> > > is presumably with the trace enabled?
> >
> > Yes, this is when the trace is enabled. If the trace is disabled, then it
> > will cost only a handful of cycles.
> >
> Two follow-on questions:
> 1. Is the trace enable/disable dynamic at runtime?

Yes. See the requirement section.

> 2. Have you investigated how low the "handful of cycles" actually is?

Yes. It is around 1 to 3 cycles, depending on the arch. It boils down
mostly to a branch hit/miss on a memory location
embedded in a C macro.

> While I think it is important to get the cost of tracing right down to make
> it useful, the cost of tracing when it is not being used is even more
> critical IMHO.

Yes.

>
> /Bruce

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-13 15:13       ` Jerin Jacob
@ 2020-01-13 16:12         ` Bruce Richardson
  2020-01-17  4:41           ` Jerin Jacob
  0 siblings, 1 reply; 24+ messages in thread
From: Bruce Richardson @ 2020-01-13 16:12 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Jerin Jacob Kollanukkaran, dev, Thomas Monjalon, David Marchand,
	Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Qi Zhang,
	Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin, Tiwei Bie,
	Akhil Goyal, Luca Boccassi, Kevin Traynor, maintainers,
	John McNamara, Marko Kovacevic, Ray Kinsella, Aaron Conole,
	Michael Santana, Harry van Haaren, Cristian Dumitrescu,
	Phil Yang, Joyce Kong, Mattias Rönnblom, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Olivier Matz,
	Gage Eads, Adrien Mazarguil, Nicolas Chautru, Declan Doherty,
	Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Hemant Agrawal, Artem V. Andreev,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Sachin Saxena, Stephen Hemminger, Chas Williams,
	John W. Linville, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi

On Mon, Jan 13, 2020 at 08:43:01PM +0530, Jerin Jacob wrote:
> On Mon, Jan 13, 2020 at 8:28 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Mon, Jan 13, 2020 at 08:16:07PM +0530, Jerin Jacob wrote:
> > > On Mon, Jan 13, 2020 at 6:36 PM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > >
> > > > > So it is probably good to have a native CTF emitter in DPDK and reuse all the
> > > > > open-source trace viewer (babeltrace and TraceCompass) and format (CTF) infrastructure.
> > > > > I think it would be the best of both worlds.
> > > > >
> > > > > Any thoughts on this subject? Based on the community feedback, I can work on the patch for v20.05.
> > > >
> > > > Forgive my ignorance of LTTng, but is there a concept of
> > > > enabling/disabling the trace points? If so, the overhead you refer to
> > > > is presumably with the trace enabled?
> > >
> > > Yes, this is when the trace is enabled. If the trace is disabled, then it
> > > will cost only a handful of cycles.
> > >
> > Two follow-on questions:
> > 1. Is the trace enable/disable dynamic at runtime?
> 
> Yes. See the requirement section.
> 
> > 2. Have you investigated how low the "handful of cycles" actually is?
> 
> Yes. It is around 1 to 3 cycles, depending on the arch. It boils down
> mostly to a branch hit/miss on a memory location
> embedded in a C macro.
> 
That seems impressively low, which is great news!

/Bruce

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-13 16:12         ` Bruce Richardson
@ 2020-01-17  4:41           ` Jerin Jacob
  2020-01-17  8:04             ` David Marchand
  0 siblings, 1 reply; 24+ messages in thread
From: Jerin Jacob @ 2020-01-17  4:41 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Jerin Jacob Kollanukkaran, dev, Thomas Monjalon, David Marchand,
	Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Qi Zhang,
	Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin, Tiwei Bie,
	Akhil Goyal, Luca Boccassi, Kevin Traynor, maintainers,
	John McNamara, Marko Kovacevic, Ray Kinsella, Aaron Conole,
	Michael Santana, Harry van Haaren, Cristian Dumitrescu,
	Phil Yang, Joyce Kong, Mattias Rönnblom, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Olivier Matz,
	Gage Eads, Adrien Mazarguil, Nicolas Chautru, Declan Doherty,
	Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Hemant Agrawal, Artem V. Andreev,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Sachin Saxena, Stephen Hemminger, Chas Williams,
	John W. Linville, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi

> > > >
> > > > Yes, this is when the trace is enabled. If the trace is disabled, then it
> > > > will cost only a handful of cycles.
> > > >
> > > Two follow-on questions:
> > > 1. Is the trace enable/disable dynamic at runtime?
> >
> > Yes. See the requirement section.
> >
> > > 2. Have you investigated how low the "handful of cycles" actually is?
> >
> > Yes. It is around 1 to 3 cycles, depending on the arch. It boils down
> > mostly to a branch hit/miss on a memory location
> > embedded in a C macro.
> >
> That seems impressively low, which is great news!

Does anyone have an objection to the following?
1) Use CTF as the trace format, to reuse the open-source tracing tools
and for compatibility with LTTng:
https://diamon.org/ctf/
2) Have a native DPDK CTF trace emitter for better performance for DPDK
fast-path tracing, and for non-Linux support.

I would like to avoid the situation where the code gets completed and
only then do we start the basic discussion
on the design decisions.

If someone needs more time to think it through, or any clarification is
required, then please discuss.


See below the original RFC.

-------------------------- 8<----------------------------------

Hi All,

I would like to add tracing support for DPDK.
I am planning to add this support in the v20.05 release.

This RFC attempts to get feedback from the community on:

a) Tracing use cases.
b) Tracing requirements.
c) Implementation choices.
d) Trace format.

Use-cases
---------
- In most cases, the DPDK provider will not have access to the
DPDK customer's applications.
To debug/analyze slow-path and fast-path DPDK API usage in the field,
we need integrated trace support in DPDK.

- We need a low-overhead, fast-path, multi-core PMD debugging/analysis
infrastructure in DPDK to fix functional and performance issues in PMDs.

- Post-trace analysis tools can provide various status across the system, such
as cpu_idle(), using the timestamps added to the trace.


Requirements:
-------------
- Support for Linux, FreeBSD and Windows OS
- Open trace format
- Multi-platform Open source trace viewer
- Very low-overhead trace API for DPDK fast-path tracing/debugging.
- Dynamic enable/disable of trace events


To enable trace support in DPDK, the following items need to be worked out:

a) Add the DPDK trace points in the DPDK source code.

- This includes updating DPDK functions such as
rte_eth_dev_configure(), rte_eth_dev_start(), and rte_eth_dev_rx_burst()
to emit traces.

b) Choosing a suitable serialization format

- The Common Trace Format (CTF) is an open format and description language
for trace formats.
It enables tool reuse; line-textual (babeltrace) and
graphical (TraceCompass) viewers already exist.

CTF should look familiar to C programmers but adds stronger typing.
See CTF - A Flexible, High-performance Binary Trace Format.

https://diamon.org/ctf/
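
For flavor, a CTF metadata (TSDL) event declaration might look roughly as follows; the event name and fields here are made up for illustration and are not taken from any actual DPDK trace definition:

```
/* CTF metadata (TSDL): one event declaration */
event {
    name = "ethdev:configure";
    id = 1;
    stream_id = 0;
    fields := struct {
        integer { size = 16; signed = false; } port_id;
        integer { size = 16; signed = false; } nb_rx_q;
        integer { size = 16; signed = false; } nb_tx_q;
    };
};
```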

c) Writing the on-target serialization code.

See the section below (LTTng CTF trace emitter vs DPDK-specific CTF
trace emitter).

d) Deciding on and writing the I/O transport mechanics.

For performance reasons, the trace buffer should be backed by hugepages
and written out via file I/O.

e) Writing the PC-side deserializer/parser.

Both babeltrace (CLI tool) and Trace Compass (GUI tool) support CTF.
See:
https://lttng.org/viewers/

f) Writing tools for filtering and presentation.

See item (e)


LTTng CTF trace emitter vs DPDK-specific CTF trace emitter
----------------------------------------------------------

I have written a performance evaluation application to measure the overhead
of the LTTng CTF emitter (the fast-path infrastructure used by the
https://lttng.org/ library to emit traces):

https://github.com/jerinjacobk/lttng-overhead
https://github.com/jerinjacobk/lttng-overhead/blob/master/README

I could improve the performance by 30% by adding a "DPDK"-based
plugin for get_clock() and get_cpu().
Here are the performance numbers after adding the plugin on
x86 and the various arm64 boards that I have access to.

On high-end x86, it comes to around 236 cycles/~91 ns @ 2.6 GHz (see the
last line in the log (ZERO_ARG)).
On arm64, it varies from 312 cycles to 1100 cycles (based on the class of CPU).
In short, based on the IPC capabilities, the cost would be around
100 ns to 400 ns
for a single void trace (a trace without any arguments).


[lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
CPU Timer freq is 2600.000000MHz
NOP: cycles=0.194834 ns=0.074936
GET_CLOCK: cycles=47.854658 ns=18.405638
GET_CPU: cycles=30.995892 ns=11.921497
ZERO_ARG: cycles=236.945113 ns=91.132736


We will have only 16.75 ns to process 59.2 Mpps (40 Gbps), so IMO the LTTng
CTF emitter
may not fit the DPDK fast-path purpose due to the cost associated with
generic LTTng features.

One option could be to have a native CTF emitter in EAL/DPDK that emits the
trace into a hugepage. I think it would cost only a handful of cycles if we
limit the features
to the requirements above.

The upside of using the LTTng CTF emitter:
a) No need to write a new CTF trace emitter (item (c)).

The downsides of the LTTng CTF emitter (item (c)):
a) Performance overhead (see above).
b) Lack of Windows OS support; it appears to have only basic FreeBSD support.
c) A DPDK library dependency on LTTng for tracing.

So it is probably good to have a native CTF emitter in DPDK and reuse all the
open-source trace viewer (babeltrace and TraceCompass) and format (CTF)
infrastructure.
I think it would be the best of both worlds.

Any thoughts on this subject? Based on the community feedback, I can
work on the patch for v20.05.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-17  4:41           ` Jerin Jacob
@ 2020-01-17  8:04             ` David Marchand
  2020-01-17  9:52               ` Jerin Jacob
  0 siblings, 1 reply; 24+ messages in thread
From: David Marchand @ 2020-01-17  8:04 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong,
	Mattias Rönnblom, Jan Viktorin, Gavin Hu, David Christensen,
	Konstantin Ananyev, Anatoly Burakov, Harini Ramakrishnan,
	Omar Cardona, Anand Rawat, Olivier Matz, Gage Eads,
	Adrien Mazarguil, Nicolas Chautru, Declan Doherty, Fiona Trahe,
	Ashish Gupta, Erik Gabriel Carrillo, Abhinandan Gujjar,
	Hemant Agrawal, Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi

On Fri, Jan 17, 2020 at 5:41 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> > > > >
> > > > > Yes, this is when the trace is enabled. If the trace is disabled, then it
> > > > > will cost only a handful of cycles.
> > > > >
> > > > Two follow-on questions:
> > > > 1. Is the trace enable/disable dynamic at runtime?
> > >
> > > Yes. See the requirement section.
> > >
> > > > 2. Have you investigated how low the "handful of cycles" actually is?
> > >
> > > Yes. It is around 1 to 3 cycles, depending on the arch. It boils down
> > > mostly to a branch hit/miss on a memory location
> > > embedded in a C macro.
> > >
> > That seems impressively low, which is great news!
>
> Does anyone have an objection to the following?
> 1) Use CTF as the trace format, to reuse the open-source tracing tools
> and for compatibility with LTTng:
> https://diamon.org/ctf/
> 2) Have a native DPDK CTF trace emitter for better performance for DPDK
> fast-path tracing, and for non-Linux support.
>
> I would like to avoid the situation where the code gets completed and
> only then do we start the basic discussion
> on the design decisions.
>
> If someone needs more time to think it through, or any clarification is
> required, then please discuss.

I did not find the time to look at this.
Some quick questions:
- Does LTTng come with an out-of-tree kmod, making it hard to support in
distributions?
- I have been playing with perf these days to track live processes and
gather information/stats at key points of a DPDK app without adding
anything to the binary. What does LTTng provide that scripting around
perf would not solve?


-- 
David Marchand


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-17  8:04             ` David Marchand
@ 2020-01-17  9:52               ` Jerin Jacob
  2020-01-17 10:30                 ` Mattias Rönnblom
  2020-01-17 10:43                 ` David Marchand
  0 siblings, 2 replies; 24+ messages in thread
From: Jerin Jacob @ 2020-01-17  9:52 UTC (permalink / raw)
  To: David Marchand
  Cc: Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong,
	Mattias Rönnblom, Jan Viktorin, Gavin Hu, David Christensen,
	Konstantin Ananyev, Anatoly Burakov, Harini Ramakrishnan,
	Omar Cardona, Anand Rawat, Olivier Matz, Gage Eads,
	Adrien Mazarguil, Nicolas Chautru, Declan Doherty, Fiona Trahe,
	Ashish Gupta, Erik Gabriel Carrillo, Abhinandan Gujjar,
	Hemant Agrawal, Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi

On Fri, Jan 17, 2020 at 1:35 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> On Fri, Jan 17, 2020 at 5:41 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> >
> > > > > >
> > > > > > Yes, this is when the trace is enabled. If the trace is disabled, then it
> > > > > > will cost only a handful of cycles.
> > > > > >
> > > > > Two follow-on questions:
> > > > > 1. Is the trace enable/disable dynamic at runtime?
> > > >
> > > > Yes. See the requirement section.
> > > >
> > > > > 2. Have you investigated how low the "handful of cycles" actually is?
> > > >
> > > > Yes. it is around 1 to 3 cycles based on the arch. it boils down to
> > > > mostly a branch hit/miss on a memory location
> > > > embedded in a C macro.
> > > >
> > > That seems impressively low, which is great news!
> >
> > Does anyone have an objection to have
> > 1) Use CTF as trace format to reuse the opensource tracing tools and
> > compatibility wth LTTng
> > https://diamon.org/ctf/
> > 2) Have native DPDK CTF trace emitter for better performance for DPDK
> > fast path tracing and Non-Linux support.
> >
> > I would like to avoid the situation where once code gets completed and
> > then starts our basic discussion
> > on the design decisions.
> >
> > If someone needs more time to think through or any clarification is
> > required then please discuss.
>
> I did not find the time to look at this.
> Some quick questions:
> - is LTTng coming with out-of-tree kmod? making it hard to support in
> distributions?

Only LTTng kernel tracing needs the kmod.
For userspace tracing, at minimum the following libraries are required:

a) LTTng-UST
b) LTTng-tools
c) liburcu
d) libpopt-dev

Based on the https://lttng.org/docs/v2.11/#doc-installing-lttng
-------------------------- 8<----------------------------------
Important: As of 22 October 2019, LTTng 2.11 is not available as
distribution packages, except for Arch Linux.
You can build LTTng 2.11 from source to install and use it.
-------------------------- >8----------------------------------

> - I have been playing with perf those days to track live processes and
> gathering informations/stats at key point of a dpdk app without adding
> anything in the binary. What does LTTng provide that scripting around
> perf would not solve?

A profiler and a tracer are two different things: perf is a profiler.

Definitions from https://lttng.org/docs/v2.11/#doc-what-is-tracing
-------------------------- 8<----------------------------------
A profiler is often the tool of choice to identify performance
bottlenecks. Profiling is suitable to identify where performance is
lost in a given software. The profiler outputs a profile, a
statistical summary of observed events, which you may use to discover
which functions took the most time to execute. However, a profiler
won’t report why some identified functions are the bottleneck.
Bottlenecks might only occur when specific conditions are met,
conditions that are sometimes impossible to capture by a statistical
profiler, or impossible to reproduce with an application altered by
the overhead of an event-based profiler. For a thorough investigation
of software performance issues, a history of execution is essential,
with the recorded values of variables and context fields you choose,
and with as little influence as possible on the instrumented software.
This is where tracing comes in handy.

Tracing is a technique used to understand what goes on in a running
software system. The software used for tracing is called a tracer,
which is conceptually similar to a tape recorder. When recording,
specific instrumentation points placed in the software source code
generate events that are saved on a giant tape: a trace file. You can
trace user applications and the operating system at the same time,
opening the possibility of resolving a wide range of problems that
would otherwise be extremely challenging.
-------------------------- >8----------------------------------

Once the tracing infrastructure is in place, we can add tracepoints in
DPDK functions such as rte_eth_dev_configure(), rx_burst, etc., so
that one can trace the flow of the program and debug it. The use-case
details from the RFC:

-------------------------- 8<----------------------------------
Use-cases
---------
- In most cases, the DPDK provider will not have access to the
DPDK customers' applications.
To debug/analyze slow-path and fast-path DPDK API usage in the field,
we need integrated trace support in DPDK.

- We need a low-overhead, fast-path, multi-core PMD debugging/analysis
infrastructure in DPDK to fix functional and performance issues in PMDs.

- Post-trace analysis tools can provide various statistics across the
system, such as cpu_idle(), using the timestamps added in the trace.
-------------------------- >8----------------------------------

Here are more details on viewing traces using Trace Compass (an
open-source CTF trace viewer):

https://www.renesas.com/cn/zh/doc/products/tool/doc/014/r20ut4479ej0000-lttng.pdf


>
>
> --
> David Marchand
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-17  9:52               ` Jerin Jacob
@ 2020-01-17 10:30                 ` Mattias Rönnblom
  2020-01-17 10:54                   ` Jerin Jacob
  2020-01-17 10:43                 ` David Marchand
  1 sibling, 1 reply; 24+ messages in thread
From: Mattias Rönnblom @ 2020-01-17 10:30 UTC (permalink / raw)
  To: Jerin Jacob, David Marchand
  Cc: Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Olivier Matz,
	Gage Eads, Adrien Mazarguil, Nicolas Chautru, Declan Doherty,
	Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Hemant Agrawal, Artem V. Andreev,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Sachin Saxena, Stephen Hemminger, Chas Williams,
	John W. Linville, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi

On 2020-01-17 10:52, Jerin Jacob wrote:
> On Fri, Jan 17, 2020 at 1:35 PM David Marchand
> <david.marchand@redhat.com> wrote:
>> On Fri, Jan 17, 2020 at 5:41 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>>>>>>> Yes this is when trace is enabled. If the trace is disabled then it
>>>>>>> will be the only a handful of cycles.
>>>>>>>
>>>>>> Two follow-on questions:
>>>>>> 1. Is the trace enable/disable dynamic at runtime?
>>>>> Yes. See the requirement section.
>>>>>
>>>>>> 2. Have you investigated how low the "handful of cycles" actually is?
>>>>> Yes. it is around 1 to 3 cycles based on the arch. it boils down to
>>>>> mostly a branch hit/miss on a memory location
>>>>> embedded in a C macro.
>>>>>
>>>> That seems impressively low, which is great news!
>>> Does anyone have an objection to have
>>> 1) Use CTF as trace format to reuse the opensource tracing tools and
>>> compatibility wth LTTng
>>> https://diamon.org/ctf/
>>> 2) Have native DPDK CTF trace emitter for better performance for DPDK
>>> fast path tracing and Non-Linux support.
>>>
>>> I would like to avoid the situation where once code gets completed and
>>> then starts our basic discussion
>>> on the design decisions.
>>>
>>> If someone needs more time to think through or any clarification is
>>> required then please discuss.
>> I did not find the time to look at this.
>> Some quick questions:
>> - is LTTng coming with out-of-tree kmod? making it hard to support in
>> distributions?
> LTTng kernel tracing only needs kmod support.
> For the userspace tracing at minium following libraries are required.
>
> a) LTTng-UST
> b) LTTng-tools
> c) liburcu
> d) libpopt-dev

This "DPDK CTF trace emitter" would make DPDK interoperate with, but
without any build-time dependencies on, LTTng. Correct?

Do you have any idea what performance benefit one would receive from
having something DPDK-native, compared to just depending on LTTng-UST?

Would this work also include moving the existing DPDK trace macros over
to this new CTF trace emitter? If so, would we retain the current
printf()-style pattern, or move to a more LTTng-native approach,
with trace event type declarations and binary-format trace events?

> Based on the https://lttng.org/docs/v2.11/#doc-installing-lttng
> -------------------------- 8<----------------------------------
> Important:As of 22 October 2019, LTTng 2.11 is not available as
> distribution packages, except for Arch Linux.
> You can build LTTng 2.11 from source to install and use it.
> -------------------------- >8----------------------------------
>
>> - I have been playing with perf those days to track live processes and
>> gathering informations/stats at key point of a dpdk app without adding
>> anything in the binary. What does LTTng provide that scripting around
>> perf would not solve?
> Profiler and Tracer are two different things: Perf is a profiler.
>
> Definitions from https://lttng.org/docs/v2.11/#doc-what-is-tracing
> -------------------------- 8<----------------------------------
> A profiler is often the tool of choice to identify performance
> bottlenecks. Profiling is suitable to identify where performance is
> lost in a given software. The profiler outputs a profile, a
> statistical summary of observed events, which you may use to discover
> which functions took the most time to execute. However, a profiler
> won’t report why some identified functions are the bottleneck.
> Bottlenecks might only occur when specific conditions are met,
> conditions that are sometimes impossible to capture by a statistical
> profiler, or impossible to reproduce with an application altered by
> the overhead of an event-based profiler. For a thorough investigation
> of software performance issues, a history of execution is essential,
> with the recorded values of variables and context fields you choose,
> and with as little influence as possible on the instrumented software.
> This is where tracing comes in handy.
>
> Tracing is a technique used to understand what goes on in a running
> software system. The software used for tracing is called a tracer,
> which is conceptually similar to a tape recorder. When recording,
> specific instrumentation points placed in the software source code
> generate events that are saved on a giant tape: a trace file. You can
> trace user applications and the operating system at the same time,
> opening the possibility of resolving a wide range of problems that
> would otherwise be extremely challenging.
> -------------------------- >8----------------------------------
>
> Once tracing infrastructure is in place, we can add tracepoints in the
> dpdk functions such as rte_eth_dev_configure(), rx_burst, etc so
> that one can trace the flow of the program and debug. The use case
> details from the RFC:
>
> -------------------------- 8<----------------------------------
> Use-cases
> ---------
> - Most of the cases, The DPDK provider will not have access to the
> DPDK customer applications.
> To debug/analyze the slow path and fast path DPDK API usage from the field,
> we need to have integrated trace support in DPDK.
>
> - Need a low overhead Fast path multi-core PMD driver debugging/analysis
> infrastructure in DPDK to fix the functional and performance issue(s) of PMD.
>
> - Post trace analysis tools can provide various status across the system such
> as cpu_idle() using the timestamp added in the trace.
> -------------------------- >8----------------------------------
>
> Here is more details on viewing Traces using Trace compass(An
> opensource CTF trace viewer)
>
> https://www.renesas.com/cn/zh/doc/products/tool/doc/014/r20ut4479ej0000-lttng.pdf
>
>
>>
>> --
>> David Marchand
>>


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-17  9:52               ` Jerin Jacob
  2020-01-17 10:30                 ` Mattias Rönnblom
@ 2020-01-17 10:43                 ` David Marchand
  2020-01-17 11:08                   ` Jerin Jacob
  1 sibling, 1 reply; 24+ messages in thread
From: David Marchand @ 2020-01-17 10:43 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong,
	Mattias Rönnblom, Jan Viktorin, Gavin Hu, David Christensen,
	Konstantin Ananyev, Anatoly Burakov, Harini Ramakrishnan,
	Omar Cardona, Anand Rawat, Olivier Matz, Gage Eads,
	Adrien Mazarguil, Nicolas Chautru, Declan Doherty, Fiona Trahe,
	Ashish Gupta, Erik Gabriel Carrillo, Abhinandan Gujjar,
	Hemant Agrawal, Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi

On Fri, Jan 17, 2020 at 10:52 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > > If someone needs more time to think through or any clarification is
> > > required then please discuss.
> >
> > I did not find the time to look at this.
> > Some quick questions:
> > - is LTTng coming with out-of-tree kmod? making it hard to support in
> > distributions?
>
> LTTng kernel tracing only needs kmod support.
> For the userspace tracing at minium following libraries are required.
>
> a) LTTng-UST
> b) LTTng-tools
> c) liburcu
> d) libpopt-dev
>
> Based on the https://lttng.org/docs/v2.11/#doc-installing-lttng
> -------------------------- 8<----------------------------------
> Important:As of 22 October 2019, LTTng 2.11 is not available as
> distribution packages, except for Arch Linux.
> You can build LTTng 2.11 from source to install and use it.
> -------------------------- >8----------------------------------

Would there be requirements on a specific version of LTTng?
I can see RHEL 7 comes with version 2.4.1, RHEL 8 has 2.8.1.


> > - I have been playing with perf those days to track live processes and
> > gathering informations/stats at key point of a dpdk app without adding
> > anything in the binary. What does LTTng provide that scripting around
> > perf would not solve?
>
> Profiler and Tracer are two different things: Perf is a profiler.

Are you sure you can draw such a line about perf?
You can add dynamic tracepoints with context in a live process (perf
probe/perf record); I used this to track where a variable was
getting updated once for a given device in OVS (and to get the number
of occurrences).

I know there are limitations with perf (some static variables are not
caught, and inlines can be tricky to trace).
Maybe LTTng is better at this since you put markers in your code.


One point of interest: I understand that LTTng does not require a
context switch when tracing.
That is an advantage over perf.


--
David Marchand


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-17 10:30                 ` Mattias Rönnblom
@ 2020-01-17 10:54                   ` Jerin Jacob
  2020-02-15 10:21                     ` Jerin Jacob
  0 siblings, 1 reply; 24+ messages in thread
From: Jerin Jacob @ 2020-01-17 10:54 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: David Marchand, Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Olivier Matz,
	Gage Eads, Adrien Mazarguil, Nicolas Chautru, Declan Doherty,
	Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Hemant Agrawal, Artem V. Andreev,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Sachin Saxena, Stephen Hemminger, Chas Williams,
	John W. Linville, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi

On Fri, Jan 17, 2020 at 4:00 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>

> > LTTng kernel tracing only needs kmod support.
> > For the userspace tracing at minium following libraries are required.
> >
> > a) LTTng-UST
> > b) LTTng-tools
> > c) liburcu
> > d) libpopt-dev
>
> This "DPDK CTF trace emitter" would make DPDK interoperate with, but
> without any build-time dependencies to, LTTng. Correct?

Yes. Native CTF trace emitter without LTTng dependency.

>
> Do you have any idea of what the performance benefits one would receive
> from having something DPDK native, compared to just depending on LTTng UST?

I calibrated the LTTng cost and pushed the test code to GitHub[1].

I have just started working on the DPDK native CTF emitter.
I am sure its overhead will be less than LTTng's, as we can leverage
hugepages, exploit DPDK worker-thread usage to avoid atomics, use
per-core variables, avoid a lot of function pointers in the fast path, etc.
I can share the exact overhead after the PoC.
I think, based on the performance, we can decide one way or the other?

[1]
-------------------------- 8<----------------------------------
https://github.com/jerinjacobk/lttng-overhead
https://github.com/jerinjacobk/lttng-overhead/blob/master/README

On high-end x86, it comes to around 236 cycles/~100 ns @ 2.4 GHz (see
the last line in the log (ZERO_ARG)).
On arm64, it varies from 312 to 1100 cycles (based on the class of CPU).
In short, based on the IPC capabilities, the cost would be around
100 ns to 400 ns for a single void trace (a trace without any argument).
-------------------------- >8----------------------------------


>
> Would this work also include moving over the DPDK trace macros to using
> this new CTF trace emitter? If so, we would retain the current
> printf()-style pattern, or move to a more LTTng-native like approach,
> with trace event type declarations and binary-format trace events?

Yes. I am planning to add tracepoints across the DPDK source code.
The fast-path ones should be under conditional compilation.
My view is to keep the printf() pattern as is; we can probably decide
the exact treatment later.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-17 10:43                 ` David Marchand
@ 2020-01-17 11:08                   ` Jerin Jacob
  0 siblings, 0 replies; 24+ messages in thread
From: Jerin Jacob @ 2020-01-17 11:08 UTC (permalink / raw)
  To: David Marchand
  Cc: Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong,
	Mattias Rönnblom, Jan Viktorin, Gavin Hu, David Christensen,
	Konstantin Ananyev, Anatoly Burakov, Harini Ramakrishnan,
	Omar Cardona, Anand Rawat, Olivier Matz, Gage Eads,
	Adrien Mazarguil, Nicolas Chautru, Declan Doherty, Fiona Trahe,
	Ashish Gupta, Erik Gabriel Carrillo, Abhinandan Gujjar,
	Hemant Agrawal, Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi

On Fri, Jan 17, 2020 at 4:14 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> On Fri, Jan 17, 2020 at 10:52 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > > > If someone needs more time to think through or any clarification is
> > > > required then please discuss.
> > >
> > > I did not find the time to look at this.
> > > Some quick questions:
> > > - is LTTng coming with out-of-tree kmod? making it hard to support in
> > > distributions?
> >
> > LTTng kernel tracing only needs kmod support.
> > For the userspace tracing at minium following libraries are required.
> >
> > a) LTTng-UST
> > b) LTTng-tools
> > c) liburcu
> > d) libpopt-dev
> >
> > Based on the https://lttng.org/docs/v2.11/#doc-installing-lttng
> > -------------------------- 8<----------------------------------
> > Important:As of 22 October 2019, LTTng 2.11 is not available as
> > distribution packages, except for Arch Linux.
> > You can build LTTng 2.11 from source to install and use it.
> > -------------------------- >8----------------------------------
>
> Would there be requirements on a specific version of LTTng?

No.
If possible, we would need a single trace format that works across
the multiple OSes that DPDK supports (FreeBSD and Windows as well as
Linux) to support post-analysis and viewer tools. The current concerns
with LTTng are only the lack of all-OS support and its performance.

> I can see RHEL 7 comes with version 2.4.1, RHEL 8 has 2.8.1.
>
>
> > > - I have been playing with perf those days to track live processes and
> > > gathering informations/stats at key point of a dpdk app without adding
> > > anything in the binary. What does LTTng provide that scripting around
> > > perf would not solve?
> >
> > Profiler and Tracer are two different things: Perf is a profiler.
>
> Are you sure you can draw such a line about perf?
> You can add dynamic tracepoints with context in a live process (perf
> probe/perf recordma), I used this to track where a variable was
> getting updated once for a given device in OVS (and getting the number
> of occurrences).
>
> I know there are limitations with perf (some static variables not
> being caught, can be tricky to trace inlines).
> Maybe LTTng is better at this since you put markers in your code.

It is not like perf probe, which updates only the event.
The trace will emit all the information when tracing is enabled on the
tracepoint.

For instance, if we add a _tracepoint_ for rte_eth_dev_configure(),
it will emit:
a) the timestamp when it is called
b) the CPU it is called on
c) the trace ID: rte_eth_dev_configure
d) port_id
e) nb_rx_q
f) nb_tx_q
g) the contents of const struct rte_eth_conf *dev_conf


>
>
> One thing of interest, I understand that LTTng does not require a
> context switch when tracing.
> That is an advantage over perf.
>
>
> --
> David Marchand
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC]  DPDK Trace support
  2020-01-13 11:00 ` Ray Kinsella
  2020-01-13 12:04   ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
@ 2020-01-18 15:14   ` " dave
  2020-01-20 16:51     ` Stephen Hemminger
  1 sibling, 1 reply; 24+ messages in thread
From: dave @ 2020-01-18 15:14 UTC (permalink / raw)
  To: 'Ray Kinsella', 'Jerin Jacob Kollanukkaran',
	'dpdk-dev'

It would be well worth considering one of the VPP techniques to minimize trace impact:

static inline ring_handler_inline (..., int is_traced)
{
  for (i = 0; i < vector_size; i++)
    {
      if (is_traced)
	{
	  do_trace_work;
	}
      normal_packet_processing;
    }
}

ring_handler (...)
{
  if (PREDICT_FALSE(global_trace_flag != 0))
    return ring_handler_inline (..., 1 /* is_traced */);
  else
    return ring_handler_inline (..., 0 /* is_traced */);
}

This reduces the runtime tax to the absolute minimum, but costs space. 

Please consider it.

HTH... Dave

-----Original Message-----
From: Ray Kinsella <mdr@ashroe.eu> 
Sent: Monday, January 13, 2020 6:00 AM
To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dpdk-dev <dev@dpdk.org>; dave@barachs.net
Subject: Re: [RFC] [dpdk-dev] DPDK Trace support

Hi Jerin,

Any idea why LTTng performance is so poor?
I would have naturally gone there to benefit from the existing toolchain.

Have you looked at the FD.io logging/tracing infrastructure for inspiration?
https://wiki.fd.io/view/VPP/elog

Ray K

On 13/01/2020 10:40, Jerin Jacob Kollanukkaran wrote:
> Hi All,
> 
> I would like to add tracing support for DPDK.
> I am planning to add this support in v20.05 release.
> 
> This RFC attempts to get feedback from the community on
> 
> a) Tracing Use cases.
> b) Tracing Requirements.
> c) Implementation choices.
> d) Trace format.
> 
> Use-cases
> ---------
> - In most cases, the DPDK provider will not have access to the DPDK customers' applications.
> To debug/analyze the slow-path and fast-path DPDK API usage from the
> field, we need to have integrated trace support in DPDK.
> 
> - We need a low-overhead, fast-path, multi-core PMD driver
> debugging/analysis infrastructure in DPDK to fix the functional and performance issue(s) of PMDs.
> 
> - Post-trace analysis tools can provide various statistics across the
> system, such as cpu_idle(), using the timestamp added in the trace.
> 
> 
> Requirements:
> -------------
> - Support for Linux, FreeBSD and Windows OS
> - Open trace format
> - Multi-platform Open source trace viewer
> - Absolute low overhead trace API for DPDK fast path tracing/debugging.
> - Dynamic enable/disable of trace events
> 
> 
> To enable trace support in DPDK, the following items need to be worked out:
> 
> a) Add the DPDK trace points in the DPDK source code.
> 
> - This includes updating DPDK functions such as
> rte_eth_dev_configure(), rte_eth_dev_start() and rte_eth_dev_rx_burst() to emit the trace.
> 
> b) Choosing suitable serialization-format
> 
> - Common Trace Format, CTF, is an open format and language to describe trace formats.
> This enables tool reuse, of which line-textual (babeltrace) and 
> graphical (TraceCompass) variants already exist.
> 
> CTF should look familiar to C programmers but adds stronger typing. 
> See CTF - A Flexible, High-performance Binary Trace Format.
> 
> https://diamon.org/ctf/
> 
> c) Writing the on-target serialization code,
> 
> See the section below.(Lttng CTF trace emitter vs DPDK specific CTF 
> trace emitter)
>  
> d) Deciding on and writing the I/O transport mechanics,
> 
> For performance reasons, it should be hugepage-backed, with write-to-file IO.
> 
> e) Writing the PC-side deserializer/parser,
> 
> Both babeltrace (a CLI tool) and Trace Compass (a GUI tool) support CTF.
> See: 
> https://lttng.org/viewers/
> 
> f) Writing tools for filtering and presentation.
> 
> See item (e)
> 
> 
> Lttng CTF trace emitter vs DPDK specific CTF trace emitter
> ----------------------------------------------------------
> 
> I have written a performance evaluation application to measure the 
> overhead of Lttng CTF emitter(The fastpath infrastructure used by 
> https://lttng.org/ library to emit the trace)
> 
> https://github.com/jerinjacobk/lttng-overhead
> https://github.com/jerinjacobk/lttng-overhead/blob/master/README
> 
> I could improve the performance by 30% by adding a "DPDK"-based
> plugin for get_clock() and get_cpu(). Here are the performance
> numbers after adding the plugin, on x86 and the various arm64 boards
> that I have access to.
> 
> On high-end x86, it comes to around 236 cycles/~100ns @ 2.4GHz (see the
> last line in the log (ZERO_ARG)). On arm64, it varies from 312 to 1100 cycles (based on the class of CPU).
> In short, based on the "IPC capabilities", the cost would be around
> 100ns to 400ns for a single void trace (a trace without any argument).
> 
> 
> [lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
> make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
> make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
> EAL: Detected 56 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:01:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> CPU Timer freq is 2600.000000MHz
> NOP: cycles=0.194834 ns=0.074936
> GET_CLOCK: cycles=47.854658 ns=18.405638
> GET_CPU: cycles=30.995892 ns=11.921497
> ZERO_ARG: cycles=236.945113 ns=91.132736
> 
> 
> We will have only 16.75ns to process 59.2 Mpps (40Gbps), so IMO the LTTng
> CTF emitter may not fit the DPDK fast-path purpose due to the cost associated with generic LTTng features.
> 
> One option could be to have a native CTF emitter in EAL/DPDK that emits
> the trace into a hugepage. I think it would be a handful of cycles if we
> limit the features to the requirements above.
> 
> The upside of using the LTTng CTF emitter:
> a) No need to write a new CTF trace emitter (item (c))
> 
> The downsides of the LTTng CTF emitter (item (c)):
> a) Performance issue (see above)
> b) Lack of Windows OS support. It looks like it has basic FreeBSD support.
> c) DPDK library dependency on LTTng for trace.
> 
> So it is probably good to have a native CTF emitter in DPDK and reuse all
> the open-source trace viewer (babeltrace and TraceCompass) and format (CTF) infrastructure.
> I think it would be the best of both worlds.
> 
> Any thoughts on this subject? Based on the community feedback, I can work on the patch for v20.05.
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC]  DPDK Trace support
  2020-01-18 15:14   ` [dpdk-dev] " dave
@ 2020-01-20 16:51     ` Stephen Hemminger
  0 siblings, 0 replies; 24+ messages in thread
From: Stephen Hemminger @ 2020-01-20 16:51 UTC (permalink / raw)
  To: dave
  Cc: 'Ray Kinsella', 'Jerin Jacob Kollanukkaran',
	'dpdk-dev'

On Sat, 18 Jan 2020 10:14:31 -0500
<dave@barachs.net> wrote:

> It would be well worth considering one of the vpp techniques to minimize trace impact:
> 
> static inline ring_handler_inline (..., int is_traced)
> {
>   for (i = 0; i < vector_size; i++)
>     {
>       if (is_traced)
> 	{
> 	  do_trace_work;
> 	}
>       normal_packet_processing;
>     }
> }
> 
> ring_handler (...)
> {
>   if (PREDICT_FALSE(global_trace_flag != 0))
>     return ring_handler_inline (..., 1 /* is_traced */);
>   else
>     return ring_handler_inline (..., 0 /* is_traced */);
> }
> 
> This reduces the runtime tax to the absolute minimum, but costs space. 
> 
> Please consider it.
> 
> HTH... Dave

LTTng already has tracepoint_enabled for this


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC]  DPDK Trace support
  2020-01-13 10:40 [dpdk-dev] [RFC] DPDK Trace support Jerin Jacob Kollanukkaran
  2020-01-13 11:00 ` Ray Kinsella
  2020-01-13 13:05 ` Bruce Richardson
@ 2020-01-27 16:12 ` Aaron Conole
  2020-01-27 17:23   ` Jerin Jacob
  2 siblings, 1 reply; 24+ messages in thread
From: Aaron Conole @ 2020-01-27 16:12 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran
  Cc: dev, Thomas Monjalon, David Marchand, Ferruh Yigit,
	Andrew Rybchenko, Ajit Khaparde, Qi Zhang, Xiaolong Ye,
	Raslan Darawsheh, Maxime Coquelin, Tiwei Bie, Akhil Goyal,
	Luca Boccassi, Kevin Traynor, maintainers, John McNamara,
	Marko Kovacevic, Ray Kinsella, Bruce Richardson, Aaron Conole,
	Michael Santana, Harry van Haaren, Cristian Dumitrescu,
	Phil Yang, Joyce Kong, Mattias Rönnblom, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Ranjit Menon,
	Olivier Matz, Gage Eads, Adrien Mazarguil, Nicolas Chautru,
	Declan Doherty, Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Shreyansh Jain, Hemant Agrawal,
	Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Gaetan Rivet, Jasvinder Singh, Jakub Grajciar,
	Ruifeng Wang, Anoob Joseph, Fan Zhang, Pablo de Lara,
	John Griffin, Deepak Kumar Jain, Michael Shamis,
	Nagadheeraj Rottela, Srikanth Jampala, Ankur Dwivedi, Jay Zhou,
	Lee Daly, Sunila Sahu, Nipun Gupta, Liang Ma, Peter Mccarthy,
	Tianfei zhang, Satha Koteswara Rao Kottidi, Xiaoyun Li,
	Bernard Iremonger, Vladimir Medvedkin, David Hunt, Reshma Pattan,
	Byron Marohn, Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli,
	Robert Sanford, Kevin Laatz, Maryam Tahhan, Ori Kam,
	Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
	Pavan Nikhilesh Bhagavatula, Kirill Rybalchenko, Kadam, Pallavi

Jerin Jacob Kollanukkaran <jerinj@marvell.com> writes:

> Hi All,
>
> I would like to add tracing support for DPDK.
> I am planning to add this support in v20.05 release.
>
> This RFC attempts to get feedback from the community on
>
> a) Tracing Use cases.
> b) Tracing Requirements.
> b) Implementation choices.
> c) Trace format.
>
> Use-cases
> ---------
> - Most of the cases, The DPDK provider will not have access to the DPDK customer applications.
> To debug/analyze the slow path and fast path DPDK API usage from the field,
> we need to have integrated trace support in DPDK.
>
> - Need a low overhead Fast path multi-core PMD driver debugging/analysis
> infrastructure in DPDK to fix the functional and performance issue(s) of PMD.
>
> - Post trace analysis tools can provide various status across the system such
> as cpu_idle() using the timestamp added in the trace.
>
>
> Requirements:
> -------------
> - Support for Linux, FreeBSD and Windows OS
> - Open trace format
> - Multi-platform Open source trace viewer
> - Absolute low overhead trace API for DPDK fast path tracing/debugging.
> - Dynamic enable/disable of trace events
>
>
> To enable trace support in DPDK, following items need to work out: 
>
> a) Add the DPDK trace points in the DPDK source code.
>
> - This includes updating DPDK functions such as,
> rte_eth_dev_configure(), rte_eth_dev_start(), rte_eth_dev_rx_burst() to emit the trace.

I wonder for these if it makes sense to use librte_bpf and a helper
function to actually emit events.  That way rather than static trace
point data, a user can implement some C-code and pull the exact data
that they want.

There could be some downside with the approach (because we might lose
some inlining or variable eliding), but I think it makes the trace point
concept quite a bit more powerful.  Have you given it any thought?

> b) Choosing suitable serialization-format
>
> - Common Trace Format, CTF, is an open format and language to describe trace formats.
> This enables tool reuse, of which line-textual (babeltrace) and 
> graphical (TraceCompass) variants already exist.
>
> CTF should look familiar to C programmers but adds stronger typing. 
> See CTF - A Flexible, High-performance Binary Trace Format.
>
> https://diamon.org/ctf/
>
> c) Writing the on-target serialization code,
>
> See the section below.(Lttng CTF trace emitter vs DPDK specific CTF trace emitter)
>  
> d) Deciding on and writing the I/O transport mechanics,
>
> For performance reasons, it should be backed by a huge-page and write to file IO.
>
> e) Writing the PC-side deserializer/parser,
>
> Both babeltrace (CLI tool) and Trace Compass (GUI tool) support CTF.
> See: 
> https://lttng.org/viewers/
>
> f) Writing tools for filtering and presentation.
>
> See item (e)
>
>
> Lttng CTF trace emitter vs DPDK specific CTF trace emitter
> ----------------------------------------------------------
>
> I have written a performance evaluation application to measure the overhead
> of Lttng CTF emitter(The fastpath infrastructure used by
> https://lttng.org/ library to emit the trace)
>
> https://github.com/jerinjacobk/lttng-overhead
> https://github.com/jerinjacobk/lttng-overhead/blob/master/README
>
> I could improve the performance by 30% by adding the "DPDK"
> based plugin for get_clock() and get_cpu(),
> Here are the performance numbers after adding the plugin on 
> x86 and various arm64 board that I have access to,
>
> On high-end x86, it comes around 236 cycles/~100ns @ 2.4GHz (See the
> last line in the log(ZERO_ARG))
> On arm64, it varies from 312 cycles to 1100 cycles(based on the class of CPU).
> In short, Based on the "IPC capabilities", The cost would be around 100ns to 400ns
> for single void trace(a trace without any argument)
>
>
> [lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
> make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
> make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
> EAL: Detected 56 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device 0000:01:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> CPU Timer freq is 2600.000000MHz
> NOP: cycles=0.194834 ns=0.074936
> GET_CLOCK: cycles=47.854658 ns=18.405638
> GET_CPU: cycles=30.995892 ns=11.921497
> ZERO_ARG: cycles=236.945113 ns=91.132736
>
>
> We will have only 16.75ns to process 59.2 mpps(40Gbps), So IMO, Lttng CTF emitter
> may not fit the DPDK fast path purpose due to the cost associated with generic Lttng features.
>
> One option could be to have, native CTF emitter in EAL/DPDK to emit the
> trace in a hugepage. I think it would be a handful of cycles if we limit the features
> to the requirements above:
>
> The upside of using Lttng CTF emitter:
> a) No need to write a new CTF trace emitter(the item (c))
>
> The downside of Lttng CTF emitter(the item (c))
> a) performance issue(See above)
> b) Lack of Windows OS support. It looks like, it has basic FreeBSD support.
> c) dpdk library dependency to lttng for trace.
>
> So, Probably it good to have native CTF emitter in DPDK and reuse all
> open-source trace viewer(babeltrace and  TraceCompass) and format(CTF) infrastructure.
> I think, it would be best of both world.
>
> Any thoughts on this subject? Based on the community feedback, I can
> work on the patch for v20.05.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-27 16:12 ` Aaron Conole
@ 2020-01-27 17:23   ` Jerin Jacob
  0 siblings, 0 replies; 24+ messages in thread
From: Jerin Jacob @ 2020-01-27 17:23 UTC (permalink / raw)
  To: Aaron Conole
  Cc: Jerin Jacob Kollanukkaran, dev, Thomas Monjalon, David Marchand,
	Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Qi Zhang,
	Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin, Tiwei Bie,
	Akhil Goyal, Luca Boccassi, Kevin Traynor, maintainers,
	John McNamara, Marko Kovacevic, Ray Kinsella, Bruce Richardson,
	Michael Santana, Harry van Haaren, Cristian Dumitrescu,
	Phil Yang, Joyce Kong, Mattias Rönnblom, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Ranjit Menon,
	Olivier Matz, Gage Eads, Adrien Mazarguil, Nicolas Chautru,
	Declan Doherty, Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Shreyansh Jain, Hemant Agrawal,
	Artem V. Andreev, Nithin Kumar Dabilpuram,
	Vamsi Krishna Attunuru, Rosen Xu, Sachin Saxena,
	Stephen Hemminger, Chas Williams, John W. Linville,
	Prasun Kapoor, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
	Evgeny Schemeilin, Igor Chauskin, Ravi Kumar, Igor Russkikh,
	Pavel Belous, Shepard Siegel, Ed Czeck, John Miller,
	Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Gaetan Rivet, Jasvinder Singh, Jakub Grajciar,
	Ruifeng Wang, Anoob Joseph, Fan Zhang, Pablo de Lara,
	John Griffin, Deepak Kumar Jain, Michael Shamis,
	Nagadheeraj Rottela, Srikanth Jampala, Ankur Dwivedi, Jay Zhou,
	Lee Daly, Sunila Sahu, Nipun Gupta, Liang Ma, Peter Mccarthy,
	Tianfei zhang, Satha Koteswara Rao Kottidi, Xiaoyun Li,
	Bernard Iremonger, Vladimir Medvedkin, David Hunt, Reshma Pattan,
	Byron Marohn, Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli,
	Robert Sanford, Kevin Laatz, Maryam Tahhan, Ori Kam,
	Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
	Pavan Nikhilesh Bhagavatula, Kirill Rybalchenko, Kadam, Pallavi

On Mon, Jan 27, 2020 at 9:43 PM Aaron Conole <aconole@redhat.com> wrote:
>
> Jerin Jacob Kollanukkaran <jerinj@marvell.com> writes:
>
> > Hi All,
> >
> > I would like to add tracing support for DPDK.
> > I am planning to add this support in v20.05 release.
> >
> > This RFC attempts to get feedback from the community on
> >
> > a) Tracing Use cases.
> > b) Tracing Requirements.
> > b) Implementation choices.
> > c) Trace format.
> >
> > Use-cases
> > ---------
> > - Most of the cases, The DPDK provider will not have access to the DPDK customer applications.
> > To debug/analyze the slow path and fast path DPDK API usage from the field,
> > we need to have integrated trace support in DPDK.
> >
> > - Need a low overhead Fast path multi-core PMD driver debugging/analysis
> > infrastructure in DPDK to fix the functional and performance issue(s) of PMD.
> >
> > - Post trace analysis tools can provide various status across the system such
> > as cpu_idle() using the timestamp added in the trace.
> >
> >
> > Requirements:
> > -------------
> > - Support for Linux, FreeBSD and Windows OS
> > - Open trace format
> > - Multi-platform Open source trace viewer
> > - Absolute low overhead trace API for DPDK fast path tracing/debugging.
> > - Dynamic enable/disable of trace events
> >
> >
> > To enable trace support in DPDK, following items need to work out:
> >
> > a) Add the DPDK trace points in the DPDK source code.
> >
> > - This includes updating DPDK functions such as,
> > rte_eth_dev_configure(), rte_eth_dev_start(), rte_eth_dev_rx_burst() to emit the trace.
>
> I wonder for these if it makes sense to use librte_bpf and a helper
> function to actually emit events.  That way rather than static trace
> point data, a user can implement some C-code and pull the exact data
> that they want.
>
> There could be some downside with the approach (because we might lose
> some inlining or variable eliding), but I think it makes the trace point
> concept quite a bit more powerful.  Have you given it any thought?

I think the reasoning for that is to have control over whether to
emit the trace or not. Right?
i.e. emit the trace to the buffer only when specific conditions are
met with runtime data.

I think a couple of challenges would be:
1) Performance in fast-path tracing
2) The need to write an eBPF class for all the events that we need to
trace, to have control over the arguments for tracing

I think once we have the base framework in C, which supports
enabling/disabling events at runtime,
we can then provide a hook for an eBPF program to control more of the
runtime behavior,
e.g. emit the rte_eth_dev_configure() trace only when port_id == 2 and
nb_rx_q == 4.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-01-17 10:54                   ` Jerin Jacob
@ 2020-02-15 10:21                     ` Jerin Jacob
  2020-02-17  9:35                       ` Mattias Rönnblom
  0 siblings, 1 reply; 24+ messages in thread
From: Jerin Jacob @ 2020-02-15 10:21 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: David Marchand, Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Olivier Matz,
	Gage Eads, Adrien Mazarguil, Nicolas Chautru, Declan Doherty,
	Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Hemant Agrawal, Artem V. Andreev,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Sachin Saxena, Stephen Hemminger, Chas Williams,
	John W. Linville, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi, dave

On Fri, Jan 17, 2020 at 4:24 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Fri, Jan 17, 2020 at 4:00 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
> >
>
> > > LTTng kernel tracing only needs kmod support.
> > > For the userspace tracing at minium following libraries are required.
> > >
> > > a) LTTng-UST
> > > b) LTTng-tools
> > > c) liburcu
> > > d) libpopt-dev
> >
> > This "DPDK CTF trace emitter" would make DPDK interoperate with, but
> > without any build-time dependencies to, LTTng. Correct?
>
> Yes. Native CTF trace emitter without LTTng dependency.
>
> >
> > Do you have any idea of what the performance benefits one would receive
> > from having something DPDK native, compared to just depending on LTTng UST?
>
> I calibrated LTTng cost and pushed the test code to github[1].
>
> I just started working on the DPDK native CTF emitter.
> I am sure it will be less than LTTng as we can leverage hugepage, exploit
> dpdk worker thread usage to avoid atomics and use per core variables,
> avoid a lot function pointers in fast-path etc
> I can share the exact overhead after the PoC.

I have completed an almost feature-complete PoC. The code is shared on GitHub[1].
The documentation and code cleanup etc. are still pending.

[1]
https://github.com/jerinjacobk/dpdk-trace.git

trace overhead data on x86:[2]
# 236 cycles with LTTng (>100ns)
# 18 cycles (7ns) with the native DPDK CTF emitter.

trace overhead data on arm64:
#  312  cycles to  1100 cycles with LTTng based on the class of arm64 CPU.
#  11 cycles to 13 cycles with Native DPDK CTF emitter based on the
class of arm64 CPU.

18 cycles (on x86) vs 11 cycles (on arm64) is due to the rdtsc() overhead on
x86. It seems rdtsc takes around 15 cycles on x86.

# The native DPDK CTF trace support does not have any dependency on a
third-party library.
The generated output file is compatible with LTTng, as both use the
CTF trace format.

The performance gain comes from:
1) exploiting the DPDK worker thread usage model to avoid atomics and
use per-core variables
2) using hugepages
3) avoiding a lot of function pointers in the fast path
4) avoiding unaligned stores on arm64

Features:
- APIs and features are similar to the rte_log dynamic framework
API (except that rte_log prints on stdout, whereas this dumps to a trace file)
- No specific limit on the events. A string-based event, like rte_log,
for pattern matching
- Dynamic enable/disable support.
- Instrumentation overhead is ~1 cycle, i.e. the cost of adding the code
without using the trace feature.
- Timestamp support for all the events using DPDK rte_rdtsc
- No dependency on another library. Cleanroom native implementation of CTF.

Functional test case:
a) echo "trace_autotest" | sudo ./build/app/test/dpdk-test  -c 0x3
--trace-level=8

The above command emits the following trace events
<code>
        uint8_t i;

        rte_trace_lib_eal_generic_void();
        rte_trace_lib_eal_generic_u64(0x10000000000000);
        rte_trace_lib_eal_generic_u32(0x10000000);
        rte_trace_lib_eal_generic_u16(0xffee);
        rte_trace_lib_eal_generic_u8(0xc);
        rte_trace_lib_eal_generic_i64(-1234);
        rte_trace_lib_eal_generic_i32(-1234567);
        rte_trace_lib_eal_generic_i16(12);
        rte_trace_lib_eal_generic_i8(-3);
        rte_trace_lib_eal_generic_string("my string");
        rte_trace_lib_eal_generic_function(__func__);

        for (i = 0; i < 128; i++)
                rte_trace_lib_eal_generic_u8(i);
</code>

Install the babeltrace package on Linux and point babeltrace at the
generated trace file. By default the trace file is created under
<user>/dpdk-traces/time_stamp/

example:
# babeltrace /root/dpdk-traces/rte-2020-02-15-PM-02-56-51 | more

[13:27:36.138468807] (+?.?????????) lib.eal.generic.void: { cpu_id =
0, name = "dpdk-test" }, { }
[13:27:36.138468851] (+0.000000044) lib.eal.generic.u64: { cpu_id = 0,
name = "dpdk-test" }, { in = 4503599627370496 }
[13:27:36.138468860] (+0.000000009) lib.eal.generic.u32: { cpu_id = 0,
name = "dpdk-test" }, { in = 268435456 }
[13:27:36.138468934] (+0.000000074) lib.eal.generic.u16: { cpu_id = 0,
name = "dpdk-test" }, { in = 65518 }
[13:27:36.138468949] (+0.000000015) lib.eal.generic.u8: { cpu_id = 0,
name = "dpdk-test" }, { in = 12 }
[13:27:36.138468956] (+0.000000007) lib.eal.generic.i64: { cpu_id = 0,
name = "dpdk-test" }, { in = -1234 }
[13:27:36.138468963] (+0.000000007) lib.eal.generic.i32: { cpu_id = 0,
name = "dpdk-test" }, { in = -1234567 }
[13:27:36.138469024] (+0.000000061) lib.eal.generic.i16: { cpu_id = 0,
name = "dpdk-test" }, { in = 12 }
[13:27:36.138469044] (+0.000000020) lib.eal.generic.i8: { cpu_id = 0,
name = "dpdk-test" }, { in = -3 }
[13:27:36.138469051] (+0.000000007) lib.eal.generic.string: { cpu_id =
0, name = "dpdk-test" }, { str = "my string" }
[13:27:36.138469203] (+0.000000152) lib.eal.generic.func: { cpu_id =
0, name = "dpdk-test" }, { func = "test_trace_points" }
[13:27:36.138469239] (+0.000000036) lib.eal.generic.u8: { cpu_id = 0,
name = "dpdk-test" }, { in = 0 }
[13:27:36.138469246] (+0.000000007) lib.eal.generic.u8: { cpu_id = 0,
name = "dpdk-test" }, { in = 1 }
[13:27:36.138469252] (+0.000000006) lib.eal.generic.u8: { cpu_id = 0,
name = "dpdk-test" }, { in = 2 }
[13:27:36.138469262] (+0.000000010) lib.eal.generic.u8: { cpu_id = 0,
name = "dpdk-test" }, { in = 3 }
[13:27:36.138469269] (+0.000000007) lib.eal.generic.u8: { cpu_id = 0,
name = "dpdk-test" }, { in = 4 }
[13:27:36.138469276] (+0.000000007) lib.eal.generic.u8: { cpu_id = 0,
name = "dpdk-test" }, { in = 5 }

# There is a GUI-based trace viewer available on Windows, Linux and Mac,
called Trace Compass (https://www.eclipse.org/tracecompass/).

An example screenshot and histogram of the above DPDK trace using Trace Compass:

https://github.com/jerinjacobk/share/blob/master/dpdk_trace.JPG


[2] Added a performance test case to find the trace overhead.
Command to test trace overhead (it measures the overhead of writing a
zero-argument trace):

echo "trace_perf" | sudo ./build/app/test/dpdk-test  -c 0x3 --trace-level=8
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Trace dir: /root/dpdk-traces/rte-2020-02-15-PM-03-37-33
RTE>>trace_perf
Timer running at 2600.00MHz
ZERO_ARG: cycles=17.901031 ns=6.885012
Test OK

[3] The above test was ported to LTTng for finding the LTTng trace
overhead. It is available at:
https://github.com/jerinjacobk/lttng-overhead
https://github.com/jerinjacobk/lttng-overhead/blob/master/README

File walkthrough:

lib/librte_eal/common/include/rte_trace.h - Public API for Trace
provider and Trace control
lib/librte_eal/common/eal_common_trace.c - main trace implementation
lib/librte_eal/common/eal_common_trace_ctf.c - CTF metadata spec implementation
lib/librte_eal/common/eal_common_trace_utils.c - command line utils
and filesystem operations.
lib/librte_eal/common/eal_common_trace_points.c -  trace points for EAL library
lib/librte_eal/common/include/rte_trace_eal.h - EAL tracepoint public API.
lib/librte_eal/common/eal_trace.h - Private trace header file.


> I think, based on the performance we can decide one or another?

The above performance data shows a significant improvement over LTTng.

Let me know if anyone has any comments and/or suggestions.
If there are no comments, I will submit v1, after code cleanup,
before the 20.05 proposal deadline.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-02-15 10:21                     ` Jerin Jacob
@ 2020-02-17  9:35                       ` Mattias Rönnblom
  2020-02-17 10:23                         ` Jerin Jacob
  0 siblings, 1 reply; 24+ messages in thread
From: Mattias Rönnblom @ 2020-02-17  9:35 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: David Marchand, Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Olivier Matz,
	Gage Eads, Adrien Mazarguil, Nicolas Chautru, Declan Doherty,
	Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Hemant Agrawal, Artem V. Andreev,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Sachin Saxena, Stephen Hemminger, Chas Williams,
	John W. Linville, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi, dave

On 2020-02-15 11:21, Jerin Jacob wrote:
> On Fri, Jan 17, 2020 at 4:24 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>> On Fri, Jan 17, 2020 at 4:00 PM Mattias Rönnblom
>> <mattias.ronnblom@ericsson.com> wrote:
>>>> LTTng kernel tracing only needs kmod support.
>>>> For the userspace tracing at minium following libraries are required.
>>>>
>>>> a) LTTng-UST
>>>> b) LTTng-tools
>>>> c) liburcu
>>>> d) libpopt-dev
>>> This "DPDK CTF trace emitter" would make DPDK interoperate with, but
>>> without any build-time dependencies to, LTTng. Correct?
>> Yes. Native CTF trace emitter without LTTng dependency.
>>
>>> Do you have any idea of what the performance benefits one would receive
>>> from having something DPDK native, compared to just depending on LTTng UST?
>> I calibrated LTTng cost and pushed the test code to github[1].
>>
>> I just started working on the DPDK native CTF emitter.
>> I am sure it will be less than LTTng as we can leverage hugepage, exploit
>> dpdk worker thread usage to avoid atomics and use per core variables,
>> avoid a lot function pointers in fast-path etc
>> I can share the exact overhead after the PoC.
> I did the almost feature-complete PoC. The code shared in Github[1]
> The documentation and code cleanup etc is still pending.
>
> [1]
> https://github.com/jerinjacobk/dpdk-trace.git
>
> trace overhead data on x86:[2]
> # 236 cyles with LTTng(>100ns)
> # 18 cycles(7ns) with Native DPDK CTF emitter.
>
> trace overhead data on arm64:
> #  312  cycles to  1100 cycles with LTTng based on the class of arm64 CPU.
> #  11 cycles to 13 cycles with Native DPDK CTF emitter based on the
> class of arm64 CPU.
>
> 18 cycles(on x86) vs 11 cycles(on arm64) is due to rdtsc() overhead in
> x86. It seems  rdtsc takes around 15cycles in x86.
>
> # The Native DPDK CTF trace support does not have any dependency on
> third-party library.
> The generated output file is compatible with LTTng as both are using
> CTF trace format.
>
> The performance gain comes from:
> 1) exploit dpdk worker thread usage model to avoid atomics and use per
> core variables
> 2) use hugepage,
> 3) avoid a lot function pointers in fast-path etc
> 4) avoid unaligned store for arm64 etc
>
> Features:
> - APIs and features are similar to the rte_log dynamic framework
> API (except that rte_log prints to stdout while trace dumps to a trace file)
> - No specific limit on the number of events. String-based events, like
> rte_log, for pattern matching
> - Dynamic enable/disable support.
> - Instrumentation overhead is ~1 cycle, i.e. the cost of adding the code
> without using the trace feature.
> - Timestamp support for all the events using DPDK rte_rdtsc
> - No dependency on another library. Cleanroom native implementation of CTF.
>
> Functional test case:
> a) echo "trace_autotest" | sudo ./build/app/test/dpdk-test  -c 0x3
> --trace-level=8
>
> The above command emits the trace events produced by the following code:
> <code>
>          uint8_t i;
>
>          rte_trace_lib_eal_generic_void();
>          rte_trace_lib_eal_generic_u64(0x10000000000000);
>          rte_trace_lib_eal_generic_u32(0x10000000);
>          rte_trace_lib_eal_generic_u16(0xffee);
>          rte_trace_lib_eal_generic_u8(0xc);
>          rte_trace_lib_eal_generic_i64(-1234);
>          rte_trace_lib_eal_generic_i32(-1234567);
>          rte_trace_lib_eal_generic_i16(12);
>          rte_trace_lib_eal_generic_i8(-3);
>          rte_trace_lib_eal_generic_string("my string");
>          rte_trace_lib_eal_generic_function(__func__);
>
>          for (i = 0; i < 128; i++)
>                  rte_trace_lib_eal_generic_u8(i);
> </code>

Is it possible to specify custom types for the events? The equivalent of 
the TRACEPOINT_EVENT() macro in LTTng.

> Install the babeltrace package on Linux and point babeltrace at the
> generated trace file. By default the trace file is created under
> <user>/dpdk-traces/time_stamp/
>
> example:
> # babeltrace /root/dpdk-traces/rte-2020-02-15-PM-02-56-51 | more
>
> [13:27:36.138468807] (+?.?????????) lib.eal.generic.void: { cpu_id =
> 0, name = "dpdk-test" }, { }
> [13:27:36.138468851] (+0.000000044) lib.eal.generic.u64: { cpu_id = 0,
> name = "dpdk-test" }, { in = 4503599627370496 }
> [13:27:36.138468860] (+0.000000009) lib.eal.generic.u32: { cpu_id = 0,
> name = "dpdk-test" }, { in = 268435456 }
> [13:27:36.138468934] (+0.000000074) lib.eal.generic.u16: { cpu_id = 0,
> name = "dpdk-test" }, { in = 65518 }
> [13:27:36.138468949] (+0.000000015) lib.eal.generic.u8: { cpu_id = 0,
> name = "dpdk-test" }, { in = 12 }
> [13:27:36.138468956] (+0.000000007) lib.eal.generic.i64: { cpu_id = 0,
> name = "dpdk-test" }, { in = -1234 }
> [13:27:36.138468963] (+0.000000007) lib.eal.generic.i32: { cpu_id = 0,
> name = "dpdk-test" }, { in = -1234567 }
> [13:27:36.138469024] (+0.000000061) lib.eal.generic.i16: { cpu_id = 0,
> name = "dpdk-test" }, { in = 12 }
> [13:27:36.138469044] (+0.000000020) lib.eal.generic.i8: { cpu_id = 0,
> name = "dpdk-test" }, { in = -3 }
> [13:27:36.138469051] (+0.000000007) lib.eal.generic.string: { cpu_id =
> 0, name = "dpdk-test" }, { str = "my string" }
> [13:27:36.138469203] (+0.000000152) lib.eal.generic.func: { cpu_id =
> 0, name = "dpdk-test" }, { func = "test_trace_points" }
> [13:27:36.138469239] (+0.000000036) lib.eal.generic.u8: { cpu_id = 0,
> name = "dpdk-test" }, { in = 0 }
> [13:27:36.138469246] (+0.000000007) lib.eal.generic.u8: { cpu_id = 0,
> name = "dpdk-test" }, { in = 1 }
> [13:27:36.138469252] (+0.000000006) lib.eal.generic.u8: { cpu_id = 0,
> name = "dpdk-test" }, { in = 2 }
> [13:27:36.138469262] (+0.000000010) lib.eal.generic.u8: { cpu_id = 0,
> name = "dpdk-test" }, { in = 3 }
> [13:27:36.138469269] (+0.000000007) lib.eal.generic.u8: { cpu_id = 0,
> name = "dpdk-test" }, { in = 4 }
> [13:27:36.138469276] (+0.000000007) lib.eal.generic.u8: { cpu_id = 0,
> name = "dpdk-test" }, { in = 5 }
>
> # There is a GUI-based trace viewer, Trace Compass, available for
> Windows, Linux and Mac (https://www.eclipse.org/tracecompass/).
>
> An example screenshot and histogram of the above DPDK trace in Trace Compass:
>
> https://github.com/jerinjacobk/share/blob/master/dpdk_trace.JPG
>
>
> [2] Added a performance test case to measure the trace overhead.
> The command below measures the overhead of writing a zero-argument
> trace.
>
> echo "trace_perf" | sudo ./build/app/test/dpdk-test  -c 0x3 --trace-level=8
> EAL: Detected 56 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Trace dir: /root/dpdk-traces/rte-2020-02-15-PM-03-37-33
> RTE>>trace_perf
> Timer running at 2600.00MHz
> ZERO_ARG: cycles=17.901031 ns=6.885012
> Test OK
>
> [3] The above test is ported to LTTng for finding the LTTng trace
> overhead. It is available at:
> https://github.com/jerinjacobk/lttng-overhead
> https://github.com/jerinjacobk/lttng-overhead/blob/master/README
>
> File walkthrough:
>
> lib/librte_eal/common/include/rte_trace.h - Public API for Trace
> provider and Trace control
> lib/librte_eal/common/eal_common_trace.c - main trace implementation
> lib/librte_eal/common/eal_common_trace_ctf.c - CTF metadata spec implementation
> lib/librte_eal/common/eal_common_trace_utils.c - command line utils
> and filesystem operations.
> lib/librte_eal/common/eal_common_trace_points.c -  trace points for EAL library
> lib/librte_eal/common/include/rte_trace_eal.h - EAL tracepoint public API.
> lib/librte_eal/common/eal_trace.h - Private trace header file.
>
>
>> I think, based on the performance we can decide one or another?
> The above performance data shows a large improvement over LTTng.
>
> Let me know if anyone has any comments or suggestions on this.
> If there are no comments, I will submit v1, after code cleanup, before
> the 20.05 proposal deadline.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC] DPDK Trace support
  2020-02-17  9:35                       ` Mattias Rönnblom
@ 2020-02-17 10:23                         ` Jerin Jacob
  0 siblings, 0 replies; 24+ messages in thread
From: Jerin Jacob @ 2020-02-17 10:23 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: David Marchand, Bruce Richardson, Jerin Jacob Kollanukkaran, dev,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde,
	Qi Zhang, Xiaolong Ye, Raslan Darawsheh, Maxime Coquelin,
	Tiwei Bie, Akhil Goyal, Luca Boccassi, Kevin Traynor,
	maintainers, John McNamara, Marko Kovacevic, Ray Kinsella,
	Aaron Conole, Michael Santana, Harry van Haaren,
	Cristian Dumitrescu, Phil Yang, Joyce Kong, Jan Viktorin,
	Gavin Hu, David Christensen, Konstantin Ananyev, Anatoly Burakov,
	Harini Ramakrishnan, Omar Cardona, Anand Rawat, Olivier Matz,
	Gage Eads, Adrien Mazarguil, Nicolas Chautru, Declan Doherty,
	Fiona Trahe, Ashish Gupta, Erik Gabriel Carrillo,
	Abhinandan Gujjar, Hemant Agrawal, Artem V. Andreev,
	Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Rosen Xu,
	Sachin Saxena, Stephen Hemminger, Chas Williams,
	John W. Linville, Prasun Kapoor, Marcin Wojtas, Michal Krawczyk,
	Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Ravi Kumar,
	Igor Russkikh, Pavel Belous, Shepard Siegel, Ed Czeck,
	John Miller, Somnath Kotur, Maciej Czekaj, Shijith Thotton,
	Srisivasubramanian Srinivasan, Rahul Lakkireddy, John Daley,
	Hyong Youb Kim, Wei Hu (Xavier, Min Hu (Connor, Yisen Zhuang,
	Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Xiao Wang,
	Jingjing Wu, Wenzhuo Lu, Qiming Yang, Tomasz Duszynski,
	Liron Himi, Zyta Szpak, Kiran Kumar Kokkilagadda, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, K. Y. Srinivasan,
	Haiyang Zhang, Jan Remes, Heinrich Kuhn, Jan Gutter,
	Gagandeep Singh, Rasesh Mody, Shahed Shaikh, Yong Wang,
	Zhihong Wang, Steven Webster, Matt Peters, Keith Wiles,
	Tetsuya Mukawa, Jasvinder Singh, Jakub Grajciar, Ruifeng Wang,
	Anoob Joseph, Fan Zhang, Pablo de Lara, John Griffin,
	Deepak Kumar Jain, Michael Shamis, Nagadheeraj Rottela,
	Srikanth Jampala, Ankur Dwivedi, Jay Zhou, Lee Daly, Sunila Sahu,
	Nipun Gupta, Liang Ma, Peter Mccarthy, Tianfei zhang,
	Satha Koteswara Rao Kottidi, Xiaoyun Li, Bernard Iremonger,
	Vladimir Medvedkin, David Hunt, Reshma Pattan, Byron Marohn,
	Sameh Gobriel, Yipeng Wang, Honnappa Nagarahalli, Robert Sanford,
	Kevin Laatz, Maryam Tahhan, Ori Kam, Radu Nicolau,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh Bhagavatula,
	Kirill Rybalchenko, Kadam, Pallavi, dave

On Mon, Feb 17, 2020 at 3:05 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> On 2020-02-15 11:21, Jerin Jacob wrote:
> > On Fri, Jan 17, 2020 at 4:24 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> >> On Fri, Jan 17, 2020 at 4:00 PM Mattias Rönnblom
> >> <mattias.ronnblom@ericsson.com> wrote:
> >>>> LTTng kernel tracing only needs kmod support.
> >>>> For the userspace tracing at minium following libraries are required.
> >>>>
> >>>> a) LTTng-UST
> >>>> b) LTTng-tools
> >>>> c) liburcu
> >>>> d) libpopt-dev
> >>> This "DPDK CTF trace emitter" would make DPDK interoperate with, but
> >>> without any build-time dependencies to, LTTng. Correct?
> >> Yes. Native CTF trace emitter without LTTng dependency.
> >>
> >>> Do you have any idea of what the performance benefits one would receive
> >>> from having something DPDK native, compared to just depending on LTTng UST?
> >> I calibrated LTTng cost and pushed the test code to github[1].
> >>
> >> I just started working on the DPDK native CTF emitter.
> >> I am sure it will be less than LTTng as we can leverage hugepage, exploit
> >> dpdk worker thread usage to avoid atomics and use per core variables,
> >> avoid a lot function pointers in fast-path etc
> >> I can share the exact overhead after the PoC.
> > I did an almost feature-complete PoC. The code is shared on GitHub[1].
> > The documentation and code cleanup etc. are still pending.
> >
> > [1]
> > https://github.com/jerinjacobk/dpdk-trace.git
> >
> > trace overhead data on x86:[2]
> > # 236 cycles with LTTng (>100ns)
> > # 18 cycles(7ns) with Native DPDK CTF emitter.
> >
> > trace overhead data on arm64:
> > #  312  cycles to  1100 cycles with LTTng based on the class of arm64 CPU.
> > #  11 cycles to 13 cycles with Native DPDK CTF emitter based on the
> > class of arm64 CPU.
> >
> > The 18 cycles (x86) vs 11 cycles (arm64) difference is due to rdtsc()
> > overhead on x86. It seems rdtsc alone takes around 15 cycles on x86.
> >
> > # The Native DPDK CTF trace support does not have any dependency on
> > third-party library.
> > The generated output file is compatible with LTTng as both are using
> > CTF trace format.
> >
> > The performance gain comes from:
> > 1) exploiting the dpdk worker thread usage model to avoid atomics and
> > using per-core variables
> > 2) using hugepages
> > 3) avoiding function pointers in the fast path
> > 4) avoiding unaligned stores on arm64
> >
> > Features:
> > - APIs and features are similar to the rte_log dynamic framework
> > API (except that rte_log prints to stdout while trace dumps to a trace file)
> > - No specific limit on the number of events. String-based events, like
> > rte_log, for pattern matching
> > - Dynamic enable/disable support.
> > - Instrumentation overhead is ~1 cycle, i.e. the cost of adding the code
> > without using the trace feature.
> > - Timestamp support for all the events using DPDK rte_rdtsc
> > - No dependency on another library. Cleanroom native implementation of CTF.
> >
> > Functional test case:
> > a) echo "trace_autotest" | sudo ./build/app/test/dpdk-test  -c 0x3
> > --trace-level=8
> >
> > The above command emits the following trace events
> > <code>
> >          uint8_t i;
> >
> >          rte_trace_lib_eal_generic_void();
> >          rte_trace_lib_eal_generic_u64(0x10000000000000);
> >          rte_trace_lib_eal_generic_u32(0x10000000);
> >          rte_trace_lib_eal_generic_u16(0xffee);
> >          rte_trace_lib_eal_generic_u8(0xc);
> >          rte_trace_lib_eal_generic_i64(-1234);
> >          rte_trace_lib_eal_generic_i32(-1234567);
> >          rte_trace_lib_eal_generic_i16(12);
> >          rte_trace_lib_eal_generic_i8(-3);
> >          rte_trace_lib_eal_generic_string("my string");
> >          rte_trace_lib_eal_generic_function(__func__);
> >
> >          for (i = 0; i < 128; i++)
> >                  rte_trace_lib_eal_generic_u8(i);
> > </code>
>
> Is it possible to specify custom types for the events? The equivalent of
> the TRACEPOINT_EVENT() macro in LTTng.

Yes. It is possible to specify a custom event using an array of
rte_trace_emit_datatype() or rte_trace_emit_string() basic blocks.
For example, the ethdev configure event would look like:

RTE_TRACE_POINT_DECLARE(__rte_trace_lib_ethdev_configure);

static __rte_always_inline void
rte_trace_lib_ethdev_configure(uint16_t port_id, uint16_t nb_rx_q,
                               uint16_t nb_tx_q,
                               const struct rte_eth_conf *dev_conf)
{
        rte_trace_emit_begin(&__rte_trace_lib_ethdev_configure);
        rte_trace_emit_datatype(mem, port_id);
        rte_trace_emit_datatype(mem, nb_rx_q);
        rte_trace_emit_datatype(mem, nb_tx_q);
        rte_trace_emit_datatype(mem, dev_conf->link_speeds);
        ..
        ..
}

I tried to avoid the use of complex macros, as the DPDK community
prefers non-macro solutions.
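The sketch below is a minimal, self-contained illustration of that idea. To be clear, the helper name trace_emit(), the TLS buffer, and the layout are hypothetical stand-ins for the PoC code, which would use hugepage-backed per-lcore memory rather than plain TLS:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical per-thread trace buffer; the PoC would place this in a
 * hugepage-backed per-lcore area instead of plain TLS. */
#define TRACE_BUF_SZ 4096
static __thread uint8_t trace_buf[TRACE_BUF_SZ];
static __thread size_t trace_off;

/* Emit one field into the per-thread buffer: no atomics, no function
 * pointers in the fast path, just a bounds check and a memcpy. */
static inline void
trace_emit(const void *field, size_t len)
{
	if (trace_off + len <= TRACE_BUF_SZ) {
		memcpy(&trace_buf[trace_off], field, len);
		trace_off += len;
	}
}

/* A tracepoint is then a plain inline function built from such calls. */
static inline void
trace_ethdev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q)
{
	trace_emit(&port_id, sizeof(port_id));
	trace_emit(&nb_rx_q, sizeof(nb_rx_q));
	trace_emit(&nb_tx_q, sizeof(nb_tx_q));
}
```

Since every building block is an inline function, no complex macro layer is needed at the call site.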


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC]  DPDK Trace support
  2020-01-20  4:48 Jerin Jacob Kollanukkaran
@ 2020-01-20 12:08 ` Ray Kinsella
  0 siblings, 0 replies; 24+ messages in thread
From: Ray Kinsella @ 2020-01-20 12:08 UTC (permalink / raw)
  To: Jerin Jacob Kollanukkaran, dave, 'dpdk-dev'

+1 - thanks Dave

On 20/01/2020 04:48, Jerin Jacob Kollanukkaran wrote:
>> -----Original Message-----
>> From: dave@barachs.net <dave@barachs.net>
>> Sent: Saturday, January 18, 2020 8:45 PM
>> To: 'Ray Kinsella' <mdr@ashroe.eu>; Jerin Jacob Kollanukkaran
>> <jerinj@marvell.com>; 'dpdk-dev' <dev@dpdk.org>
>> Subject: [EXT] RE: [RFC] [dpdk-dev] DPDK Trace support
>>
>> It would be well worth considering one of the vpp techniques to minimize trace
>> impact:
>>
>> static inline ring_handler_inline (..., int is_traced) {
>>   for (i = 0; i < vector_size; i++)
>>     {
>>       if (is_traced)
>> 	{
>> 	  do_trace_work;
>> 	}
>>       normal_packet_processing;
>>     }
>> }
>>
>> ring_handler (...)
>> {
>>   if (PREDICT_FALSE(global_trace_flag != 0))
>>     return ring_handler_inline (..., 1 /* is_traced */);
>>   else
>>     return ring_handler_inline (..., 0 /* is_traced */); }
>>
>> This reduces the runtime tax to the absolute minimum, but costs space.
>>
>> Please consider it.
> 
> Thanks Dave for your thoughts.
> 
>>
>> HTH... Dave
>>
>> -----Original Message-----
>> From: Ray Kinsella <mdr@ashroe.eu>
>> Sent: Monday, January 13, 2020 6:00 AM
>> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dpdk-dev
>> <dev@dpdk.org>; dave@barachs.net
>> Subject: Re: [RFC] [dpdk-dev] DPDK Trace support
>>
>> Hi Jerin,
>>
>> Any idea why lttng performance is so poor?
>> I would have naturally gone there to benefit from the existing toolchain.
>>
>> Have you looked at the FD.io logging/tracing infrastructure for inspiration?
>> https://wiki.fd.io/view/VPP/elog
>>
>> Ray K
>>
>> On 13/01/2020 10:40, Jerin Jacob Kollanukkaran wrote:
>>> Hi All,
>>>
>>> I would like to add tracing support for DPDK.
>>> I am planning to add this support in v20.05 release.
>>>
>>> This RFC attempts to get feedback from the community on
>>>
>>> a) Tracing use cases.
>>> b) Tracing requirements.
>>> c) Implementation choices.
>>> d) Trace format.
>>>
>>> Use-cases
>>> ---------
>>> - In most cases, the DPDK provider will not have access to the DPDK
>> customer applications.
>>> To debug/analyze the slow path and fast path DPDK API usage from the
>>> field, we need to have integrated trace support in DPDK.
>>>
>>> - Need a low overhead Fast path multi-core PMD driver
>>> debugging/analysis infrastructure in DPDK to fix the functional and
>> performance issue(s) of PMD.
>>>
>>> - Post trace analysis tools can provide various status across the
>>> system such as cpu_idle() using the timestamp added in the trace.
>>>
>>>
>>> Requirements:
>>> -------------
>>> - Support for Linux, FreeBSD and Windows OS
>>> - Open trace format
>>> - Multi-platform Open source trace viewer
>>> - Absolute low overhead trace API for DPDK fast path tracing/debugging.
>>> - Dynamic enable/disable of trace events
>>>
>>>
>>> To enable trace support in DPDK, the following items need to be worked out:
>>>
>>> a) Add the DPDK trace points in the DPDK source code.
>>>
>>> - This includes updating DPDK functions such as,
>>> rte_eth_dev_configure(), rte_eth_dev_start(), rte_eth_dev_rx_burst() to emit
>> the trace.
>>>
>>> b) Choosing suitable serialization-format
>>>
>>> - Common Trace Format, CTF, is an open format and language to describe
>> trace formats.
>>> This enables tool reuse, of which line-textual (babeltrace) and
>>> graphical (TraceCompass) variants already exist.
>>>
>>> CTF should look familiar to C programmers but adds stronger typing.
>>> See CTF - A Flexible, High-performance Binary Trace Format.
>>>
>>> https://diamon.org/ctf/
>>>
>>> c) Writing the on-target serialization code,
>>>
>>> See the section below.(Lttng CTF trace emitter vs DPDK specific CTF
>>> trace emitter)
>>>
>>> d) Deciding on and writing the I/O transport mechanics,
>>>
>>> For performance reasons, it should be backed by a huge-page and write to file
>> IO.
>>>
>>> e) Writing the PC-side deserializer/parser,
>>>
>>> Both babeltrace (CLI tool) and Trace Compass (GUI tool) support CTF.
>>> See:
>>> https://lttng.org/viewers/
>>>
>>> f) Writing tools for filtering and presentation.
>>>
>>> See item (e)
>>>
>>>
>>> Lttng CTF trace emitter vs DPDK specific CTF trace emitter
>>> ----------------------------------------------------------
>>>
>>> I have written a performance evaluation application to measure the
>>> overhead of Lttng CTF emitter(The fastpath infrastructure used by
>>> https://lttng.org/ library to emit the trace)
>>>
>>> https://github.com/jerinjacobk/lttng-overhead
>>> https://github.com/jerinjacobk/lttng-overhead/blob/master/README
>>>
>>> I could improve the performance by 30% by adding a "DPDK"-based
>>> plugin for get_clock() and get_cpu(). Here are the performance
>>> numbers after adding the plugin on x86 and on the various arm64
>>> boards that I have access to:
>>>
>>> On high-end x86, it comes to around 236 cycles/~100ns @ 2.4GHz (see the
>>> last line, ZERO_ARG, in the log). On arm64, it varies from 312 to 1100
>> cycles (based on the class of CPU).
>>> In short, based on the CPU's IPC capabilities, the cost would be around
>>> 100ns to 400ns for a single void trace (a trace without any argument).
>>>
>>>
>>> [lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
>>> make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
>>> make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
>>> EAL: Detected 56 lcore(s)
>>> EAL: Detected 2 NUMA nodes
>>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>>> EAL: Selected IOVA mode 'PA'
>>> EAL: Probing VFIO support...
>>> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>>> EAL:   probe driver: 8086:1521 net_e1000_igb
>>> EAL: PCI device 0000:01:00.1 on NUMA socket 0
>>> EAL:   probe driver: 8086:1521 net_e1000_igb
>>> CPU Timer freq is 2600.000000MHz
>>> NOP: cycles=0.194834 ns=0.074936
>>> GET_CLOCK: cycles=47.854658 ns=18.405638
>>> GET_CPU: cycles=30.995892 ns=11.921497
>>> ZERO_ARG: cycles=236.945113 ns=91.132736
>>>
>>>
>>> We will have only 16.75ns to process 59.2 Mpps (40Gbps), so IMO the
>>> LTTng CTF emitter may not fit the DPDK fast-path purpose due to the cost
>> associated with generic LTTng features.
>>>
>>> One option could be to have a native CTF emitter in EAL/DPDK that emits
>>> the trace into a hugepage. I think it would take a handful of cycles if we
>>> limit the features to the requirements above:
>>>
>>> The upside of using Lttng CTF emitter:
>>> a) No need to write a new CTF trace emitter(the item (c))
>>>
>>> The downside of Lttng CTF emitter(the item (c))
>>> a) performance issue(See above)
>>> b) Lack of Windows OS support. It looks like, it has basic FreeBSD support.
>>> c) dpdk library dependency to lttng for trace.
>>>
>>> So, it is probably good to have a native CTF emitter in DPDK and reuse the
>>> existing open-source trace viewers (babeltrace and Trace Compass) and format (CTF)
>> infrastructure.
>>> I think it would be the best of both worlds.
>>>
>>> Any thoughts on this subject? Based on the community feedback, I can work
>> on the patch for v20.05.
>>>
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [RFC]  DPDK Trace support
@ 2020-01-20  4:48 Jerin Jacob Kollanukkaran
  2020-01-20 12:08 ` Ray Kinsella
  0 siblings, 1 reply; 24+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2020-01-20  4:48 UTC (permalink / raw)
  To: dave, 'Ray Kinsella', 'dpdk-dev'

> -----Original Message-----
> From: dave@barachs.net <dave@barachs.net>
> Sent: Saturday, January 18, 2020 8:45 PM
> To: 'Ray Kinsella' <mdr@ashroe.eu>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; 'dpdk-dev' <dev@dpdk.org>
> Subject: [EXT] RE: [RFC] [dpdk-dev] DPDK Trace support
> 
> It would be well worth considering one of the vpp techniques to minimize trace
> impact:
> 
> static inline ring_handler_inline (..., int is_traced) {
>   for (i = 0; i < vector_size; i++)
>     {
>       if (is_traced)
> 	{
> 	  do_trace_work;
> 	}
>       normal_packet_processing;
>     }
> }
> 
> ring_handler (...)
> {
>   if (PREDICT_FALSE(global_trace_flag != 0))
>     return ring_handler_inline (..., 1 /* is_traced */);
>   else
>     return ring_handler_inline (..., 0 /* is_traced */); }
> 
> This reduces the runtime tax to the absolute minimum, but costs space.
> 
> Please consider it.

Thanks Dave for your thoughts.
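For concreteness, the specialization technique quoted above can be sketched as self-contained C. The handler body, the PREDICT_FALSE definition, and all names below are hypothetical stand-ins for illustration, not actual VPP code:

```c
#include <stdint.h>

#define PREDICT_FALSE(x) __builtin_expect(!!(x), 0)

static int global_trace_flag;   /* toggled by the trace control plane */
static int trace_count;         /* stands in for real trace emission */

/* 'is_traced' is a compile-time constant at each call site below, so the
 * compiler emits two specialized copies of the loop and the untraced copy
 * carries no per-packet trace branch. */
static inline int
ring_handler_inline(const int *pkts, int n, int is_traced)
{
	int i, sum = 0;

	for (i = 0; i < n; i++) {
		if (is_traced)
			trace_count++;  /* do_trace_work */
		sum += pkts[i];         /* normal_packet_processing */
	}
	return sum;
}

static int
ring_handler(const int *pkts, int n)
{
	if (PREDICT_FALSE(global_trace_flag != 0))
		return ring_handler_inline(pkts, n, 1 /* is_traced */);
	return ring_handler_inline(pkts, n, 0 /* is_traced */);
}
```

The trace flag is tested once per burst rather than once per packet, which is how the runtime tax stays minimal at the cost of code size.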

> 
> HTH... Dave
> 
> -----Original Message-----
> From: Ray Kinsella <mdr@ashroe.eu>
> Sent: Monday, January 13, 2020 6:00 AM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dpdk-dev
> <dev@dpdk.org>; dave@barachs.net
> Subject: Re: [RFC] [dpdk-dev] DPDK Trace support
> 
> Hi Jerin,
> 
> Any idea why lttng performance is so poor?
> I would have naturally gone there to benefit from the existing toolchain.
> 
> Have you looked at the FD.io logging/tracing infrastructure for inspiration?
> https://wiki.fd.io/view/VPP/elog
> 
> Ray K
> 
> On 13/01/2020 10:40, Jerin Jacob Kollanukkaran wrote:
> > Hi All,
> >
> > I would like to add tracing support for DPDK.
> > I am planning to add this support in v20.05 release.
> >
> > This RFC attempts to get feedback from the community on
> >
> > a) Tracing use cases.
> > b) Tracing requirements.
> > c) Implementation choices.
> > d) Trace format.
> >
> > Use-cases
> > ---------
> > - In most cases, the DPDK provider will not have access to the DPDK
> customer applications.
> > To debug/analyze the slow path and fast path DPDK API usage from the
> > field, we need to have integrated trace support in DPDK.
> >
> > - Need a low overhead Fast path multi-core PMD driver
> > debugging/analysis infrastructure in DPDK to fix the functional and
> performance issue(s) of PMD.
> >
> > - Post trace analysis tools can provide various status across the
> > system such as cpu_idle() using the timestamp added in the trace.
> >
> >
> > Requirements:
> > -------------
> > - Support for Linux, FreeBSD and Windows OS
> > - Open trace format
> > - Multi-platform Open source trace viewer
> > - Absolute low overhead trace API for DPDK fast path tracing/debugging.
> > - Dynamic enable/disable of trace events
> >
> >
> > To enable trace support in DPDK, the following items need to be worked out:
> >
> > a) Add the DPDK trace points in the DPDK source code.
> >
> > - This includes updating DPDK functions such as,
> > rte_eth_dev_configure(), rte_eth_dev_start(), rte_eth_dev_rx_burst() to emit
> the trace.
> >
> > b) Choosing suitable serialization-format
> >
> > - Common Trace Format, CTF, is an open format and language to describe
> trace formats.
> > This enables tool reuse, of which line-textual (babeltrace) and
> > graphical (TraceCompass) variants already exist.
> >
> > CTF should look familiar to C programmers but adds stronger typing.
> > See CTF - A Flexible, High-performance Binary Trace Format.
> >
> > https://diamon.org/ctf/
> >
> > c) Writing the on-target serialization code,
> >
> > See the section below.(Lttng CTF trace emitter vs DPDK specific CTF
> > trace emitter)
> >
> > d) Deciding on and writing the I/O transport mechanics,
> >
> > For performance reasons, it should be backed by a huge-page and write to file
> IO.
> >
> > e) Writing the PC-side deserializer/parser,
> >
> > Both babeltrace (CLI tool) and Trace Compass (GUI tool) support CTF.
> > See:
> > https://lttng.org/viewers/
> >
> > f) Writing tools for filtering and presentation.
> >
> > See item (e)
> >
> >
> > Lttng CTF trace emitter vs DPDK specific CTF trace emitter
> > ----------------------------------------------------------
> >
> > I have written a performance evaluation application to measure the
> > overhead of the Lttng CTF emitter (the fast-path infrastructure the
> > https://lttng.org/ library uses to emit the trace):
> >
> > https://github.com/jerinjacobk/lttng-overhead
> > https://github.com/jerinjacobk/lttng-overhead/blob/master/README
> >
> > I could improve the performance by 30% by adding a "DPDK"-based
> > plugin for get_clock() and get_cpu(). Here are the performance
> > numbers after adding the plugin, on x86 and the various arm64 boards
> > that I have access to:
> >
> > On a high-end x86, it comes to around 236 cycles/~100 ns @ 2.4 GHz
> > (see the last line in the log (ZERO_ARG)). On arm64, it varies from
> > 312 cycles to 1100 cycles (depending on the class of CPU). In short,
> > based on the IPC capabilities, the cost would be around 100 ns to
> > 400 ns for a single void trace (a trace without any argument).
> >
> >
> > [lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
> > make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
> > make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
> > EAL: Detected 56 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:01:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device 0000:01:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > CPU Timer freq is 2600.000000MHz
> > NOP: cycles=0.194834 ns=0.074936
> > GET_CLOCK: cycles=47.854658 ns=18.405638
> > GET_CPU: cycles=30.995892 ns=11.921497
> > ZERO_ARG: cycles=236.945113 ns=91.132736
> >
> >
> > We will have only 16.75 ns to process each packet at 59.2 Mpps
> > (40 Gbps), so IMO the Lttng CTF emitter may not fit the DPDK
> > fast-path purpose due to the cost associated with generic Lttng
> > features.
> >
> > One option could be to have a native CTF emitter in EAL/DPDK that
> > emits the trace into a hugepage. I think it would cost only a handful
> > of cycles if we limit the features to the requirements above.
> >
> > The upside of using the Lttng CTF emitter:
> > a) No need to write a new CTF trace emitter (item (c) above).
> >
> > The downsides of the Lttng CTF emitter:
> > a) The performance issue (see above).
> > b) Lack of Windows OS support. It looks like it has only basic FreeBSD support.
> > c) A DPDK library dependency on lttng for tracing.
> >
> > So, it is probably good to have a native CTF emitter in DPDK and
> > reuse all the open-source trace viewer (babeltrace and TraceCompass)
> > and format (CTF) infrastructure. I think it would be the best of both
> > worlds.
> >
> > Any thoughts on this subject? Based on the community feedback, I can
> > work on the patch for v20.05.
> >




Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-13 10:40 [dpdk-dev] [RFC] DPDK Trace support Jerin Jacob Kollanukkaran
2020-01-13 11:00 ` Ray Kinsella
2020-01-13 12:04   ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2020-01-18 15:14   ` [dpdk-dev] " dave
2020-01-20 16:51     ` Stephen Hemminger
2020-01-13 13:05 ` Bruce Richardson
2020-01-13 14:46   ` Jerin Jacob
2020-01-13 14:58     ` Bruce Richardson
2020-01-13 15:13       ` Jerin Jacob
2020-01-13 16:12         ` Bruce Richardson
2020-01-17  4:41           ` Jerin Jacob
2020-01-17  8:04             ` David Marchand
2020-01-17  9:52               ` Jerin Jacob
2020-01-17 10:30                 ` Mattias Rönnblom
2020-01-17 10:54                   ` Jerin Jacob
2020-02-15 10:21                     ` Jerin Jacob
2020-02-17  9:35                       ` Mattias Rönnblom
2020-02-17 10:23                         ` Jerin Jacob
2020-01-17 10:43                 ` David Marchand
2020-01-17 11:08                   ` Jerin Jacob
2020-01-27 16:12 ` Aaron Conole
2020-01-27 17:23   ` Jerin Jacob
2020-01-20  4:48 Jerin Jacob Kollanukkaran
2020-01-20 12:08 ` Ray Kinsella
