* Re: Babeltrace performance issue in live-reading mode
       [not found] <CAMYaK0TVYEgM_qxCExiFn6BcgZ58+KXKqx72h_0Gv4g2JN40Yg@mail.gmail.com>
@ 2017-09-18 15:18 ` Jonathan Rajotte-Julien
       [not found] ` <20170918151857.afnn5n6ntzhyzfnw@psrcode-TP-X230>
  1 sibling, 0 replies; 10+ messages in thread
From: Jonathan Rajotte-Julien @ 2017-09-18 15:18 UTC (permalink / raw)
  To: liguang li; +Cc: lttng-dev

Hi,

On Mon, Sep 18, 2017 at 11:32:07AM +0800, liguang li wrote:
>    Hi,
> 
>    Create a session in live-reading mode, run a application which having very
>    high event throughput, then prints
>    the events with babeltrace. We found the live trace viewer are viewing
>    events a few seconds ago, and as time

Could you provide us with the versions used for babeltrace, lttng-tools and lttng-ust?

>    goes on, the delay will be bigger and bigger.

A similar issue was observed a couple of months ago, involving delayed-ACK
problems in the communication between lttng-relayd and babeltrace.

The following fixes were merged:

[1] https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
[2] https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
[3] https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648

In the event that you are already using an updated version of babeltrace and
lttng-tools, it would be pertinent to provide us with a simple reproducer so we
can assess the issue.

Cheers

>    I checked the source code, found Babeltrace in live-reading mode will read
>    the recorded events in the CTF
>    files, and then parse and print it in a single thread. The process is a
>    little slow, do you have any ideas to
>    improve the process.
>    Thanks,
>    Liguang

> _______________________________________________
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


-- 
Jonathan Rajotte-Julien
EfficiOS
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Babeltrace performance issue in live-reading mode
       [not found] ` <20170918151857.afnn5n6ntzhyzfnw@psrcode-TP-X230>
@ 2017-09-19  7:53   ` liguang li
       [not found]   ` <CAMYaK0TGprotjj0v-HayqHAaKpJaTXgqqYh55CK9JmOKBO-U8A@mail.gmail.com>
  1 sibling, 0 replies; 10+ messages in thread
From: liguang li @ 2017-09-19  7:53 UTC (permalink / raw)
  To: Jonathan Rajotte-Julien; +Cc: lttng-dev


[-- Attachment #1.1: Type: text/plain, Size: 2319 bytes --]

Hi,

On Mon, Sep 18, 2017 at 11:18 PM, Jonathan Rajotte-Julien <
jonathan.rajotte-julien@efficios.com> wrote:

> Hi,
>
> On Mon, Sep 18, 2017 at 11:32:07AM +0800, liguang li wrote:
> >    Hi,
> >
> >    Create a session in live-reading mode, run a application which having
> very
> >    high event throughput, then prints
> >    the events with babeltrace. We found the live trace viewer are viewing
> >    events a few seconds ago, and as time
>
> Could you provide us the version used for babeltrace, lttng-tools and
> lttng-ust?
>

 Babeltrace: 1.5.1
 Lttng-tools: 2.8.6
 Lttng-ust: 2.8.2

>    goes on, the delay will be bigger and bigger.
>
> A similar issues was observed a couple months ago, which implicated
> multiple delayed ack
> problems during communication between lttng-relayd and babeltrace.
>
> The following fixes were merged:
>
> [1] https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
> [2] https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
> [3] https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
>
> In the event that you are already using an updated version of babeltrace
> and
> lttng-tools, it would be pertinent to provide us with a simple reproducer
> so we
> can assess the issue.
>
>
 Steps:

 lttng create session --live -U net://*
 lttng enable-channel -s session -u ch1
 lttng enable-event -s session -c ch1 -u -a
 lttng start

 Run a high event throughput application (a multithreaded application).

 babeltrace -i lttng-live net://*

 After a while, we found that the timestamps of the events printed by
babeltrace differ from the current time on the host running the application,
and the delay grows larger and larger as time goes on.
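
 For reference, a more concrete sketch of the same steps, assuming a
 hypothetical relayd host "relayd-host", traced host "app-host" and session
 name "mysession" (our real host names and session names differ; the exact
 live URL depends on the session list shown by
 "babeltrace -i lttng-live net://relayd-host"):

  # on the relay host
  lttng-relayd -d

  # on the traced host
  lttng create mysession --live -U net://relayd-host
  lttng enable-channel -s mysession -u ch1
  lttng enable-event -s mysession -c ch1 -u -a
  lttng start
  # ... run the high event throughput, multithreaded application ...

  # on the viewer host
  babeltrace -i lttng-live net://relayd-host/host/app-host/mysession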


> Cheers
>
> >    I checked the source code, found Babeltrace in live-reading mode will
> read
> >    the recorded events in the CTF
> >    files, and then parse and print it in a single thread. The process is
> a
> >    little slow, do you have any ideas to
> >    improve the process.
> >    Thanks,
> >    Liguang
>
> > _______________________________________________
> > lttng-dev mailing list
> > lttng-dev@lists.lttng.org
> > https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
>
>
> --
> Jonathan Rajotte-Julien
> EfficiOS
>

[-- Attachment #1.2: Type: text/html, Size: 4066 bytes --]

[-- Attachment #2: Type: text/plain, Size: 156 bytes --]

_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Babeltrace performance issue in live-reading mode
       [not found]   ` <CAMYaK0TGprotjj0v-HayqHAaKpJaTXgqqYh55CK9JmOKBO-U8A@mail.gmail.com>
@ 2017-09-19 14:57     ` Jonathan Rajotte-Julien
       [not found]     ` <20170919145707.5wh3e23zet7jm3bc@psrcode-TP-X230>
  1 sibling, 0 replies; 10+ messages in thread
From: Jonathan Rajotte-Julien @ 2017-09-19 14:57 UTC (permalink / raw)
  To: liguang li; +Cc: lttng-dev

On Tue, Sep 19, 2017 at 03:53:27PM +0800, liguang li wrote:
>    Hi,
>    On Mon, Sep 18, 2017 at 11:18 PM, Jonathan Rajotte-Julien
>    <[1]jonathan.rajotte-julien@efficios.com> wrote:
> 
>      Hi,
>      On Mon, Sep 18, 2017 at 11:32:07AM +0800, liguang li wrote:
>      >    Hi,
>      >
>      >    Create a session in live-reading mode, run a application which
>      having very
>      >    high event throughput, then prints
>      >    the events with babeltrace. We found the live trace viewer are
>      viewing
>      >    events a few seconds ago, and as time
> 
>      Could you provide us the version used for babeltrace, lttng-tools and
>      lttng-ust?
> 
>     Babeltrace: 1.5.1

Update to babeltrace 1.5.3.

>     Lttng-tools: 2.8.6

Update to lttng-tools 2.8.8

>     Lttng-ust: 2.8.2
> 
>      >    goes on, the delay will be bigger and bigger.
> 
>      A similar issues was observed a couple months ago, which implicated
>      multiple delayed ack
>      problems during communication between lttng-relayd and babeltrace.
> 
>      The following fixes were merged:
> 
>      [1]
>      [2]https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
>      [2]
>      [3]https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
>      [3]
>      [4]https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
> 
>      In the event that you are already using an updated version of babeltrace
>      and
>      lttng-tools, it would be pertinent to provide us with a simple
>      reproducer so we
>      can assess the issue.

The version you are using does not include the mentioned fixes. Please update
and redo your experiment.

Cheers

> 
>     
>     Steps:
>     lttng create session --live -U net://*
>     lttng enable-channel -s session -u ch1
>     lttng enable-event -s session -c ch1 -u -a
>     lttng start
>     
>     Run a high event throughput application, which is multithreaded
>    application.
>     babeltrace -i lttng-live net://*
>     
>     After a while, we found the timestamp of the event in the babeltrace is
>    different with the time in host
>     which run the application. And the delay will be bigger and bigger with
>    time goes.
>     
> 
>      Cheers
>      >    I checked the source code, found Babeltrace in live-reading mode
>      will read
>      >    the recorded events in the CTF
>      >    files, and then parse and print it in a single thread. The process
>      is a
>      >    little slow, do you have any ideas to
>      >    improve the process.
>      >    Thanks,
>      >    Liguang
> 
>      > _______________________________________________
>      > lttng-dev mailing list
>      > [5]lttng-dev@lists.lttng.org
>      > [6]https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
> 
>      --
>      Jonathan Rajotte-Julien
>      EfficiOS
> 
> References
> 
>    Visible links
>    1. mailto:jonathan.rajotte-julien@efficios.com
>    2. https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
>    3. https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
>    4. https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
>    5. mailto:lttng-dev@lists.lttng.org
>    6. https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

-- 
Jonathan Rajotte-Julien
EfficiOS
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Babeltrace performance issue in live-reading mode
       [not found]     ` <20170919145707.5wh3e23zet7jm3bc@psrcode-TP-X230>
@ 2017-09-20  9:12       ` liguang li
       [not found]       ` <CAMYaK0SQkdjs_gp+4fwBL1BH1oqEiqYnN6PXq-H-=ApMRbe-bg@mail.gmail.com>
  1 sibling, 0 replies; 10+ messages in thread
From: liguang li @ 2017-09-20  9:12 UTC (permalink / raw)
  To: Jonathan Rajotte-Julien; +Cc: lttng-dev


[-- Attachment #1.1: Type: text/plain, Size: 4405 bytes --]

On Tue, Sep 19, 2017 at 10:57 PM, Jonathan Rajotte-Julien <
jonathan.rajotte-julien@efficios.com> wrote:

> On Tue, Sep 19, 2017 at 03:53:27PM +0800, liguang li wrote:
> >    Hi,
> >    On Mon, Sep 18, 2017 at 11:18 PM, Jonathan Rajotte-Julien
> >    <[1]jonathan.rajotte-julien@efficios.com> wrote:
> >
> >      Hi,
> >      On Mon, Sep 18, 2017 at 11:32:07AM +0800, liguang li wrote:
> >      >    Hi,
> >      >
> >      >    Create a session in live-reading mode, run a application which
> >      having very
> >      >    high event throughput, then prints
> >      >    the events with babeltrace. We found the live trace viewer are
> >      viewing
> >      >    events a few seconds ago, and as time
> >
> >      Could you provide us the version used for babeltrace, lttng-tools
> and
> >      lttng-ust?
> >
> >     Babeltrace: 1.5.1
>
> Update to babeltrace 1.5.3.
>
> >     Lttng-tools: 2.8.6
>
> Update to lttng-tools 2.8.8
>
> >     Lttng-ust: 2.8.2
> >
> >      >    goes on, the delay will be bigger and bigger.
> >
> >      A similar issues was observed a couple months ago, which implicated
> >      multiple delayed ack
> >      problems during communication between lttng-relayd and babeltrace.
> >
> >      The following fixes were merged:
> >
> >      [1]
> >      [2]https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
> >      [2]
> >      [3]https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
> >      [3]
> >      [4]https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
> >
> >      In the event that you are already using an updated version of
> babeltrace
> >      and
> >      lttng-tools, it would be pertinent to provide us with a simple
> >      reproducer so we
> >      can assess the issue.
>
> The version you are using does not include the mentioned fixes. Please
> update
> and redo your experiment.
>
>
I tested with the versions you listed; the issue still exists.


> Cheers
>
> >
> >
> >     Steps:
> >     lttng create session --live -U net://*
> >     lttng enable-channel -s session -u ch1
> >     lttng enable-event -s session -c ch1 -u -a
> >     lttng start
> >
> >     Run a high event throughput application, which is multithreaded
> >    application.
>

In the multithreaded application, each tracepoint records the system
wall-clock time, so we can easily reproduce this issue by comparing the
timestamp of a recorded event with the current system wall time.
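
A rough sketch of that comparison on the viewer side (the lttng-live URL is
a placeholder, and gawk's strftime() is assumed to be available): every
100,000 events, print the host's current wall time next to the timestamp of
the event babeltrace is emitting, and watch the gap grow.

  babeltrace -i lttng-live net://relayd-host/host/app-host/mysession |
    awk '++n % 100000 == 0 { print "host wall time:", strftime("%H:%M:%S"),
                             " babeltrace is at:", $1 }'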


> >     babeltrace -i lttng-live net://*
> >
> >     After a while, we found the timestamp of the event in the babeltrace
> is
> >    different with the time in host
> >     which run the application. And the delay will be bigger and bigger
> with
> >    time goes.
> >
> >
> >      Cheers
> >      >    I checked the source code, found Babeltrace in live-reading
> mode
> >      will read
> >      >    the recorded events in the CTF
> >      >    files, and then parse and print it in a single thread. The
> process
> >      is a
> >      >    little slow, do you have any ideas to
> >      >    improve the process.
>

From my understanding of the source code, parsing and printing the events
consumes a lot of time. For example, if the multithreaded application keeps
3 CPUs busy, then over a given time interval 3 sub-buffers are filled, sent
to the lttng-relayd daemon and recorded into the CTF files. If, in that same
interval, babeltrace only handles the events of 2 sub-buffers, the backlog
grows and the issue appears.
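
One way to check whether the consumer side really is the bottleneck is to
measure how many events per second babeltrace can emit for this trace (a
sketch; the trace path is a placeholder and "pv" is an external tool assumed
to be installed):

  babeltrace /path/to/recorded/trace | pv -l > /dev/null

If the line rate reported by pv stays below the application's event
production rate, a live viewer built on the same single-threaded reader will
necessarily fall further and further behind.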


> >      >    Thanks,
> >      >    Liguang
> >
> >      > _______________________________________________
> >      > lttng-dev mailing list
> >      > [5]lttng-dev@lists.lttng.org
> >      > [6]https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
> >
> >      --
> >      Jonathan Rajotte-Julien
> >      EfficiOS
> >
> > References
> >
> >    Visible links
> >    1. mailto:jonathan.rajotte-julien@efficios.com
> >    2. https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
> >    3. https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
> >    4. https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
> >    5. mailto:lttng-dev@lists.lttng.org
> >    6. https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
>
> --
> Jonathan Rajotte-Julien
> EfficiOS
>

[-- Attachment #1.2: Type: text/html, Size: 7725 bytes --]

[-- Attachment #2: Type: text/plain, Size: 156 bytes --]

_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Babeltrace performance issue in live-reading mode
       [not found]       ` <CAMYaK0SQkdjs_gp+4fwBL1BH1oqEiqYnN6PXq-H-=ApMRbe-bg@mail.gmail.com>
@ 2017-09-20 14:58         ` Jonathan Rajotte Julien
       [not found]         ` <cdf53e97-49f0-e6d4-e4b2-0bca6ab84534@efficios.com>
  1 sibling, 0 replies; 10+ messages in thread
From: Jonathan Rajotte Julien @ 2017-09-20 14:58 UTC (permalink / raw)
  To: liguang li; +Cc: lttng-dev

Hi,

On 2017-09-20 05:12 AM, liguang li wrote:
> 
> 
> On Tue, Sep 19, 2017 at 10:57 PM, Jonathan Rajotte-Julien <jonathan.rajotte-julien@efficios.com <mailto:jonathan.rajotte-julien@efficios.com>> wrote:
> 
>     On Tue, Sep 19, 2017 at 03:53:27PM +0800, liguang li wrote:
>     >    Hi,
>     >    On Mon, Sep 18, 2017 at 11:18 PM, Jonathan Rajotte-Julien
>     >    <[1]jonathan.rajotte-julien@efficios.com <mailto:jonathan.rajotte-julien@efficios.com>> wrote:
>     >
>     >      Hi,
>     >      On Mon, Sep 18, 2017 at 11:32:07AM +0800, liguang li wrote:
>     >      >    Hi,
>     >      >
>     >      >    Create a session in live-reading mode, run a application which
>     >      having very
>     >      >    high event throughput, then prints
>     >      >    the events with babeltrace. We found the live trace viewer are
>     >      viewing
>     >      >    events a few seconds ago, and as time
>     >
>     >      Could you provide us the version used for babeltrace, lttng-tools and
>     >      lttng-ust?
>     >
>     >     Babeltrace: 1.5.1
> 
>     Update to babeltrace 1.5.3.
> 
>     >     Lttng-tools: 2.8.6
> 
>     Update to lttng-tools 2.8.8
> 
>     >     Lttng-ust: 2.8.2
>     >
>     >      >    goes on, the delay will be bigger and bigger.
>     >
>     >      A similar issues was observed a couple months ago, which implicated
>     >      multiple delayed ack
>     >      problems during communication between lttng-relayd and babeltrace.
>     >
>     >      The following fixes were merged:
>     >
>     >      [1]
>     >      [2]https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
>     >      [2]
>     >      [3]https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
>     >      [3]
>     >      [4]https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
>     >
>     >      In the event that you are already using an updated version of babeltrace
>     >      and
>     >      lttng-tools, it would be pertinent to provide us with a simple
>     >      reproducer so we
>     >      can assess the issue.
> 
>     The version you are using does not include the mentioned fixes. Please update
>     and redo your experiment.
> 
> 
> Test this issue in the version you have listed, the issue still exists.

Given that previous versions had a major timing problem there, I would expect to see at least some improvement.

In that case, we will need a lot more information on your benchmarking strategy.
We will need a simple reproducer, your benchmark code (R, gnuplot, etc.) and your overall methodology,
so that we can reproduce the issue locally. Otherwise, it will be very hard
to come to any conclusion.

>  
> 
>     Cheers
> 
>     >
>     >     
>     >     Steps:
>     >     lttng create session --live -U net://*
>     >     lttng enable-channel -s session -u ch1
>     >     lttng enable-event -s session -c ch1 -u -a
>     >     lttng start
>     >     
>     >     Run a high event throughput application, which is multithreaded
>     >    application.
> 
> In the multithreaded application, each tracepoint will have the wall
> time of the system,then we can easily reproduce this issue through
> comparing the time of recorded event and the system wall time.
>  
> 
>     >     babeltrace -i lttng-live net://*
>     >     
>     >     After a while, we found the timestamp of the event in the babeltrace is
>     >    different with the time in host
>     >     which run the application. And the delay will be bigger and bigger with
>     >    time goes.
>     >     
>     >
>     >      Cheers
>     >      >    I checked the source code, found Babeltrace in live-reading mode
>     >      will read
>     >      >    the recorded events in the CTF
>     >      >    files, and then parse and print it in a single thread. The process
>     >      is a
>     >      >    little slow, do you have any ideas to
>     >      >    improve the process.
> 
> 
> From my understanding of the source code, the process of parse and
> print event will consume a lot of time. For example, the multithreaded
> application will consume 3 CPUs, in a specified time,3 subbuffers will
> be filled and sent to lttng-relayd daemon, recorded into the CTF files.
> If in the specified time, babeltrace only handled 2 subbuffers' event,
> thenthe issue will happens.

Did you perform a bisection to find where the delay comes from? Reception of packets? Formatting of events?
What is the throughput of the application?
How many tracepoint definitions are there?
Does babeltrace catch up if a quiescent period is given?
Could you provide us with statistics, timing data, etc.?
What type of delay are we talking about?


-- 
Jonathan R. Julien
Efficios
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Babeltrace performance issue in live-reading mode
       [not found]         ` <cdf53e97-49f0-e6d4-e4b2-0bca6ab84534@efficios.com>
@ 2017-09-21  9:34           ` liguang li
       [not found]           ` <CAMYaK0TEWZGtZnvO5aDw5CZi8Rr1tXdoaBrkDRO6k-biydVn_g@mail.gmail.com>
  1 sibling, 0 replies; 10+ messages in thread
From: liguang li @ 2017-09-21  9:34 UTC (permalink / raw)
  To: Jonathan Rajotte Julien; +Cc: lttng-dev


[-- Attachment #1.1: Type: text/plain, Size: 5934 bytes --]

Hi,

On Wed, Sep 20, 2017 at 10:58 PM, Jonathan Rajotte Julien <
Jonathan.rajotte-julien@efficios.com> wrote:

> Hi,
>
> On 2017-09-20 05:12 AM, liguang li wrote:
> >
> >
> > On Tue, Sep 19, 2017 at 10:57 PM, Jonathan Rajotte-Julien <
> jonathan.rajotte-julien@efficios.com <mailto:jonathan.rajotte-
> julien@efficios.com>> wrote:
> >
> >     On Tue, Sep 19, 2017 at 03:53:27PM +0800, liguang li wrote:
> >     >    Hi,
> >     >    On Mon, Sep 18, 2017 at 11:18 PM, Jonathan Rajotte-Julien
> >     >    <[1]jonathan.rajotte-julien@efficios.com <mailto:
> jonathan.rajotte-julien@efficios.com>> wrote:
> >     >
> >     >      Hi,
> >     >      On Mon, Sep 18, 2017 at 11:32:07AM +0800, liguang li wrote:
> >     >      >    Hi,
> >     >      >
> >     >      >    Create a session in live-reading mode, run a application
> which
> >     >      having very
> >     >      >    high event throughput, then prints
> >     >      >    the events with babeltrace. We found the live trace
> viewer are
> >     >      viewing
> >     >      >    events a few seconds ago, and as time
> >     >
> >     >      Could you provide us the version used for babeltrace,
> lttng-tools and
> >     >      lttng-ust?
> >     >
> >     >     Babeltrace: 1.5.1
> >
> >     Update to babeltrace 1.5.3.
> >
> >     >     Lttng-tools: 2.8.6
> >
> >     Update to lttng-tools 2.8.8
> >
> >     >     Lttng-ust: 2.8.2
> >     >
> >     >      >    goes on, the delay will be bigger and bigger.
> >     >
> >     >      A similar issues was observed a couple months ago, which
> implicated
> >     >      multiple delayed ack
> >     >      problems during communication between lttng-relayd and
> babeltrace.
> >     >
> >     >      The following fixes were merged:
> >     >
> >     >      [1]
> >     >      [2]https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
> >     >      [2]
> >     >      [3]https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
> >     >      [3]
> >     >      [4]https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
> >     >
> >     >      In the event that you are already using an updated version of
> babeltrace
> >     >      and
> >     >      lttng-tools, it would be pertinent to provide us with a simple
> >     >      reproducer so we
> >     >      can assess the issue.
> >
> >     The version you are using does not include the mentioned fixes.
> Please update
> >     and redo your experiment.
> >
> >
> > Test this issue in the version you have listed, the issue still exists.
>
> Given that previous versions had a major timing problem there I would
> expect to have some improvement.
>
> In that case, we will need a lot more information on your benchmarking
> strategy.
> We will need a simple reproducer, your benchmark code (r,gnuplot etc.),
> your overall methodology
> to be able to reproduce the issue locally. Otherwise, it will be very hard
> to come to any conclusion.
>
>
Sorry, we cannot provide detailed information about the benchmark code due to
company regulations.


> >
> >
> >     Cheers
> >
> >     >
> >     >
> >     >     Steps:
> >     >     lttng create session --live -U net://*
> >     >     lttng enable-channel -s session -u ch1
> >     >     lttng enable-event -s session -c ch1 -u -a
> >     >     lttng start
> >     >
> >     >     Run a high event throughput application, which is multithreaded
> >     >    application>
> >
> > In the multithreaded application, each tracepoint will have the wall
> > time of the system,then we can easily reproduce this issue through
> > comparing the time of recorded event and the system wall time.
> >
> >
> >     >     babeltrace -i lttng-live net://*
> >     >
> >     >     After a while, we found the timestamp of the event in the
> babeltrace is
> >     >    different with the time in host
> >     >     which run the application. And the delay will be bigger and
> bigger with
> >     >    time goes.
> >     >
> >     >
> >     >      Cheers
> >     >      >    I checked the source code, found Babeltrace in
> live-reading mode
> >     >      will read
> >     >      >    the recorded events in the CTF
> >     >      >    files, and then parse and print it in a single thread.
> The process
> >     >      is a
> >     >      >    little slow, do you have any ideas to
> >     >      >    improve the process.
> >
> >
> > From my understanding of the source code, the process of parse and
> > print event will consume a lot of time. For example, the multithreaded
> > application will consume 3 CPUs, in a specified time,3 subbuffers will
> > be filled and sent to lttng-relayd daemon, recorded into the CTF files.
> > If in the specified time, babeltrace only handled 2 subbuffers' event,
> > thenthe issue will happens.
>
> Did you perform a bisection for where the delay come from? Reception of
> packet? formatting of event?
> What is the throughput of the application?
> How many tracepoint definition?
> Does babeltrace catch up if a quiescent period is given?
>

I think so.


> Could you provide us with statistics, timing data, etc.?
>
What type of delay are we talking about?
>
>
For example, the system wall time is 5:18 PM now, but Babeltrace is still
printing events timestamped 5:10 PM. And after the application is stopped,
Babeltrace still needs some time to finish printing the recorded events.

I think the root cause is that the parsing and printing process is a bit
slow, so I want to know whether there is any way to improve the performance
of this process.

Regards,
Liguang

--
> Jonathan R. Julien
> Efficios
>

[-- Attachment #1.2: Type: text/html, Size: 9896 bytes --]

[-- Attachment #2: Type: text/plain, Size: 156 bytes --]

_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Babeltrace performance issue in live-reading mode
       [not found]           ` <CAMYaK0TEWZGtZnvO5aDw5CZi8Rr1tXdoaBrkDRO6k-biydVn_g@mail.gmail.com>
@ 2017-09-21 15:03             ` Jonathan Rajotte Julien
       [not found]             ` <05dc30ee-bf01-1960-3ec6-54018dd4512d@efficios.com>
  1 sibling, 0 replies; 10+ messages in thread
From: Jonathan Rajotte Julien @ 2017-09-21 15:03 UTC (permalink / raw)
  To: liguang li; +Cc: lttng-dev

Hi,

On 2017-09-21 05:34 AM, liguang li wrote:
> Hi,
> 
> On Wed, Sep 20, 2017 at 10:58 PM, Jonathan Rajotte Julien <Jonathan.rajotte-julien@efficios.com <mailto:Jonathan.rajotte-julien@efficios.com>> wrote:
> 
>     Hi,
> 
>     On 2017-09-20 05:12 AM, liguang li wrote:
>     >
>     >
>     > On Tue, Sep 19, 2017 at 10:57 PM, Jonathan Rajotte-Julien <jonathan.rajotte-julien@efficios.com <mailto:jonathan.rajotte-julien@efficios.com> <mailto:jonathan.rajotte-julien@efficios.com <mailto:jonathan.rajotte-julien@efficios.com>>> wrote:
>     >
>     >     On Tue, Sep 19, 2017 at 03:53:27PM +0800, liguang li wrote:
>     >     >    Hi,
>     >     >    On Mon, Sep 18, 2017 at 11:18 PM, Jonathan Rajotte-Julien
>     >     >    <[1]jonathan.rajotte-julien@efficios.com <mailto:jonathan.rajotte-julien@efficios.com> <mailto:jonathan.rajotte-julien@efficios.com <mailto:jonathan.rajotte-julien@efficios.com>>> wrote:
>     >     >
>     >     >      Hi,
>     >     >      On Mon, Sep 18, 2017 at 11:32:07AM +0800, liguang li wrote:
>     >     >      >    Hi,
>     >     >      >
>     >     >      >    Create a session in live-reading mode, run a application which
>     >     >      having very
>     >     >      >    high event throughput, then prints
>     >     >      >    the events with babeltrace. We found the live trace viewer are
>     >     >      viewing
>     >     >      >    events a few seconds ago, and as time
>     >     >
>     >     >      Could you provide us the version used for babeltrace, lttng-tools and
>     >     >      lttng-ust?
>     >     >
>     >     >     Babeltrace: 1.5.1
>     >
>     >     Update to babeltrace 1.5.3.
>     >
>     >     >     Lttng-tools: 2.8.6
>     >
>     >     Update to lttng-tools 2.8.8
>     >
>     >     >     Lttng-ust: 2.8.2
>     >     >
>     >     >      >    goes on, the delay will be bigger and bigger.
>     >     >
>     >     >      A similar issues was observed a couple months ago, which implicated
>     >     >      multiple delayed ack
>     >     >      problems during communication between lttng-relayd and babeltrace.
>     >     >
>     >     >      The following fixes were merged:
>     >     >
>     >     >      [1]
>     >     >      [2]https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
>     >     >      [2]
>     >     >      [3]https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
>     >     >      [3]
>     >     >      [4]https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
>     >     >
>     >     >      In the event that you are already using an updated version of babeltrace
>     >     >      and
>     >     >      lttng-tools, it would be pertinent to provide us with a simple
>     >     >      reproducer so we
>     >     >      can assess the issue.
>     >
>     >     The version you are using does not include the mentioned fixes. Please update
>     >     and redo your experiment.
>     >
>     >
>     > Test this issue in the version you have listed, the issue still exists.
> 
>     Given that previous versions had a major timing problem there I would expect to have some improvement.
> 
>     In that case, we will need a lot more information on your benchmarking strategy.
>     We will need a simple reproducer, your benchmark code (r,gnuplot etc.), your overall methodology
>     to be able to reproduce the issue locally. Otherwise, it will be very hard
>     to come to any conclusion.
> 
> 
> Sorry, we can not provide the detailed information about the benchmark code due to the company regulations.
>  
> 
>     >  
>     >
>     >     Cheers
>     >
>     >     >
>     >     >     
>     >     >     Steps:
>     >     >     lttng create session --live -U net://*
>     >     >     lttng enable-channel -s session -u ch1
>     >     >     lttng enable-event -s session -c ch1 -u -a
>     >     >     lttng start
>     >     >     
>     >     >     Run a high event throughput application, which is multithreaded
>     >     >    application>
>     >
>     > In the multithreaded application, each tracepoint will have the wall
>     > time of the system,then we can easily reproduce this issue through
>     > comparing the time of recorded event and the system wall time.
>     >  
>     >
>     >     >     babeltrace -i lttng-live net://*
>     >     >     
>     >     >     After a while, we found the timestamp of the event in the babeltrace is
>     >     >    different with the time in host
>     >     >     which run the application. And the delay will be bigger and bigger with
>     >     >    time goes.
>     >     >     
>     >     >
>     >     >      Cheers
>     >     >      >    I checked the source code, found Babeltrace in live-reading mode
>     >     >      will read
>     >     >      >    the recorded events in the CTF
>     >     >      >    files, and then parse and print it in a single thread. The process
>     >     >      is a
>     >     >      >    little slow, do you have any ideas to
>     >     >      >    improve the process.
>     >
>     >
>     > From my understanding of the source code, the process of parse and
>     > print event will consume a lot of time. For example, the multithreaded
>     > application will consume 3 CPUs, in a specified time,3 subbuffers will
>     > be filled and sent to lttng-relayd daemon, recorded into the CTF files.
>     > If in the specified time, babeltrace only handled 2 subbuffers' event,
>     > thenthe issue will happens.
> 
>     Did you perform a bisection for where the delay come from? Reception of packet? formatting of event?

?

>     What is the throughput of the application?


? 
This is a key element if anyone wants to test this at home. It can be expressed in events/s or events/ms.

>     How many tracepoint definition?
>     Does babeltrace catch up if a quiescent period is given?
> 
> 
> I think so.
>  
> 
>     Could you provide us with statistics, timing data, etc.?
> 
>     What type of delay are we talking about?
> 
> 
> For example, the system wall time is 5:18 PM now, but the Babeltrace is still printing events which
> have time 5:10 PM. Then stop running the application, the Babeltrace still need some times to print
> the recorded events.
> 
> I think the root cause is that the parsing and printing process is a bit slow. So i want to know if there
> are any method to improve the performance of this process.

A simple experiment would be to run a bounded workload
and perform a quick timing evaluation of babeltrace with a streamed trace (on disk) and in live mode.
Calculate the time (time babeltrace path_to_streamed_trace) it takes babeltrace to read the on-disk trace,
then perform the same experiment with a live trace. It is important to bound your experiment
and to use mostly the same workload in both experiments.
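
A minimal sketch of that comparison (the trace path and live URL are
placeholders for your own setup, and the live run is assumed to terminate
once the session is destroyed at the end of the bounded workload):

  # time to read the bounded, on-disk trace
  time babeltrace /path/to/streamed/trace > /dev/null

  # time to consume the same bounded workload in live mode
  time babeltrace -i lttng-live net://relayd-host/host/app-host/mysession > /dev/null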

This will give us a rough estimate of the disparity between the two scenarios.

Cheers
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Babeltrace performance issue in live-reading mode
       [not found]             ` <05dc30ee-bf01-1960-3ec6-54018dd4512d@efficios.com>
@ 2017-10-11 10:44               ` liguang li
       [not found]               ` <CAMYaK0QBmug12risvr1_7ws5OhrCiXbfDD7+m6vSwwpYs2-c2A@mail.gmail.com>
  1 sibling, 0 replies; 10+ messages in thread
From: liguang li @ 2017-10-11 10:44 UTC (permalink / raw)
  To: Jonathan Rajotte Julien; +Cc: lttng-dev


[-- Attachment #1.1: Type: text/plain, Size: 8879 bytes --]

Hi,


On Thu, Sep 21, 2017 at 11:03 PM, Jonathan Rajotte Julien <
Jonathan.rajotte-julien@efficios.com> wrote:

> Hi,
>
> On 2017-09-21 05:34 AM, liguang li wrote:
> > Hi,
> >
> > On Wed, Sep 20, 2017 at 10:58 PM, Jonathan Rajotte Julien <
> Jonathan.rajotte-julien@efficios.com <mailto:Jonathan.rajotte-
> julien@efficios.com>> wrote:
> >
> >     Hi,
> >
> >     On 2017-09-20 05:12 AM, liguang li wrote:
> >     >
> >     >
> >     > On Tue, Sep 19, 2017 at 10:57 PM, Jonathan Rajotte-Julien <
> jonathan.rajotte-julien@efficios.com <mailto:jonathan.rajotte-
> julien@efficios.com> <mailto:jonathan.rajotte-julien@efficios.com <mailto:
> jonathan.rajotte-julien@efficios.com>>> wrote:
> >     >
> >     >     On Tue, Sep 19, 2017 at 03:53:27PM +0800, liguang li wrote:
> >     >     >    Hi,
> >     >     >    On Mon, Sep 18, 2017 at 11:18 PM, Jonathan Rajotte-Julien
> >     >     >    <[1]jonathan.rajotte-julien@efficios.com <mailto:
> jonathan.rajotte-julien@efficios.com> <mailto:jonathan.rajotte-
> julien@efficios.com <mailto:jonathan.rajotte-julien@efficios.com>>> wrote:
> >     >     >
> >     >     >      Hi,
> >     >     >      On Mon, Sep 18, 2017 at 11:32:07AM +0800, liguang li
> wrote:
> >     >     >      >    Hi,
> >     >     >      >
> >     >     >      >    Create a session in live-reading mode, run a
> application which
> >     >     >      having very
> >     >     >      >    high event throughput, then prints
> >     >     >      >    the events with babeltrace. We found the live
> trace viewer are
> >     >     >      viewing
> >     >     >      >    events a few seconds ago, and as time
> >     >     >
> >     >     >      Could you provide us the version used for babeltrace,
> lttng-tools and
> >     >     >      lttng-ust?
> >     >     >
> >     >     >     Babeltrace: 1.5.1
> >     >
> >     >     Update to babeltrace 1.5.3.
> >     >
> >     >     >     Lttng-tools: 2.8.6
> >     >
> >     >     Update to lttng-tools 2.8.8
> >     >
> >     >     >     Lttng-ust: 2.8.2
> >     >     >
> >     >     >      >    goes on, the delay will be bigger and bigger.
> >     >     >
> >     >     >      A similar issues was observed a couple months ago,
> which implicated
> >     >     >      multiple delayed ack
> >     >     >      problems during communication between lttng-relayd and
> babeltrace.
> >     >     >
> >     >     >      The following fixes were merged:
> >     >     >
> >     >     >      [1]
> >     >     >      [2]https://github.com/lttng/lttng-tools/commit/b6025e9476332b75eb8184345c3eb3e924780088
> >     >     >      [2]
> >     >     >      [3]https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
> >     >     >      [3]
> >     >     >      [4]https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648
> >     >     >
> >     >     >      In the event that you are already using an updated
> version of babeltrace
> >     >     >      and
> >     >     >      lttng-tools, it would be pertinent to provide us with a
> simple
> >     >     >      reproducer so we
> >     >     >      can assess the issue.
> >     >
> >     >     The version you are using does not include the mentioned
> fixes. Please update
> >     >     and redo your experiment.
> >     >
> >     >
> >     > Test this issue in the version you have listed, the issue still
> exists.
> >
> >     Given that previous versions had a major timing problem there I
> would expect to have some improvement.
> >
> >     In that case, we will need a lot more information on your
> benchmarking strategy.
> >     We will need a simple reproducer, your benchmark code (r,gnuplot
> etc.), your overall methodology
> >     to be able to reproduce the issue locally. Otherwise, it will be
> very hard
> >     to come to any conclusion.
> >
> >
> > Sorry, we can not provide the detailed information about the benchmark
> code due to the company regulations.
> >
> >
> >     >
> >     >
> >     >     Cheers
> >     >
> >     >     >
> >     >     >
> >     >     >     Steps:
> >     >     >     lttng create session --live -U net://*
> >     >     >     lttng enable-channel -s session -u ch1
> >     >     >     lttng enable-event -s session -c ch1 -u -a
> >     >     >     lttng start
> >     >     >
> >     >     >     Run a high event throughput application, which is
> multithreaded
> >     >     >    application>
> >     >
> >     > In the multithreaded application, each tracepoint will have the
> wall
> >     > time of the system,then we can easily reproduce this issue through
> >     > comparing the time of recorded event and the system wall time.
> >     >
> >     >
> >     >     >     babeltrace -i lttng-live net://*
> >     >     >
> >     >     >     After a while, we found the timestamp of the event in
> the babeltrace is
> >     >     >    different with the time in host
> >     >     >     which run the application. And the delay will be bigger
> and bigger with
> >     >     >    time goes.
> >     >     >
> >     >     >
> >     >     >      Cheers
> >     >     >      >    I checked the source code, found Babeltrace in
> live-reading mode
> >     >     >      will read
> >     >     >      >    the recorded events in the CTF
> >     >     >      >    files, and then parse and print it in a single
> thread. The process
> >     >     >      is a
> >     >     >      >    little slow, do you have any ideas to
> >     >     >      >    improve the process.
> >     >
> >     >
> >     > From my understanding of the source code, the process of parse and
> >     > print event will consume a lot of time. For example, the
> multithreaded
> >     > application will consume 3 CPUs, in a specified time,3 subbuffers
> will
> >     > be filled and sent to lttng-relayd daemon, recorded into the CTF
> files.
> >     > If in the specified time, babeltrace only handled 2 subbuffers'
> event,
> >     > thenthe issue will happens.
> >
> >     Did you perform a bisection for where the delay come from? Reception
> of packet? formatting of event?
>
> ?
>
> >     What is the throughput of the application?
>
>
> ?
> This is a key element if anyone want to test at home. It can be expressed
> in event/s, event/ms.
>
> >     How many tracepoint definition>     Does babeltrace catch up if a
> quiescent period is given?
> >
> >
> > I think so.
> >
> >
> >     Could you provide us with statistics, timing data, etc.?
> >
> >     What type of delay are we talking about?
> >
> >
> > For example, the system wall time is 5:18 PM now, but the Babeltrace is
> still printing events which
> > have time 5:10 PM. Then stop running the application, the Babeltrace
> still need some times to print
> > the recorded events.
> >
> > I think the root cause is that the parsing and printing process is a bit
> slow. So i want to know if there
> > are any method to improve the performance of this process.
>
> A simple experiment would be to run a bounded workload
> to perform a quick timing evaluation of babeltrace with a streamed trace
> (on disk) and in live mode.
> Calculate the time (time babeltrace path_to_streamed_trace) it takes
> babeltrace to read the on disk trace
> and perform the same experiment with a live trace. It is important to
> bound your experiment
> and have mostly the same workload in both experiment.
>
>
I made a simple example to illustrate the issue I have been hitting.

I ran the application for about 60 seconds, with the trace stored on disk.
Reading that trace with babeltrace then took about 150 seconds, so the
processing speed is much lower than the speed at which the trace is generated.
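
Put another way, with those rough numbers babeltrace needs about 2.5 seconds
of processing per second of tracing, so in live mode the backlog would grow
by roughly 1.5 seconds for every second the application keeps running.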

I think the performance of babeltrace in live mode will not be better than
when reading the trace from disk. So in live mode the event timestamps will
lag behind (babeltrace prints events with timestamps from a few minutes ago),
and I wonder whether there are any ways to improve the performance of
babeltrace in live mode.

Regards,
Liguang


> This will give us a rough estimate of the disparity between both scenario.
>
> Cheers
>

[-- Attachment #1.2: Type: text/html, Size: 14348 bytes --]

[-- Attachment #2: Type: text/plain, Size: 156 bytes --]

_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Babeltrace performance issue in live-reading mode
       [not found]               ` <CAMYaK0QBmug12risvr1_7ws5OhrCiXbfDD7+m6vSwwpYs2-c2A@mail.gmail.com>
@ 2017-10-11 15:34                 ` Jonathan Rajotte-Julien
  0 siblings, 0 replies; 10+ messages in thread
From: Jonathan Rajotte-Julien @ 2017-10-11 15:34 UTC (permalink / raw)
  To: liguang li; +Cc: lttng-dev

Hi,

> > > I think the root cause is that the parsing and printing process is a bit
> > slow. So i want to know if there
> > > are any method to improve the performance of this process.
> >
> > A simple experiment would be to run a bounded workload
> > to perform a quick timing evaluation of babeltrace with a streamed trace
> > (on disk) and in live mode.
> > Calculate the time (time babeltrace path_to_streamed_trace) it takes
> > babeltrace to read the on disk trace
> > and perform the same experiment with a live trace. It is important to
> > bound your experiment
> > and have mostly the same workload in both experiment.
> >
> >


> I made a simple example to illustrate the issue which i had met.
> 
> Run the application for about 60 seconds, the trace is stored on disk. The
Could you also provide the time taken by babeltrace to read the same trace (on
disk) with the -o dummy parameter?

e.g: babeltrace -o dummy /path/to/trace

This will exclude the formatting and outputting delays.
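
For instance (the path being a placeholder), comparing

  time babeltrace -o dummy /path/to/trace
  time babeltrace /path/to/trace > /dev/null

would show how much of the total reading time goes into formatting and
printing the text output, as opposed to decoding the trace itself.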

> time of reading the trace using babeltrace is about 150 seconds. The speed

This indicates that you are replicating a high-throughput scenario.
Keep in mind that, since babeltrace needs to serialize the events of a
multi-core system, there will always be some delay in the serialization step
of trace reading, be it locally or in live mode. What we are looking for here
are outliers to those delays.

> of processing is much lower than generating the trace.
> 
> I think the performance of babeltrace in live mode will not greater than

It is not about what you think, it is about what is actually happening.
Give us more information on the experiment and provide us with your actual
tests so we can validate that they indeed exercise the scenario at hand
correctly, and then we can go forward from there.

> reading the trace on disk. So in live mode, timestamp of the event will
> have delay (printing event with timestamp of few minutes ago), i wonder to

You provide partial answers, or no answers at all, to most of the questions asked.
This is problematic and does not help the troubleshooting of this issue at all.

Did you perform the same experiment in live mode?
How much time does it take in live mode to process the same trace/scenario?

> know if there are any methods to improve the performance of babeltrace in
> live mode.

The most useful thing you could do is to profile, perform a *concrete
performance analysis*, and report your findings to the community so we can
work together on improving the situation.

The previously mentioned fixes for the lttng-live TCP communication problem [1]
were brought to us with a comprehensive technical report, including reproducers
and metrics. We were more than happy to provide feedback, time and our
expertise to alleviate the reported problem.

If there was a "quick fix", that we were aware of, it would be
already implemented and merged. We do not have any incentives to keep such fixes to
ourselves.

I'm not saying that nothing can be done to improve the performance, but our
efforts at EfficiOS are focused on Babeltrace 2.0 for the time being. Hence,
we need comprehensive data to pursue any performance-related investigation
regarding babeltrace 1.x.

Cheers

[1]
https://github.com/efficios/babeltrace/commit/de417d04317ca3bc30f59685a9d19de670e4b11d
https://github.com/efficios/babeltrace/commit/4594dbd8f7c2af2446a3e310bee74ba4a2e9d648

> 
> Regards,
> Liguang
> 
> 
> > This will give us a rough estimate of the disparity between both scenario.
> >
> > Cheers
> >

-- 
Jonathan Rajotte-Julien
EfficiOS
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Babeltrace performance issue in live-reading mode
@ 2017-09-18  3:32 liguang li
  0 siblings, 0 replies; 10+ messages in thread
From: liguang li @ 2017-09-18  3:32 UTC (permalink / raw)
  To: lttng-dev


[-- Attachment #1.1: Type: text/plain, Size: 525 bytes --]

Hi,

Create a session in live-reading mode, run an application with a very high
event throughput, then print the events with babeltrace. We found that the
live trace viewer is showing events from a few seconds ago, and as time goes
on, the delay grows larger and larger.

I checked the source code and found that Babeltrace in live-reading mode
reads the recorded events from the CTF files, then parses and prints them in
a single thread. This process is a little slow; do you have any ideas to
improve it?

Thanks,
Liguang

[-- Attachment #1.2: Type: text/html, Size: 701 bytes --]

[-- Attachment #2: Type: text/plain, Size: 156 bytes --]

_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2017-10-11 15:34 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAMYaK0TVYEgM_qxCExiFn6BcgZ58+KXKqx72h_0Gv4g2JN40Yg@mail.gmail.com>
2017-09-18 15:18 ` Babeltrace performance issue in live-reading mode Jonathan Rajotte-Julien
     [not found] ` <20170918151857.afnn5n6ntzhyzfnw@psrcode-TP-X230>
2017-09-19  7:53   ` liguang li
     [not found]   ` <CAMYaK0TGprotjj0v-HayqHAaKpJaTXgqqYh55CK9JmOKBO-U8A@mail.gmail.com>
2017-09-19 14:57     ` Jonathan Rajotte-Julien
     [not found]     ` <20170919145707.5wh3e23zet7jm3bc@psrcode-TP-X230>
2017-09-20  9:12       ` liguang li
     [not found]       ` <CAMYaK0SQkdjs_gp+4fwBL1BH1oqEiqYnN6PXq-H-=ApMRbe-bg@mail.gmail.com>
2017-09-20 14:58         ` Jonathan Rajotte Julien
     [not found]         ` <cdf53e97-49f0-e6d4-e4b2-0bca6ab84534@efficios.com>
2017-09-21  9:34           ` liguang li
     [not found]           ` <CAMYaK0TEWZGtZnvO5aDw5CZi8Rr1tXdoaBrkDRO6k-biydVn_g@mail.gmail.com>
2017-09-21 15:03             ` Jonathan Rajotte Julien
     [not found]             ` <05dc30ee-bf01-1960-3ec6-54018dd4512d@efficios.com>
2017-10-11 10:44               ` liguang li
     [not found]               ` <CAMYaK0QBmug12risvr1_7ws5OhrCiXbfDD7+m6vSwwpYs2-c2A@mail.gmail.com>
2017-10-11 15:34                 ` Jonathan Rajotte-Julien
2017-09-18  3:32 liguang li
