* Re: Using Babeltrace to write CTF for MPI application
       [not found] <CAK-w0DeOHsrygOt4CLVBB55edg6W+hrEzX2+A-UwFn6E14HDoQ@mail.gmail.com>
@ 2017-04-04 23:52 ` Jonathan Rajotte Julien
       [not found] ` <e60c08f5-189c-bacc-b4bc-5438c6a36476@efficios.com>
  1 sibling, 0 replies; 2+ messages in thread
From: Jonathan Rajotte Julien @ 2017-04-04 23:52 UTC (permalink / raw)
  To: lttng-dev

Hi Rocky,

Not sure if it's pertinent, but did you take a look at the barectf [1] project?

Cheers

[1] https://github.com/efficios/barectf

On 2017-04-04 05:29 PM, Rocky Dunlap wrote:
> I am instrumenting an MPI application to output a custom application trace in CTF using Babeltrace 1.5.2.  I would like to end up with a single trace with multiple streams, one per process.  All streams share the same stream class (and metadata). All processes have access to the same file system.  I am using the C CTF writer API using this test as an example:
> https://github.com/efficios/babeltrace/blob/stable-1.5/tests/lib/test_ctf_writer.c
> 
> What I have in mind is something like this: process 0 would be responsible for writing the metadata and its own stream, while all other processes would only need to write their own stream.
> 
> The issue I have run into is that the individual stream file names are determined by appending stream->id to the stream filename and the stream id is determined behind the scenes as stream->id = stream_class->next_stream_id++.  Since each process has its own address space, all processes want stream id of 0.
> 
> Is there a way using the current API to explicitly set the stream id so that each process will write to a separate file?
> 
> I'm also open for suggestions on the overall approach.  The main reason to have each process as a separate stream in a single trace is so that I can open the entire trace in an analysis tool like TraceCompass and see all processes together.
> 
> Rocky
> 

-- 
Jonathan R. Julien
Efficios
_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
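
For reference, a rough sketch of the setup described in the quoted question, using the Babeltrace 1.5 CTF writer C API as exercised by the linked test_ctf_writer.c. The trace path, clock and stream-class names are made up for illustration, error checking and reference-count cleanup are omitted, and the exact signatures should be checked against that test:

#include <babeltrace/ctf-writer/writer.h>
#include <babeltrace/ctf-writer/clock.h>
#include <babeltrace/ctf-writer/stream-class.h>
#include <babeltrace/ctf-writer/stream.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank points its writer at the same trace directory. */
    struct bt_ctf_writer *writer =
        bt_ctf_writer_create("/shared/fs/my_trace");

    struct bt_ctf_clock *clock = bt_ctf_clock_create("app_clock");
    bt_ctf_writer_add_clock(writer, clock);

    /* Every rank defines the same stream class. */
    struct bt_ctf_stream_class *sc = bt_ctf_stream_class_create("app_stream");
    bt_ctf_stream_class_set_clock(sc, clock);

    /*
     * First stream created from this rank's copy of the stream class.
     * As described in the question, the internal id counter starts at 0
     * in every process, so every rank ends up asking for the same
     * stream id and therefore the same stream file name.
     */
    struct bt_ctf_stream *stream = bt_ctf_writer_create_stream(writer, sc);

    /* ... define event classes, append events ... */

    bt_ctf_stream_flush(stream);
    MPI_Finalize();
    return 0;
}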


* Fwd: Using Babeltrace to write CTF for MPI application
       [not found]   ` <CAK-w0DeNJ2CjfALbf5qLcN8FL6BWYumDZs-xk3ea_BL_xxWKKw@mail.gmail.com>
@ 2017-04-05 17:37     ` Rocky Dunlap
  0 siblings, 0 replies; 2+ messages in thread
From: Rocky Dunlap @ 2017-04-05 17:37 UTC (permalink / raw)
  To: lttng-dev



Apologies... I meant to reply to the list.


---------- Forwarded message ----------
From: Rocky Dunlap <rsdunlapiv@gmail.com>
Date: Wed, Apr 5, 2017 at 10:13 AM
Subject: Re: [lttng-dev] Using Babeltrace to write CTF for MPI application
To: Jonathan Rajotte Julien <Jonathan.rajotte-julien@efficios.com>


Jonathan,

I started with barectf and I really like it!  It works great for basic
cases.  Unfortunately, it only supports simple event structures right now,
so I switched to the Babeltrace CTF writer library, which has full support
for CTF.

Rocky
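
To illustrate the fuller CTF type system available through the writer API, here is a rough sketch of an event class whose payload is a structure containing a scalar and a fixed-length floating-point array. The event and field names are made up, and the exact calls should be checked against babeltrace/ctf-writer/event.h and event-types.h:

#include <babeltrace/ctf-writer/event.h>
#include <babeltrace/ctf-writer/event-types.h>

/* Event class with a structured payload: an integer plus a
 * fixed-length floating-point array (names are illustrative). */
static struct bt_ctf_event_class *make_timing_event_class(void)
{
    struct bt_ctf_event_class *ec = bt_ctf_event_class_create("timing_sample");
    struct bt_ctf_field_type *u32 = bt_ctf_field_type_integer_create(32);
    struct bt_ctf_field_type *fp = bt_ctf_field_type_floating_point_create();
    struct bt_ctf_field_type *arr = bt_ctf_field_type_array_create(fp, 3);
    struct bt_ctf_field_type *payload = bt_ctf_field_type_structure_create();

    bt_ctf_field_type_structure_add_field(payload, u32, "rank");
    bt_ctf_field_type_structure_add_field(payload, arr, "phase_times");
    bt_ctf_event_class_add_field(ec, payload, "sample");

    /* Then register it: bt_ctf_stream_class_add_event_class(sc, ec); */
    return ec;
}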

On Tue, Apr 4, 2017 at 5:52 PM, Jonathan Rajotte Julien
<Jonathan.rajotte-julien@efficios.com> wrote:

> Hi Rocky,
>
> Not sure if pertinent but did you take a look at the barectf [1] project?
>
> Cheers
>
> [1] https://github.com/efficios/barectf
>
> On 2017-04-04 05:29 PM, Rocky Dunlap wrote:
> > I am instrumenting an MPI application to output a custom application
> > trace in CTF using Babeltrace 1.5.2.  I would like to end up with a
> > single trace with multiple streams, one per process.  All streams share
> > the same stream class (and metadata). All processes have access to the
> > same file system.  I am using the C CTF writer API using this test as
> > an example:
> > https://github.com/efficios/babeltrace/blob/stable-1.5/tests/lib/test_ctf_writer.c
> >
> > What I have in mind is something like this: process 0 would be
> > responsible for writing the metadata and its own stream, while all
> > other processes would only need to write their own stream.
> >
> > The issue I have run into is that the individual stream file names are
> > determined by appending stream->id to the stream filename and the
> > stream id is determined behind the scenes as
> > stream->id = stream_class->next_stream_id++.  Since each process has
> > its own address space, all processes want stream id of 0.
> >
> > Is there a way using the current API to explicitly set the stream id
> > so that each process will write to a separate file?
> >
> > I'm also open for suggestions on the overall approach.  The main
> > reason to have each process as a separate stream in a single trace is
> > so that I can open the entire trace in an analysis tool like
> > TraceCompass and see all processes together.
> >
> > Rocky
>
> --
> Jonathan R. Julien
> Efficios


_______________________________________________
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
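
One workaround worth noting, sketched here as a suggestion rather than a confirmed API answer: give each rank its own trace directory under a common parent, so each rank owns a complete trace (metadata plus its single stream) and the per-process stream id of 0 never collides; TraceCompass can then open the per-rank traces together as an experiment. The layout and paths below are illustrative, and the parent directory is assumed to already exist:

#include <babeltrace/ctf-writer/writer.h>
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    char path[256];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* my_trace/rank_0, my_trace/rank_1, ... : one complete CTF trace
     * per rank, each with its own metadata and single stream file. */
    snprintf(path, sizeof(path), "my_trace/rank_%d", rank);

    struct bt_ctf_writer *writer = bt_ctf_writer_create(path);

    /* ... clock, stream class, stream, and events as in the sketch
     * after the first message ... */

    MPI_Finalize();
    return 0;
}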


end of thread

Thread overview: 2+ messages
     [not found] <CAK-w0DeOHsrygOt4CLVBB55edg6W+hrEzX2+A-UwFn6E14HDoQ@mail.gmail.com>
2017-04-04 23:52 ` Using Babeltrace to write CTF for MPI application Jonathan Rajotte Julien
     [not found] ` <e60c08f5-189c-bacc-b4bc-5438c6a36476@efficios.com>
     [not found]   ` <CAK-w0DeNJ2CjfALbf5qLcN8FL6BWYumDZs-xk3ea_BL_xxWKKw@mail.gmail.com>
2017-04-05 17:37     ` Fwd: " Rocky Dunlap
