lttng-dev.lists.lttng.org archive mirror
* Re: Using lttng-ust with xenomai
       [not found] <CADYdroN-i9yrpd-wjPSW36GUptRV+kOCJT=Yv6+Z5sCVBmo_SQ@mail.gmail.com>
@ 2019-11-22 15:42 ` Mathieu Desnoyers
       [not found] ` <2012667816.853.1574437363737.JavaMail.zimbra@efficios.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Mathieu Desnoyers @ 2019-11-22 15:42 UTC (permalink / raw)
  To: Norbert Lange; +Cc: Jan Kiszka, lttng-dev, paulmck, Xenomai

----- On Nov 22, 2019, at 4:14 AM, Norbert Lange nolange79@gmail.com wrote:

> Hello,
> 
> I already started a thread over at xenomai.org [1], but I guess its
> more efficient to ask here aswell.
> The basic concept is that xenomai thread run *below* Linux (threads
> and irg handlers), which means that xenomai threads must not use any

I guess you mean "irq handlers" here.

> linux services like the futex syscall or socket communication.
> 
> ## tracepoints
> 
> expecting that tracepoints are the only thing that should be used from
> the xenomai threads, is there anything using linux services.
> the "bulletproof" urcu apparently does not need anything for the
> reader lock (aslong as the thread is already registered),

Indeed the first time the urcu-bp read-lock is encountered by a thread,
the thread registration is performed, which requires locks, memory allocation,
and so on. After that, the thread can use urcu-bp read-side lock without
requiring any system call.

> but I dont know how the write-buffers are prepared.

LTTng-UST prepares the ring buffers from lttng-ust's "listener" thread,
which is injected into the process by a lttng-ust constructor.

What you will care about is how the tracepoint call-site (within a Xenomai
thread) interacts with the ring buffers.

The "default" setup for lttng-ust ring buffers is not suitable for Xenomai
threads. The lttng-ust ring buffer is split into sub-buffers, each sub-buffer
corresponding to a CTF trace "packet". When a sub-buffer is filled, lttng-ust
invokes "write(2)" to a pipe to let the consumer daemon know there is data
available in that ring buffer. You will want to get rid of that write(2) system
call from a Xenomai thread.

The proper configuration is to use lttng-enable-channel(1) "--read-timer"
option (see https://lttng.org/docs/v2.11/#doc-channel-read-timer). This will
ensure that the consumer daemon uses a polling approach to check periodically
whether data needs to be consumed within each buffer, thus removing the
use of the write(2) system call on the application-side.
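As a sketch, a session using the read-timer option could be set up as follows (the session, channel, and provider names are placeholders; the timer period is in microseconds):

```shell
# Create a session and a userspace channel that uses a read timer
# instead of write(2) wakeups on the application side.
lttng create rt-session
lttng enable-channel --userspace --read-timer=200000 rt-channel
lttng enable-event --userspace --channel=rt-channel 'my_provider:*'
lttng start
```

With this configuration the consumer daemon polls the buffers every 200 ms instead of being woken up by the traced application.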

> 
> You can call linux sycalls from xenomai threads (it will switch to the
> linux shadow thread for that and lose realtime characteristics), so a
> one time setup/shutdown like registering the threads is not an issue.

OK, good, so you can actually do the initial setup when launching the thread.
You need to remember to invoke a liburcu-bp read-side lock/unlock pair,
or to call urcu_bp_read_ongoing(), at thread startup within that
initialization phase to ensure urcu-bp registration has been performed.

> 
> ## membarrier syscall
> 
> I haven't got an explanation yet, but I believe this syscall does
> nothing to xenomai threads (each has a shadow linux thread, that is
> *idle* when the xenomai thread is active).

That's indeed a good point. I suspect membarrier may not send any IPI
to Xenomai threads (that would have to be confirmed). I suspect the
latency introduced by this IPI would be unwanted.

> liburcu has configure options allow forcing the usage of this syscall
> but not disabling it, which likely is necessary for Xenomai.

I suspect what you'd need there is a way to allow a process to tell
liburcu-bp (or liburcu) to always use the fall-back mechanism which does
not rely on sys_membarrier. This could be allowed before the first use of
the library. I think extending the liburcu APIs to allow this should be
straightforward enough. This approach would be more flexible than requiring
liburcu to be specialized at configure time. This new API would return an error
if invoked with a liburcu library compiled with --disable-sys-membarrier-fallback.

If you have control over your entire system's kernel, you may want to try
just configuring the kernel with CONFIG_MEMBARRIER=n in the meantime.

Another thing to make sure is to have a glibc and Linux kernel which perform
clock_gettime() as vDSO for the monotonic clock, because you don't want a
system call there. If that does not work for you, you can alternatively
implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
the clock used by lttng, and for instance use TSC directly. See for instance
the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.

Thanks,

Mathieu


> 
> Any input is welcome.
> Kind regards, Norbert
> 
> [1] - https://xenomai.org/pipermail/xenomai/2019-November/042027.html
> _______________________________________________
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Using lttng-ust with xenomai
       [not found] ` <2012667816.853.1574437363737.JavaMail.zimbra@efficios.com>
@ 2019-11-22 15:52   ` Jan Kiszka
       [not found]   ` <4aab99be-5451-4582-f75d-7637614b1d37@siemens.com>
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Jan Kiszka @ 2019-11-22 15:52 UTC (permalink / raw)
  To: Mathieu Desnoyers, Norbert Lange; +Cc: lttng-dev, paulmck, Xenomai

On 22.11.19 16:42, Mathieu Desnoyers wrote:
> ----- On Nov 22, 2019, at 4:14 AM, Norbert Lange nolange79@gmail.com wrote:
> 
>> Hello,
>>
>> I already started a thread over at xenomai.org [1], but I guess its
>> more efficient to ask here aswell.
>> The basic concept is that xenomai thread run *below* Linux (threads
>> and irg handlers), which means that xenomai threads must not use any
> 
> I guess you mean "irq handlers" here.
> 
>> linux services like the futex syscall or socket communication.
>>
>> ## tracepoints
>>
>> expecting that tracepoints are the only thing that should be used from
>> the xenomai threads, is there anything using linux services.
>> the "bulletproof" urcu apparently does not need anything for the
>> reader lock (aslong as the thread is already registered),
> 
> Indeed the first time the urcu-bp read-lock is encountered by a thread,
> the thread registration is performed, which requires locks, memory allocation,
> and so on. After that, the thread can use urcu-bp read-side lock without
> requiring any system call.

So, we will probably want to perform such a registration unconditionally 
(in case lttng usage is enabled) for our RT threads during their setup.

> 
>> but I dont know how the write-buffers are prepared.
> 
> LTTng-UST prepares the ring buffers from lttng-ust's "listener" thread,
> which is injected into the process by a lttng-ust constructor.
> 
> What you will care about is how the tracepoint call-site (within a Xenomai
> thread) interacts with the ring buffers.
> 
> The "default" setup for lttng-ust ring buffers is not suitable for Xenomai
> threads. The lttng-ust ring buffer is split into sub-buffers, each sub-buffer
> corresponding to a CTF trace "packet". When a sub-buffer is filled, lttng-ust
> invokes "write(2)" to a pipe to let the consumer daemon know there is data
> available in that ring buffer. You will want to get rid of that write(2) system
> call from a Xenomai thread.
> 
> The proper configuration is to use lttng-enable-channel(1) "--read-timer"
> option (see https://lttng.org/docs/v2.11/#doc-channel-read-timer). This will
> ensure that the consumer daemon uses a polling approach to check periodically
> whether data needs to be consumed within each buffer, thus removing the
> use of the write(2) system call on the application-side.
> 
>>
>> You can call linux sycalls from xenomai threads (it will switch to the
>> linux shadow thread for that and lose realtime characteristics), so a
>> one time setup/shutdown like registering the threads is not an issue.
> 
> OK, good, so you can actually do the initial setup when launching the thread.
> You need to remember to invoke use a liburcu-bp read-side lock/unlock pair,
> or call urcu_bp_read_ongoing() at thread startup within that initialization
> phase to ensure urcu-bp registration has been performed.
> 
>>
>> ## membarrier syscall
>>
>> I haven't got an explanation yet, but I believe this syscall does
>> nothing to xenomai threads (each has a shadow linux thread, that is
>> *idle* when the xenomai thread is active).
> 
> That's indeed a good point. I suspect membarrier may not send any IPI
> to Xenomai threads (that would have to be confirmed). I suspect the
> latency introduced by this IPI would be unwanted.

Is an "IPI" a POSIX signal here? Or a real IPI that delivers an 
interrupt to Linux on another CPU? The latter would still be possible, 
but it would be delayed until all Xenomai threads on that core eventually 
take a break (which should happen a couple of times per second under 
normal conditions - 100% RT load is an illegal application state).

> 
>> liburcu has configure options allow forcing the usage of this syscall
>> but not disabling it, which likely is necessary for Xenomai.
> 
> I suspect what you'd need there is a way to allow a process to tell
> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
> not rely on sys_membarrier. This could be allowed before the first use of
> the library. I think extending the liburcu APIs to allow this should be
> straightforward enough. This approach would be more flexible than requiring
> liburcu to be specialized at configure time. This new API would return an error
> if invoked with a liburcu library compiled with --disable-sys-membarrier-fallback.
> 
> If you have control over your entire system's kernel, you may want to try
> just configuring the kernel within CONFIG_MEMBARRIER=n in the meantime.
> 
> Another thing to make sure is to have a glibc and Linux kernel which perform
> clock_gettime() as vDSO for the monotonic clock, because you don't want a
> system call there. If that does not work for you, you can alternatively
> implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
> the clock used by lttng, and for instance use TSC directly. See for instance
> the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.

clock_gettime & Co for a Xenomai application is syscall-free as well.

Thanks,
Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux


* Re: Using lttng-ust with xenomai
       [not found]   ` <4aab99be-5451-4582-f75d-7637614b1d37@siemens.com>
@ 2019-11-22 17:01     ` Mathieu Desnoyers
       [not found]     ` <480743920.929.1574442078799.JavaMail.zimbra@efficios.com>
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Mathieu Desnoyers @ 2019-11-22 17:01 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: lttng-dev, paulmck, Xenomai

----- On Nov 22, 2019, at 10:52 AM, Jan Kiszka jan.kiszka@siemens.com wrote:

> On 22.11.19 16:42, Mathieu Desnoyers wrote:
>> ----- On Nov 22, 2019, at 4:14 AM, Norbert Lange nolange79@gmail.com wrote:
>> 
>>> Hello,
>>>
>>> I already started a thread over at xenomai.org [1], but I guess its
>>> more efficient to ask here aswell.
>>> The basic concept is that xenomai thread run *below* Linux (threads
>>> and irg handlers), which means that xenomai threads must not use any
>> 
>> I guess you mean "irq handlers" here.
>> 
>>> linux services like the futex syscall or socket communication.
>>>
>>> ## tracepoints
>>>
>>> expecting that tracepoints are the only thing that should be used from
>>> the xenomai threads, is there anything using linux services.
>>> the "bulletproof" urcu apparently does not need anything for the
>>> reader lock (aslong as the thread is already registered),
>> 
>> Indeed the first time the urcu-bp read-lock is encountered by a thread,
>> the thread registration is performed, which requires locks, memory allocation,
>> and so on. After that, the thread can use urcu-bp read-side lock without
>> requiring any system call.
> 
> So, we will probably want to perform such a registration unconditionally
> (in case lttng usage is enabled) for our RT threads during their setup.

Yes. I'm currently doing a slight update to liburcu master branch to
allow urcu_bp_register_thread() calls to invoke urcu_bp_register() if
the thread is not registered yet. This seems more expected than implementing
urcu_bp_register_thread() as a no-op.

If you care about older liburcu versions, you will want to stick to using
rcu read lock/unlock pairs or rcu_read_ongoing() to initialize urcu-bp, but
with future liburcu versions, urcu_bp_register_thread() will be another
option. See:

commit 5b46e39d0e4d2592853c7bfc11add02b1101c04b
Author: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Date:   Fri Nov 22 11:02:36 2019 -0500

    urcu-bp: perform thread registration on urcu_bp_register_thread

> 
>> 
>>> but I dont know how the write-buffers are prepared.
>> 
>> LTTng-UST prepares the ring buffers from lttng-ust's "listener" thread,
>> which is injected into the process by a lttng-ust constructor.
>> 
>> What you will care about is how the tracepoint call-site (within a Xenomai
>> thread) interacts with the ring buffers.
>> 
>> The "default" setup for lttng-ust ring buffers is not suitable for Xenomai
>> threads. The lttng-ust ring buffer is split into sub-buffers, each sub-buffer
>> corresponding to a CTF trace "packet". When a sub-buffer is filled, lttng-ust
>> invokes "write(2)" to a pipe to let the consumer daemon know there is data
>> available in that ring buffer. You will want to get rid of that write(2) system
>> call from a Xenomai thread.
>> 
>> The proper configuration is to use lttng-enable-channel(1) "--read-timer"
>> option (see https://lttng.org/docs/v2.11/#doc-channel-read-timer). This will
>> ensure that the consumer daemon uses a polling approach to check periodically
>> whether data needs to be consumed within each buffer, thus removing the
>> use of the write(2) system call on the application-side.
>> 
>>>
>>> You can call linux sycalls from xenomai threads (it will switch to the
>>> linux shadow thread for that and lose realtime characteristics), so a
>>> one time setup/shutdown like registering the threads is not an issue.
>> 
>> OK, good, so you can actually do the initial setup when launching the thread.
>> You need to remember to invoke use a liburcu-bp read-side lock/unlock pair,
>> or call urcu_bp_read_ongoing() at thread startup within that initialization
>> phase to ensure urcu-bp registration has been performed.
>> 
>>>
>>> ## membarrier syscall
>>>
>>> I haven't got an explanation yet, but I believe this syscall does
>>> nothing to xenomai threads (each has a shadow linux thread, that is
>>> *idle* when the xenomai thread is active).
>> 
>> That's indeed a good point. I suspect membarrier may not send any IPI
>> to Xenomai threads (that would have to be confirmed). I suspect the
>> latency introduced by this IPI would be unwanted.
> 
> Is an "IPI" a POSIX signal here? Or are real IPI that delivers an
> interrupt to Linux on another CPU? The latter would still be possible,
> but it would be delayed until all Xenomai threads on that core eventual
> took a break (which should happen a couple of times per second under
> normal conditions - 100% RT load is an illegal application state).

I'm talking about a real in-kernel IPI (as in inter-processor interrupt).
However, sys_membarrier detects which CPUs should receive that IPI
by iterating over all CPU runqueues and figuring out which CPUs are
currently running a thread that uses the same mm as the sys_membarrier
caller (for the PRIVATE membarrier commands).

So I suspect that the Xenomai thread is really not within the Linux scheduler
runqueue when it runs.

> 
>> 
>>> liburcu has configure options allow forcing the usage of this syscall
>>> but not disabling it, which likely is necessary for Xenomai.
>> 
>> I suspect what you'd need there is a way to allow a process to tell
>> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
>> not rely on sys_membarrier. This could be allowed before the first use of
>> the library. I think extending the liburcu APIs to allow this should be
>> straightforward enough. This approach would be more flexible than requiring
>> liburcu to be specialized at configure time. This new API would return an error
>> if invoked with a liburcu library compiled with
>> --disable-sys-membarrier-fallback.
>> 
>> If you have control over your entire system's kernel, you may want to try
>> just configuring the kernel within CONFIG_MEMBARRIER=n in the meantime.
>> 
>> Another thing to make sure is to have a glibc and Linux kernel which perform
>> clock_gettime() as vDSO for the monotonic clock, because you don't want a
>> system call there. If that does not work for you, you can alternatively
>> implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
>> the clock used by lttng, and for instance use TSC directly. See for instance
>> the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
> 
> clock_gettime & Co for a Xenomai application is syscall-free as well.

Very good then!

Thanks,

Mathieu

> 
> Thanks,
> Jan
> 
> --
> Siemens AG, Corporate Technology, CT RDA IOT SES-DE
> Corporate Competence Center Embedded Linux

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


* Re: Using lttng-ust with xenomai
       [not found]     ` <480743920.929.1574442078799.JavaMail.zimbra@efficios.com>
@ 2019-11-22 17:36       ` Jan Kiszka
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Kiszka @ 2019-11-22 17:36 UTC (permalink / raw)
  To: Mathieu Desnoyers; +Cc: lttng-dev, paulmck, Xenomai

On 22.11.19 18:01, Mathieu Desnoyers wrote:
>>>>
>>>> ## membarrier syscall
>>>>
>>>> I haven't got an explanation yet, but I believe this syscall does
>>>> nothing to xenomai threads (each has a shadow linux thread, that is
>>>> *idle* when the xenomai thread is active).
>>>
>>> That's indeed a good point. I suspect membarrier may not send any IPI
>>> to Xenomai threads (that would have to be confirmed). I suspect the
>>> latency introduced by this IPI would be unwanted.
>>
>> Is an "IPI" a POSIX signal here? Or are real IPI that delivers an
>> interrupt to Linux on another CPU? The latter would still be possible,
>> but it would be delayed until all Xenomai threads on that core eventual
>> took a break (which should happen a couple of times per second under
>> normal conditions - 100% RT load is an illegal application state).
> 
> I'm talking about a real in-kernel IPI (as in inter-processor interrupt).
> However, the way sys_membarrier detects which CPUs should receive that IPI
> is by iterating on all cpu runqueues, and figure out which CPU is currently
> running a thread which uses the same mm as the sys_membarrier caller
> (for the PRIVATE membarrier commands).
> 
> So I suspect that the Xenomai thread is really not within the Linux scheduler
> runqueue when it runs.

True. Xenomai first suspends the RT thread's Linux shadow and then kicks 
the Xenomai scheduler to interrupt Linux (and schedule in the RT 
thread). So, from a remote Linux perspective, something else will be 
running at this point.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux


* Re: Using lttng-ust with xenomai
       [not found]   ` <4aab99be-5451-4582-f75d-7637614b1d37@siemens.com>
  2019-11-22 17:01     ` Mathieu Desnoyers
       [not found]     ` <480743920.929.1574442078799.JavaMail.zimbra@efficios.com>
@ 2019-11-22 17:44     ` Norbert Lange
       [not found]     ` <CADYdroOh+T8pOcNBW74KSMfCh--ujD8L3_G96LWR1migpsUq0g@mail.gmail.com>
  3 siblings, 0 replies; 16+ messages in thread
From: Norbert Lange @ 2019-11-22 17:44 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: lttng-dev, paulmck, Xenomai

Am Fr., 22. Nov. 2019 um 16:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
>
> On 22.11.19 16:42, Mathieu Desnoyers wrote:
> > ----- On Nov 22, 2019, at 4:14 AM, Norbert Lange nolange79@gmail.com wrote:
> >
> >> Hello,
> >>
> >> I already started a thread over at xenomai.org [1], but I guess its
> >> more efficient to ask here aswell.
> >> The basic concept is that xenomai thread run *below* Linux (threads
> >> and irg handlers), which means that xenomai threads must not use any
> >
> > I guess you mean "irq handlers" here.
> >
> >> linux services like the futex syscall or socket communication.
> >>
> >> ## tracepoints
> >>
> >> expecting that tracepoints are the only thing that should be used from
> >> the xenomai threads, is there anything using linux services.
> >> the "bulletproof" urcu apparently does not need anything for the
> >> reader lock (aslong as the thread is already registered),
> >
> > Indeed the first time the urcu-bp read-lock is encountered by a thread,
> > the thread registration is performed, which requires locks, memory allocation,
> > and so on. After that, the thread can use urcu-bp read-side lock without
> > requiring any system call.
>
> So, we will probably want to perform such a registration unconditionally
> (in case lttng usage is enabled) for our RT threads during their setup.

Who is "we"? Do you plan to add automatic support to xenomai mainline?

But yes, some setup is likely needed if one wants to use lttng.


> >
> > That's indeed a good point. I suspect membarrier may not send any IPI
> > to Xenomai threads (that would have to be confirmed). I suspect the
> > latency introduced by this IPI would be unwanted.
>
> Is an "IPI" a POSIX signal here? Or are real IPI that delivers an
> interrupt to Linux on another CPU? The latter would still be possible,
> but it would be delayed until all Xenomai threads on that core eventual
> took a break (which should happen a couple of times per second under
> normal conditions - 100% RT load is an illegal application state).

Not POSIX, some inter-processor interrupt. The point is that the syscall
waits for the set of registered *running* Linux threads. I doubt Xenomai
threads can be reached that way; the shadow Linux thread will be idle and
it won't block.
I don't think it's worth extending this syscall (it seems rather dangerous
actually, given that I had some deadlocks with other "lazy schemes", see below)

>
> >
> >> liburcu has configure options allow forcing the usage of this syscall
> >> but not disabling it, which likely is necessary for Xenomai.
> >
> > I suspect what you'd need there is a way to allow a process to tell
> > liburcu-bp (or liburcu) to always use the fall-back mechanism which does
> > not rely on sys_membarrier. This could be allowed before the first use of
> > the library. I think extending the liburcu APIs to allow this should be
> > straightforward enough. This approach would be more flexible than requiring
> > liburcu to be specialized at configure time. This new API would return an error
> > if invoked with a liburcu library compiled with --disable-sys-membarrier-fallback.
> >
> > If you have control over your entire system's kernel, you may want to try
> > just configuring the kernel within CONFIG_MEMBARRIER=n in the meantime.
> >
> > Another thing to make sure is to have a glibc and Linux kernel which perform
> > clock_gettime() as vDSO for the monotonic clock, because you don't want a
> > system call there. If that does not work for you, you can alternatively
> > implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
> > the clock used by lttng, and for instance use TSC directly. See for instance
> > the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
>
> clock_gettime & Co for a Xenomai application is syscall-free as well.

Yes, and that gave me a deadlock already. If a library is not compiled
for Xenomai, it will either use the syscall (and you detect that
immediately), or it will work most of the time and lock up once in a
while if a Linux thread took the "writer lock" of the vDSO structures
and your high-priority Xenomai thread is busy-waiting infinitely.

The only sane approach is to use either the Xenomai function directly,
or to recreate the function (rdtsc + interpolation on x86).
That means either compiling/patching lttng for Cobalt (which I really
would not want to do) or using a clock plugin.
If the latter is supposed to be minimal, that would mean I have to get
the interpolation factors Cobalt uses (without bringing in libcobalt).

Btw. the Xenomai and Linux monotonic clocks aren't synchronised at all
AFAIK, so timestamps will differ from the rest of Linux.
On my last platform I did some tracing using an internal timestamp and
regularly wrote a block with internal and external timestamps so those
could be converted "offline".
Is there anything similar in lttng or the tools handling the traces?

regards, Norbert


* Re: Using lttng-ust with xenomai
       [not found]     ` <CADYdroOh+T8pOcNBW74KSMfCh--ujD8L3_G96LWR1migpsUq0g@mail.gmail.com>
@ 2019-11-22 17:52       ` Jan Kiszka
       [not found]       ` <63a51fa5-db96-9cf9-0eb3-51954ebf98f4@siemens.com>
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Jan Kiszka @ 2019-11-22 17:52 UTC (permalink / raw)
  To: Norbert Lange; +Cc: lttng-dev, paulmck, Xenomai

On 22.11.19 18:44, Norbert Lange wrote:
> Am Fr., 22. Nov. 2019 um 16:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
>>
>> On 22.11.19 16:42, Mathieu Desnoyers wrote:
>>> ----- On Nov 22, 2019, at 4:14 AM, Norbert Lange nolange79@gmail.com wrote:
>>>
>>>> Hello,
>>>>
>>>> I already started a thread over at xenomai.org [1], but I guess its
>>>> more efficient to ask here aswell.
>>>> The basic concept is that xenomai thread run *below* Linux (threads
>>>> and irg handlers), which means that xenomai threads must not use any
>>>
>>> I guess you mean "irq handlers" here.
>>>
>>>> linux services like the futex syscall or socket communication.
>>>>
>>>> ## tracepoints
>>>>
>>>> expecting that tracepoints are the only thing that should be used from
>>>> the xenomai threads, is there anything using linux services.
>>>> the "bulletproof" urcu apparently does not need anything for the
>>>> reader lock (aslong as the thread is already registered),
>>>
>>> Indeed the first time the urcu-bp read-lock is encountered by a thread,
>>> the thread registration is performed, which requires locks, memory allocation,
>>> and so on. After that, the thread can use urcu-bp read-side lock without
>>> requiring any system call.
>>
>> So, we will probably want to perform such a registration unconditionally
>> (in case lttng usage is enabled) for our RT threads during their setup.
> 
> Who is we? Do you plan to add automatic support at xenomai mainline?
> 
> But yes, some setup is likely needed if one wants to use lttng

I wouldn't refuse patches to make this happen in mainline, if patches 
are best applied there. We could use a deterministic and fast 
application tracing framework people can build upon, and that they can 
smoothly combine with system-level traces.

> 
> 
>>>
>>> That's indeed a good point. I suspect membarrier may not send any IPI
>>> to Xenomai threads (that would have to be confirmed). I suspect the
>>> latency introduced by this IPI would be unwanted.
>>
>> Is an "IPI" a POSIX signal here? Or are real IPI that delivers an
>> interrupt to Linux on another CPU? The latter would still be possible,
>> but it would be delayed until all Xenomai threads on that core eventual
>> took a break (which should happen a couple of times per second under
>> normal conditions - 100% RT load is an illegal application state).
> 
> Not POSIX, some inter-thread interrupts. point is the syscall waits
> for the set of
> registered *running* Linux threads. I doubt Xenomai threads can be reached that
> way, the shadow Linux thread will be idle and it won't block.
> I dont think its worth extending this syscall (seems rather dangerous actually,
> given that I had some deadlocks with other "lazy schemes", see below)

Ack. It sounds like this will become messy at best, fragile at worst.

> 
>>
>>>
>>>> liburcu has configure options allow forcing the usage of this syscall
>>>> but not disabling it, which likely is necessary for Xenomai.
>>>
>>> I suspect what you'd need there is a way to allow a process to tell
>>> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
>>> not rely on sys_membarrier. This could be allowed before the first use of
>>> the library. I think extending the liburcu APIs to allow this should be
>>> straightforward enough. This approach would be more flexible than requiring
>>> liburcu to be specialized at configure time. This new API would return an error
>>> if invoked with a liburcu library compiled with --disable-sys-membarrier-fallback.
>>>
>>> If you have control over your entire system's kernel, you may want to try
>>> just configuring the kernel within CONFIG_MEMBARRIER=n in the meantime.
>>>
>>> Another thing to make sure is to have a glibc and Linux kernel which perform
>>> clock_gettime() as vDSO for the monotonic clock, because you don't want a
>>> system call there. If that does not work for you, you can alternatively
>>> implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
>>> the clock used by lttng, and for instance use TSC directly. See for instance
>>> the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
>>
>> clock_gettime & Co for a Xenomai application is syscall-free as well.
> 
> Yes, and that gave me a deadlock already, if a library us not compiled
> for Xenomai,
> it will either use the syscall (and you detect that immediatly) or it
> will work most of the time,
> and lock up once in a while if a Linux thread took the "writer lock"
> of the VDSO structures
> and your high priority xenomai thread is busy waiting infinitely.
> 
> Only sane approach would be to use either the xenomai function directly,
> or recreate the function (rdtsc + interpolation on x86).

rdtsc is not portable, thus a no-go.

> Either compiling/patching lttng for Cobalt (which I really would not
> want to do) or using a
> clock plugin.

I suspect you will want to have at least a plugin that was built against 
Xenomai libs.

> If the later is supposed to be minimal, then that would mean I would
> have to get the
> interpolation factors cobalt uses (without bringing in libcobalt).
> 
> Btw. the Xenomai and Linux monotonic clocks arent synchronised at all
> AFAIK, so timestamps will
> be different to the rest of Linux.

CLOCK_HOST_REALTIME is synchronized.

> On my last plattform I did some tracing using internal stamp and
> regulary wrote a
> block with internal and external timestamps so those could be
> converted "offline".

That doesn't sound like something we want to promote.

Jan

> Anything similar with lttng or tools handling the traces?
> 
> regards, Norbert
> 

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux


* Re: Using lttng-ust with xenomai
       [not found] ` <2012667816.853.1574437363737.JavaMail.zimbra@efficios.com>
  2019-11-22 15:52   ` Jan Kiszka
       [not found]   ` <4aab99be-5451-4582-f75d-7637614b1d37@siemens.com>
@ 2019-11-22 17:55   ` Norbert Lange
       [not found]   ` <CADYdroNGcY6adA5cGTzwzsKqO7+iuWts9k8Haz8k3HSvzQfc=g@mail.gmail.com>
  3 siblings, 0 replies; 16+ messages in thread
From: Norbert Lange @ 2019-11-22 17:55 UTC (permalink / raw)
  To: Mathieu Desnoyers; +Cc: Jan Kiszka, lttng-dev, paulmck, Xenomai

>
> LTTng-UST prepares the ring buffers from lttng-ust's "listener" thread,
> which is injected into the process by a lttng-ust constructor.
>
> What you will care about is how the tracepoint call-site (within a Xenomai
> thread) interacts with the ring buffers.
>
> The "default" setup for lttng-ust ring buffers is not suitable for Xenomai
> threads. The lttng-ust ring buffer is split into sub-buffers, each sub-buffer
> corresponding to a CTF trace "packet". When a sub-buffer is filled, lttng-ust
> invokes "write(2)" to a pipe to let the consumer daemon know there is data
> available in that ring buffer. You will want to get rid of that write(2) system
> call from a Xenomai thread.
>
> The proper configuration is to use lttng-enable-channel(1) "--read-timer"
> option (see https://lttng.org/docs/v2.11/#doc-channel-read-timer). This will
> ensure that the consumer daemon uses a polling approach to check periodically
> whether data needs to be consumed within each buffer, thus removing the
> use of the write(2) system call on the application-side.

Ah, thanks.

But that's configuration outside of the RT app, if I understand this correctly.
So if one configures the tracer wrong, the app will suddenly misbehave.
It would be nice to be able to somehow declare that only the read-timer setup is allowed.
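For reference, the read-timer setup Mathieu describes is done on the tracing side, roughly like this (session, channel, and event names are illustrative; the read-timer period is in microseconds):

```shell
# Create a channel whose buffers the consumer daemon polls via a read
# timer, so the traced application never issues write(2) on the wakeup pipe.
lttng create rt-session
lttng enable-channel --userspace --read-timer=200 rt-channel
lttng enable-event --userspace --channel=rt-channel 'myapp:*'
lttng start
```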


>
> > liburcu has configure options allow forcing the usage of this syscall
> > but not disabling it, which likely is necessary for Xenomai.
>
> I suspect what you'd need there is a way to allow a process to tell
> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
> not rely on sys_membarrier. This could be allowed before the first use of
> the library. I think extending the liburcu APIs to allow this should be
> straightforward enough. This approach would be more flexible than requiring
> liburcu to be specialized at configure time. This new API would return an error
> if invoked with a liburcu library compiled with --disable-sys-membarrier-fallback.

I was under the impression that you counted clock cycles for every operation ;)
I'm not sure; maybe a separate lib for realtime is the better way. Having no option
can be considered foolproof, and side effects of the syscall not working would be
a real pain.

regards, Norbert


* Re: Using lttng-ust with xenomai
       [not found]       ` <63a51fa5-db96-9cf9-0eb3-51954ebf98f4@siemens.com>
@ 2019-11-22 18:01         ` Norbert Lange
       [not found]         ` <CADYdroP+H3DiqCaH9o1jHxAurh_2YG_p7MK4H2kFqSoTCV0w6A@mail.gmail.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Norbert Lange @ 2019-11-22 18:01 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: lttng-dev, paulmck, Xenomai

Am Fr., 22. Nov. 2019 um 18:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
>
> On 22.11.19 18:44, Norbert Lange wrote:
> > Am Fr., 22. Nov. 2019 um 16:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
> >>
> >> On 22.11.19 16:42, Mathieu Desnoyers wrote:
> >>> ----- On Nov 22, 2019, at 4:14 AM, Norbert Lange nolange79@gmail.com wrote:
> >>>
> >>>> Hello,
> >>>>
> >>>> I already started a thread over at xenomai.org [1], but I guess its
> >>>> more efficient to ask here aswell.
> >>>> The basic concept is that xenomai thread run *below* Linux (threads
> >>>> and irg handlers), which means that xenomai threads must not use any
> >>>
> >>> I guess you mean "irq handlers" here.
> >>>
> >>>> linux services like the futex syscall or socket communication.
> >>>>
> >>>> ## tracepoints
> >>>>
> >>>> expecting that tracepoints are the only thing that should be used from
> >>>> the xenomai threads, is there anything using linux services.
> >>>> the "bulletproof" urcu apparently does not need anything for the
> >>>> reader lock (aslong as the thread is already registered),
> >>>
> >>> Indeed the first time the urcu-bp read-lock is encountered by a thread,
> >>> the thread registration is performed, which requires locks, memory allocation,
> >>> and so on. After that, the thread can use urcu-bp read-side lock without
> >>> requiring any system call.
> >>
> >> So, we will probably want to perform such a registration unconditionally
> >> (in case lttng usage is enabled) for our RT threads during their setup.
> >
> > Who is we? Do you plan to add automatic support at xenomai mainline?
> >
> > But yes, some setup is likely needed if one wants to use lttng
>
> I wouldn't refuse patches to make this happen in mainline. If patches
> are best applied there. We could use a deterministic and fast
> application tracing frame work people can build upon, and that they can
> smoothly combine with system level traces.

Sure (good to hear), I just don't think enabling it automatically/unconditionally
is a good thing.


>
> >
> >>
> >>>
> >>>> liburcu has configure options allow forcing the usage of this syscall
> >>>> but not disabling it, which likely is necessary for Xenomai.
> >>>
> >>> I suspect what you'd need there is a way to allow a process to tell
> >>> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
> >>> not rely on sys_membarrier. This could be allowed before the first use of
> >>> the library. I think extending the liburcu APIs to allow this should be
> >>> straightforward enough. This approach would be more flexible than requiring
> >>> liburcu to be specialized at configure time. This new API would return an error
> >>> if invoked with a liburcu library compiled with --disable-sys-membarrier-fallback.
> >>>
> >>> If you have control over your entire system's kernel, you may want to try
> >>> just configuring the kernel within CONFIG_MEMBARRIER=n in the meantime.
> >>>
> >>> Another thing to make sure is to have a glibc and Linux kernel which perform
> >>> clock_gettime() as vDSO for the monotonic clock, because you don't want a
> >>> system call there. If that does not work for you, you can alternatively
> >>> implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
> >>> the clock used by lttng, and for instance use TSC directly. See for instance
> >>> the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
> >>
> >> clock_gettime & Co for a Xenomai application is syscall-free as well.
> >
> > Yes, and that gave me a deadlock already, if a library is not compiled
> > for Xenomai,
> > it will either use the syscall (and you detect that immediately) or it
> > will work most of the time,
> > and lock up once in a while if a Linux thread took the "writer lock"
> > of the VDSO structures
> > and your high priority xenomai thread is busy waiting infinitely.
> >
> > Only sane approach would be to use either the xenomai function directly,
> > or recreate the function (rdtsc + interpolation on x86).
>
> rdtsc is not portable, thus a no-go.

It's not portable, but you have equivalents on ARM and PowerPC,
i.e. "do the same thing as Xenomai".

> > Either compiling/patching lttng for Cobalt (which I really would not
> > want to do) or using a
> > clock plugin.
>
> I suspect you will want to have at least a plugin that was built against
> Xenomai libs.

That will then do a lot of other stuff, like spawning a printf thread.

>
> > If the latter is supposed to be minimal, then that would mean I would
> > have to get the
> > interpolation factors cobalt uses (without bringing in libcobalt).
> >
> > Btw. the Xenomai and Linux monotonic clocks aren't synchronised at all
> > AFAIK, so timestamps will
> > be different to the rest of Linux.
>
> CLOCK_HOST_REALTIME is synchronized.

That's not monotonic?

>
> > On my last platform I did some tracing using an internal stamp and
> > regularly wrote a
> > block with internal and external timestamps so those could be
> > converted "offline".
>
> Sounds not like something we want to promote.

This was a question to lttng and its tool environment. I suppose we
weren't the first ones with multiple clocks in a system.
If anything needs to be done in Xenomai, it might be a concurrent
readout of the Linux/Cobalt time(s);
the rest would be done offline, potentially on another system.

 regards, Norbert


* Re: Using lttng-ust with xenomai
       [not found]         ` <CADYdroP+H3DiqCaH9o1jHxAurh_2YG_p7MK4H2kFqSoTCV0w6A@mail.gmail.com>
@ 2019-11-22 18:07           ` Jan Kiszka
       [not found]           ` <a2036ba8-c22a-e549-b68d-35524ee7f9a9@siemens.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Jan Kiszka @ 2019-11-22 18:07 UTC (permalink / raw)
  To: Norbert Lange; +Cc: lttng-dev, paulmck, Xenomai

On 22.11.19 19:01, Norbert Lange wrote:
> Am Fr., 22. Nov. 2019 um 18:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
>>
>> On 22.11.19 18:44, Norbert Lange wrote:
>>> Am Fr., 22. Nov. 2019 um 16:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
>>>>
>>>> On 22.11.19 16:42, Mathieu Desnoyers wrote:
>>>>> ----- On Nov 22, 2019, at 4:14 AM, Norbert Lange nolange79@gmail.com wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I already started a thread over at xenomai.org [1], but I guess its
>>>>>> more efficient to ask here aswell.
>>>>>> The basic concept is that xenomai thread run *below* Linux (threads
>>>>>> and irg handlers), which means that xenomai threads must not use any
>>>>>
>>>>> I guess you mean "irq handlers" here.
>>>>>
>>>>>> linux services like the futex syscall or socket communication.
>>>>>>
>>>>>> ## tracepoints
>>>>>>
>>>>>> expecting that tracepoints are the only thing that should be used from
>>>>>> the xenomai threads, is there anything using linux services.
>>>>>> the "bulletproof" urcu apparently does not need anything for the
>>>>>> reader lock (aslong as the thread is already registered),
>>>>>
>>>>> Indeed the first time the urcu-bp read-lock is encountered by a thread,
>>>>> the thread registration is performed, which requires locks, memory allocation,
>>>>> and so on. After that, the thread can use urcu-bp read-side lock without
>>>>> requiring any system call.
>>>>
>>>> So, we will probably want to perform such a registration unconditionally
>>>> (in case lttng usage is enabled) for our RT threads during their setup.
>>>
>>> Who is we? Do you plan to add automatic support at xenomai mainline?
>>>
>>> But yes, some setup is likely needed if one wants to use lttng
>>
>> I wouldn't refuse patches to make this happen in mainline. If patches
>> are best applied there. We could use a deterministic and fast
>> application tracing frame work people can build upon, and that they can
>> smoothly combine with system level traces.
> 
> Sure (good to hear), I just dont think enabling it automatic/unconditionally
> is a good thing.

I don't disagree. Whether it requires build-time control or could also be
enabled during application setup is something to be seen later.

> 
> 
>>
>>>
>>>>
>>>>>
>>>>>> liburcu has configure options allow forcing the usage of this syscall
>>>>>> but not disabling it, which likely is necessary for Xenomai.
>>>>>
>>>>> I suspect what you'd need there is a way to allow a process to tell
>>>>> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
>>>>> not rely on sys_membarrier. This could be allowed before the first use of
>>>>> the library. I think extending the liburcu APIs to allow this should be
>>>>> straightforward enough. This approach would be more flexible than requiring
>>>>> liburcu to be specialized at configure time. This new API would return an error
>>>>> if invoked with a liburcu library compiled with --disable-sys-membarrier-fallback.
>>>>>
>>>>> If you have control over your entire system's kernel, you may want to try
>>>>> just configuring the kernel within CONFIG_MEMBARRIER=n in the meantime.
>>>>>
>>>>> Another thing to make sure is to have a glibc and Linux kernel which perform
>>>>> clock_gettime() as vDSO for the monotonic clock, because you don't want a
>>>>> system call there. If that does not work for you, you can alternatively
>>>>> implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
>>>>> the clock used by lttng, and for instance use TSC directly. See for instance
>>>>> the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
>>>>
>>>> clock_gettime & Co for a Xenomai application is syscall-free as well.
>>>
>>> Yes, and that gave me a deadlock already, if a library is not compiled
>>> for Xenomai,
>>> it will either use the syscall (and you detect that immediately) or it
>>> will work most of the time,
>>> and lock up once in a while if a Linux thread took the "writer lock"
>>> of the VDSO structures
>>> and your high priority xenomai thread is busy waiting infinitely.
>>>
>>> Only sane approach would be to use either the xenomai function directly,
>>> or recreate the function (rdtsc + interpolation on x86).
>>
>> rdtsc is not portable, thus a no-go.
> 
> It's not portable, but you have equivalents on ARM and PowerPC,
> i.e. "do the same thing as Xenomai".

If you use existing code, I'm fine. Just don't invent something "new" here.

> 
>>> Either compiling/patching lttng for Cobalt (which I really would not
>>> want to do) or using a
>>> clock plugin.
>>
>> I suspect you will want to have at least a plugin that was built against
>> Xenomai libs.
> 
> That will then do a lot of other stuff, like spawning a printf thread.
> 
>>
>>> If the latter is supposed to be minimal, then that would mean I would
>>> have to get the
>>> interpolation factors cobalt uses (without bringing in libcobalt).
>>>
>>> Btw. the Xenomai and Linux monotonic clocks aren't synchronised at all
>>> AFAIK, so timestamps will
>>> be different to the rest of Linux.
>>
>> CLOCK_HOST_REALTIME is synchronized.
> 
> That's not monotonic?

Yeah, it's REALTIME, in sync with CLOCK_REALTIME of Linux.
CLOCK_MONOTONIC should have a static offset at worst. I think that could 
be resolved if it wasn't yet.

> 
>>
>>> On my last platform I did some tracing using an internal stamp and
>>> regularly wrote a
>>> block with internal and external timestamps so those could be
>>> converted "offline".
>>
>> Sounds not like something we want to promote.
> 
> This was a question to lttng and its tool environment. I suppose we
> weren't the first
> ones with multiple clocks in a system.
> If anything needs to be done in Xenomai it might be a concurrent
> readout of Linux/cobalt time(s),
> the rest would be done offline, potentially on another system.

Sure, doable, but I prefer not having to do that.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux


* Re: Using lttng-ust with xenomai
       [not found]     ` <CADYdroOh+T8pOcNBW74KSMfCh--ujD8L3_G96LWR1migpsUq0g@mail.gmail.com>
  2019-11-22 17:52       ` Jan Kiszka
       [not found]       ` <63a51fa5-db96-9cf9-0eb3-51954ebf98f4@siemens.com>
@ 2019-11-22 19:00       ` Mathieu Desnoyers
       [not found]       ` <2088324778.1063.1574449205629.JavaMail.zimbra@efficios.com>
  3 siblings, 0 replies; 16+ messages in thread
From: Mathieu Desnoyers @ 2019-11-22 19:00 UTC (permalink / raw)
  To: Norbert Lange; +Cc: Jan Kiszka, lttng-dev, paulmck, Xenomai

----- On Nov 22, 2019, at 12:44 PM, Norbert Lange nolange79@gmail.com wrote:

> Am Fr., 22. Nov. 2019 um 16:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
>>
>> On 22.11.19 16:42, Mathieu Desnoyers wrote:

[...]

> 
> 
>> >
>> > That's indeed a good point. I suspect membarrier may not send any IPI
>> > to Xenomai threads (that would have to be confirmed). I suspect the
>> > latency introduced by this IPI would be unwanted.
>>
>> Is an "IPI" a POSIX signal here? Or are real IPI that delivers an
>> interrupt to Linux on another CPU? The latter would still be possible,
>> but it would be delayed until all Xenomai threads on that core eventually
>> took a break (which should happen a couple of times per second under
>> normal conditions - 100% RT load is an illegal application state).
> 
> Not POSIX, some inter-thread interrupts. point is the syscall waits
> for the set of
> registered *running* Linux threads.

Just a small clarification: the PRIVATE membarrier command does not *wait*
for other threads, but it rather ensures that all other running threads
have had IPIs that issue memory barriers before it returns.

This is just a building block that can be used to speed up stuff like liburcu
and JIT memory reclaim.
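Concretely, a process first registers its intent and can then use the command as a cheap process-wide barrier over its own running threads; a minimal sketch of that pattern (kernels without CONFIG_MEMBARRIER or without the expedited commands report ENOSYS/EINVAL):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Register once, then issue expedited barriers covering all currently
 * running threads of this process. Returns 0 on success, nonzero with
 * errno set otherwise. */
static int membarrier_private_expedited(void)
{
	static int registered;

	if (!registered) {
		if (syscall(__NR_membarrier,
			    MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0))
			return -1;
		registered = 1;
	}
	return syscall(__NR_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
}
```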

[...]

>> >
>> > Another thing to make sure is to have a glibc and Linux kernel which perform
>> > clock_gettime() as vDSO for the monotonic clock, because you don't want a
>> > system call there. If that does not work for you, you can alternatively
>> > implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
>> > the clock used by lttng, and for instance use TSC directly. See for instance
>> > the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
>>
>> clock_gettime & Co for a Xenomai application is syscall-free as well.
> 
> Yes, and that gave me a deadlock already, if a library is not compiled
> for Xenomai,
> it will either use the syscall (and you detect that immediately) or it
> will work most of the time,
> and lock up once in a while if a Linux thread took the "writer lock"
> of the VDSO structures
> and your high priority xenomai thread is busy waiting infinitely.
> 
> Only sane approach would be to use either the xenomai function directly,
> or recreate the function (rdtsc + interpolation on x86).
> Either compiling/patching lttng for Cobalt (which I really would not
> want to do) or using a
> clock plugin.
> If the latter is supposed to be minimal, then that would mean I would
> have to get the
> interpolation factors cobalt uses (without bringing in libcobalt).
> 
> Btw. the Xenomai and Linux monotonic clocks aren't synchronised at all
> AFAIK, so timestamps will
> be different to the rest of Linux.
> On my last platform I did some tracing using an internal stamp and
> regularly wrote a
> block with internal and external timestamps so those could be
> converted "offline".
> Anything similar with lttng or tools handling the traces?

Can a Xenomai thread issue clock_gettime(CLOCK_MONOTONIC)?

AFAIK we don't have tooling to do what you describe out of the box,
but it could probably be implemented as a babeltrace 2 filter plugin.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


* Re: Using lttng-ust with xenomai
       [not found]   ` <CADYdroNGcY6adA5cGTzwzsKqO7+iuWts9k8Haz8k3HSvzQfc=g@mail.gmail.com>
@ 2019-11-22 19:03     ` Mathieu Desnoyers
       [not found]     ` <1007669875.1091.1574449410739.JavaMail.zimbra@efficios.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Mathieu Desnoyers @ 2019-11-22 19:03 UTC (permalink / raw)
  To: Norbert Lange; +Cc: Jan Kiszka, lttng-dev, paulmck, Xenomai

----- On Nov 22, 2019, at 12:55 PM, Norbert Lange nolange79@gmail.com wrote:

>>
>> LTTng-UST prepares the ring buffers from lttng-ust's "listener" thread,
>> which is injected into the process by a lttng-ust constructor.
>>
>> What you will care about is how the tracepoint call-site (within a Xenomai
>> thread) interacts with the ring buffers.
>>
>> The "default" setup for lttng-ust ring buffers is not suitable for Xenomai
>> threads. The lttng-ust ring buffer is split into sub-buffers, each sub-buffer
>> corresponding to a CTF trace "packet". When a sub-buffer is filled, lttng-ust
>> invokes "write(2)" to a pipe to let the consumer daemon know there is data
>> available in that ring buffer. You will want to get rid of that write(2) system
>> call from a Xenomai thread.
>>
>> The proper configuration is to use lttng-enable-channel(1) "--read-timer"
>> option (see https://lttng.org/docs/v2.11/#doc-channel-read-timer). This will
>> ensure that the consumer daemon uses a polling approach to check periodically
>> whether data needs to be consumed within each buffer, thus removing the
>> use of the write(2) system call on the application-side.
> 
> Ah thanks.
> 
> But that's configuration outside of the RT app if I understand this correctly.
> So if one configures a tracer wrong, then the app will suddenly misbehave.
> Would be nice to be able to somehow tell that there is only read-timer allowed.

So an RT application would prohibit tracing to non-RT ring buffers? IOW, if a
channel is configured without the --read-timer option, nothing would appear from
the RT threads in those buffers.

Should this be per-process or per-thread ?

> 
> 
>>
>> > liburcu has configure options allow forcing the usage of this syscall
>> > but not disabling it, which likely is necessary for Xenomai.
>>
>> I suspect what you'd need there is a way to allow a process to tell
>> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
>> not rely on sys_membarrier. This could be allowed before the first use of
>> the library. I think extending the liburcu APIs to allow this should be
>> straightforward enough. This approach would be more flexible than requiring
>> liburcu to be specialized at configure time. This new API would return an error
>> if invoked with a liburcu library compiled with
>> --disable-sys-membarrier-fallback.
> 
> I was under the impression, that you counted clock-cycles for every operation ;)

Well it's just a new API that allows tweaking the state of a boolean which controls
branches which are already there on the fast-path. ;)

> Not sure, maybe a separate lib for realtime is the better way. Having no option
> can be considered foolproof, and sideeffects of the syscall not working would be
> a real pain.

e.g. a liburcu-bp-rt.so ? That would bring interesting integration challenges with
lttng-ust though. Should we then build a liblttng-ust-rt.so as well ?

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


* Re: Using lttng-ust with xenomai
       [not found]       ` <2088324778.1063.1574449205629.JavaMail.zimbra@efficios.com>
@ 2019-11-22 19:57         ` Norbert Lange
       [not found]         ` <CADYdroM7acqfMym1sUbwaa773SLSzHPSni9uRxiLZbbHtteLug@mail.gmail.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Norbert Lange @ 2019-11-22 19:57 UTC (permalink / raw)
  To: Mathieu Desnoyers; +Cc: Jan Kiszka, lttng-dev, paulmck, Xenomai

Am Fr., 22. Nov. 2019 um 20:00 Uhr schrieb Mathieu Desnoyers
<mathieu.desnoyers@efficios.com>:
>
> ----- On Nov 22, 2019, at 12:44 PM, Norbert Lange nolange79@gmail.com wrote:
>
> > Am Fr., 22. Nov. 2019 um 16:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
> >>
> >> On 22.11.19 16:42, Mathieu Desnoyers wrote:
>
> [...]
>
> >
> >
> >> >
> >> > That's indeed a good point. I suspect membarrier may not send any IPI
> >> > to Xenomai threads (that would have to be confirmed). I suspect the
> >> > latency introduced by this IPI would be unwanted.
> >>
> >> Is an "IPI" a POSIX signal here? Or are real IPI that delivers an
> >> interrupt to Linux on another CPU? The latter would still be possible,
> >> but it would be delayed until all Xenomai threads on that core eventually
> >> took a break (which should happen a couple of times per second under
> >> normal conditions - 100% RT load is an illegal application state).
> >
> > Not POSIX, some inter-thread interrupts. point is the syscall waits
> > for the set of
> > registered *running* Linux threads.
>
> Just a small clarification: the PRIVATE membarrier command does not *wait*
> for other threads, but it rather ensures that all other running threads
> have had IPIs that issue memory barriers before it returns.

OK, normal Linux IRQs have to wait until Xenomai gives the cores back;
hence the waiting.

>
> >> >
> >> > Another thing to make sure is to have a glibc and Linux kernel which perform
> >> > clock_gettime() as vDSO for the monotonic clock, because you don't want a
> >> > system call there. If that does not work for you, you can alternatively
> >> > implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
> >> > the clock used by lttng, and for instance use TSC directly. See for instance
> >> > the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
> >>
> >> clock_gettime & Co for a Xenomai application is syscall-free as well.
> >
> > Yes, and that gave me a deadlock already, if a library is not compiled
> > for Xenomai,
> > it will either use the syscall (and you detect that immediately) or it
> > will work most of the time,
> > and lock up once in a while if a Linux thread took the "writer lock"
> > of the VDSO structures
> > and your high priority xenomai thread is busy waiting infinitely.
> >
> > Only sane approach would be to use either the xenomai function directly,
> > or recreate the function (rdtsc + interpolation on x86).
> > Either compiling/patching lttng for Cobalt (which I really would not
> > want to do) or using a
> > clock plugin.
> > If the latter is supposed to be minimal, then that would mean I would
> > have to get the
> > interpolation factors cobalt uses (without bringing in libcobalt).
> >
> > Btw. the Xenomai and Linux monotonic clocks aren't synchronised at all
> > AFAIK, so timestamps will
> > be different to the rest of Linux.
> > On my last platform I did some tracing using an internal stamp and
> > regularly wrote a
> > block with internal and external timestamps so those could be
> > converted "offline".
> > Anything similar with lttng or tools handling the traces?
>
> Can a Xenomai thread issue clock_gettime(CLOCK_MONOTONIC)?

Yes, it can. If the call goes through the vDSO, then it mostly works,
and once in a while it deadlocks the system, if a Xenomai thread waits for a
spinlock that the Linux kernel owns and doesn't give back, as said thread will
not let the Linux kernel run (as described above).
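The mechanism behind that lockup is the sequence counter guarding the vDSO time data. A toy model of the reader/writer protocol (not the actual kernel code) makes the failure mode visible: if the writer is interrupted between its two counter increments and never runs again, the reader spins forever:

```c
#include <stdatomic.h>
#include <stdint.h>

struct seq_data {
	atomic_uint seq;  /* odd while the writer is mid-update */
	uint64_t ns;
};

static void seq_write(struct seq_data *d, uint64_t v)
{
	atomic_fetch_add(&d->seq, 1);  /* seq becomes odd: update begins */
	d->ns = v;
	atomic_fetch_add(&d->seq, 1);  /* seq even again: data is stable */
}

static uint64_t seq_read(struct seq_data *d)
{
	unsigned int s;
	uint64_t v;

	do {
		/* Spins forever if the writer was preempted mid-update and
		 * this higher-priority reader never lets it run again. */
		while ((s = atomic_load(&d->seq)) & 1)
			;
		v = d->ns;
	} while (atomic_load(&d->seq) != s);  /* retry if it changed */
	return v;
}
```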

>
> AFAIK we don't have tooling to do what you describe out of the box,
> but it could probably be implemented as a babeltrace 2 filter plugin.

There are a lot of ways to do that; I hoped for some standardized way.
regards, Norbert


* Re: Using lttng-ust with xenomai
       [not found]     ` <1007669875.1091.1574449410739.JavaMail.zimbra@efficios.com>
@ 2019-11-22 20:04       ` Norbert Lange
  0 siblings, 0 replies; 16+ messages in thread
From: Norbert Lange @ 2019-11-22 20:04 UTC (permalink / raw)
  To: Mathieu Desnoyers; +Cc: Jan Kiszka, lttng-dev, paulmck, Xenomai

Am Fr., 22. Nov. 2019 um 20:03 Uhr schrieb Mathieu Desnoyers
<mathieu.desnoyers@efficios.com>:
>
> ----- On Nov 22, 2019, at 12:55 PM, Norbert Lange nolange79@gmail.com wrote:
>
> >>
> >> LTTng-UST prepares the ring buffers from lttng-ust's "listener" thread,
> >> which is injected into the process by a lttng-ust constructor.
> >>
> >> What you will care about is how the tracepoint call-site (within a Xenomai
> >> thread) interacts with the ring buffers.
> >>
> >> The "default" setup for lttng-ust ring buffers is not suitable for Xenomai
> >> threads. The lttng-ust ring buffer is split into sub-buffers, each sub-buffer
> >> corresponding to a CTF trace "packet". When a sub-buffer is filled, lttng-ust
> >> invokes "write(2)" to a pipe to let the consumer daemon know there is data
> >> available in that ring buffer. You will want to get rid of that write(2) system
> >> call from a Xenomai thread.
> >>
> >> The proper configuration is to use lttng-enable-channel(1) "--read-timer"
> >> option (see https://lttng.org/docs/v2.11/#doc-channel-read-timer). This will
> >> ensure that the consumer daemon uses a polling approach to check periodically
> >> whether data needs to be consumed within each buffer, thus removing the
> >> use of the write(2) system call on the application-side.
> >
> > Ah thanks.
> >
> > But that's configuration outside of the RT app if I understand this correctly.
> > So if one configures a tracer wrong, then the app will suddenly misbehave.
> > Would be nice to be able to somehow tell that there is only read-timer allowed.
>
> So an RT application would prohibit tracing to non-RT ring buffers ? IOW, if a
> channel is configured without the --read-timer option, nothing would appear from
> the RT threads in those buffers.
>
> Should this be per-process or per-thread ?

I don't know the lttng internals; I'd give this as an option to the lttng
control thread, for the whole process?

> >> > liburcu has configure options allow forcing the usage of this syscall
> >> > but not disabling it, which likely is necessary for Xenomai.
> >>
> >> I suspect what you'd need there is a way to allow a process to tell
> >> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
> >> not rely on sys_membarrier. This could be allowed before the first use of
> >> the library. I think extending the liburcu APIs to allow this should be
> >> straightforward enough. This approach would be more flexible than requiring
> >> liburcu to be specialized at configure time. This new API would return an error
> >> if invoked with a liburcu library compiled with
> >> --disable-sys-membarrier-fallback.
> >
> > I was under the impression, that you counted clock-cycles for every operation ;)
>
> Well it's just a new API that allows tweaking the state of a boolean which controls
> branches which are already there on the fast-path. ;)
>
> > Not sure, maybe a separate lib for realtime is the better way. Having no option
> > can be considered foolproof, and sideeffects of the syscall not working would be
> > a real pain.
>
> e.g. a liburcu-bp-rt.so ? That would bring interesting integration challenges with
> lttng-ust though. Should we then build a liblttng-ust-rt.so as well ?

For my use case, there is a Xenomai system with everything compiled from
scratch, and that would be a compile-time option, no new names.

If you want something more generic, think of such a layout:

/usr/lib/liblttng-ust.so
/usr/lib/liblttng-ust-*.so
/usr/lib/liburcu-bp.so
/usr/xenomai/lib/liburcu-bp.so

Then compile your app with RUNPATH=/usr/xenomai/lib and the Xenomai flavour
of liburcu-bp.so should be picked up (I believe that works even for
preloaded libs).
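That RUNPATH scheme could be sketched as follows (file names are illustrative; --enable-new-dtags makes the linker emit RUNPATH rather than RPATH):

```shell
# Link the RT application so the dynamic linker searches /usr/xenomai/lib
# first, picking up the Xenomai flavour of liburcu-bp.so.
gcc -o rt_app rt_app.c -llttng-ust -lurcu-bp \
    -Wl,--enable-new-dtags -Wl,-rpath,/usr/xenomai/lib
ldd rt_app | grep liburcu-bp   # should resolve under /usr/xenomai/lib/
```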

Norbert


* Re: Using lttng-ust with xenomai
       [not found]         ` <CADYdroM7acqfMym1sUbwaa773SLSzHPSni9uRxiLZbbHtteLug@mail.gmail.com>
@ 2019-11-22 20:15           ` Mathieu Desnoyers
       [not found]           ` <547908110.1420.1574453756850.JavaMail.zimbra@efficios.com>
  1 sibling, 0 replies; 16+ messages in thread
From: Mathieu Desnoyers @ 2019-11-22 20:15 UTC (permalink / raw)
  To: Norbert Lange; +Cc: Jan Kiszka, lttng-dev, paulmck, Xenomai

----- On Nov 22, 2019, at 2:57 PM, Norbert Lange nolange79@gmail.com wrote:

> Am Fr., 22. Nov. 2019 um 20:00 Uhr schrieb Mathieu Desnoyers
> <mathieu.desnoyers@efficios.com>:
>>
>> ----- On Nov 22, 2019, at 12:44 PM, Norbert Lange nolange79@gmail.com wrote:
>>
>> > Am Fr., 22. Nov. 2019 um 16:52 Uhr schrieb Jan Kiszka <jan.kiszka@siemens.com>:
>> >>
>> >> On 22.11.19 16:42, Mathieu Desnoyers wrote:
>>
>> [...]
>>
>> >
>> >
>> >> >
>> >> > That's indeed a good point. I suspect membarrier may not send any IPI
>> >> > to Xenomai threads (that would have to be confirmed). I suspect the
>> >> > latency introduced by this IPI would be unwanted.
>> >>
>> >> Is an "IPI" a POSIX signal here? Or are real IPI that delivers an
>> >> interrupt to Linux on another CPU? The latter would still be possible,
>> >> but it would be delayed until all Xenomai threads on that core eventually
>> >> took a break (which should happen a couple of times per second under
>> >> normal conditions - 100% RT load is an illegal application state).
>> >
>> > Not POSIX, some inter-thread interrupts. point is the syscall waits
>> > for the set of
>> > registered *running* Linux threads.
>>
>> Just a small clarification: the PRIVATE membarrier command does not *wait*
>> for other threads, but it rather ensures that all other running threads
>> have had IPIs that issue memory barriers before it returns.
> 
> Ok, normal Linux IRQs have to wait until Xenomai gives the cores back,
> hence the waiting.

In the case of membarrier, IPIs are only sent to CPUs whose runqueues
show that the currently running thread belongs to the same process
(for the PRIVATE command). So in this case we would not be sending
any IPI to the cores running Xenomai threads.

> 
>>
>> >> >
>> >> > Another thing to make sure is to have a glibc and Linux kernel which perform
>> >> > clock_gettime() as vDSO for the monotonic clock, because you don't want a
>> >> > system call there. If that does not work for you, you can alternatively
>> >> > implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
>> >> > the clock used by lttng, and for instance use TSC directly. See for instance
>> >> > the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
>> >>
>> >> clock_gettime & Co for a Xenomai application is syscall-free as well.
>> >
>> > Yes, and that gave me a deadlock already: if a library is not compiled
>> > for Xenomai, it will either use the syscall (and you detect that
>> > immediately), or it will work most of the time and lock up once in a
>> > while if a Linux thread took the "writer lock" of the vDSO structures
>> > and your high-priority Xenomai thread busy-waits infinitely.
>> >
>> > The only sane approach would be to use either the Xenomai function
>> > directly, or recreate the function (rdtsc + interpolation on x86):
>> > either compiling/patching lttng for Cobalt (which I really would not
>> > want to do) or using a clock plugin.
>> > If the latter is supposed to be minimal, that would mean I would have
>> > to get the interpolation factors Cobalt uses (without bringing in
>> > libcobalt).
>> >
>> > Btw. the Xenomai and Linux monotonic clocks aren't synchronized at all
>> > AFAIK, so timestamps will be different from the rest of Linux.
>> > On my last platform I did some tracing using an internal timestamp and
>> > regularly wrote a block with internal and external timestamps so those
>> > could be converted "offline".
>> > Is there anything similar in lttng or the tools handling the traces?
>>
>> Can a Xenomai thread issue clock_gettime(CLOCK_MONOTONIC) ?
> 
> Yes it can; if the call goes through the vDSO, then it mostly works.
> And once in a while it deadlocks the system, if a Xenomai thread waits for a
> spinlock that the Linux kernel owns and doesn't give back, as said thread
> will not let the Linux kernel run (as described above).

Ah, yes, read seqlock can be tricky in that kind of scenario indeed.

Then what we'd need is the NMI-safe monotonic clock that went into the
Linux kernel a while ago. It's called "monotonic fast", but really what
it does is remove the need for a read-seqlock. AFAIK it's not
exposed through the vDSO at the moment, though.

Thanks,

Mathieu

> 
>>
>> AFAIK we don't have tooling to do what you describe out of the box,
>> but it could probably be implemented as a babeltrace 2 filter plugin.
> 
> There are a lot of ways to do that; I hoped for some standardized way.
> regards, Norbert

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


* Re: [lttng-dev] Using lttng-ust with xenomai
       [not found]           ` <547908110.1420.1574453756850.JavaMail.zimbra@efficios.com>
@ 2019-11-22 21:27             ` Norbert Lange via Xenomai
  0 siblings, 0 replies; 16+ messages in thread
From: Norbert Lange via Xenomai @ 2019-11-22 21:27 UTC (permalink / raw)
  To: Mathieu Desnoyers; +Cc: lttng-dev, paulmck, Xenomai

> >> >> > Another thing to make sure is to have a glibc and Linux kernel which perform
> >> >> > clock_gettime() as vDSO for the monotonic clock, because you don't want a
> >> >> > system call there. If that does not work for you, you can alternatively
> >> >> > implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
> >> >> > the clock used by lttng, and for instance use TSC directly. See for instance
> >> >> > the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
> >> >>
> >> >> clock_gettime & Co for a Xenomai application is syscall-free as well.
> >> >
> >> > Yes, and that gave me a deadlock already: if a library is not compiled
> >> > for Xenomai, it will either use the syscall (and you detect that
> >> > immediately), or it will work most of the time and lock up once in a
> >> > while if a Linux thread took the "writer lock" of the vDSO structures
> >> > and your high-priority Xenomai thread busy-waits infinitely.
> >> >
> >> > The only sane approach would be to use either the Xenomai function
> >> > directly, or recreate the function (rdtsc + interpolation on x86):
> >> > either compiling/patching lttng for Cobalt (which I really would not
> >> > want to do) or using a clock plugin.
> >> > If the latter is supposed to be minimal, that would mean I would have
> >> > to get the interpolation factors Cobalt uses (without bringing in
> >> > libcobalt).
> >> >
> >> > Btw. the Xenomai and Linux monotonic clocks aren't synchronized at all
> >> > AFAIK, so timestamps will be different from the rest of Linux.
> >> > On my last platform I did some tracing using an internal timestamp and
> >> > regularly wrote a block with internal and external timestamps so those
> >> > could be converted "offline".
> >> > Is there anything similar in lttng or the tools handling the traces?
> >>
> >> Can a Xenomai thread issue clock_gettime(CLOCK_MONOTONIC) ?
> >
> > Yes it can; if the call goes through the vDSO, then it mostly works.
> > And once in a while it deadlocks the system, if a Xenomai thread waits for a
> > spinlock that the Linux kernel owns and doesn't give back, as said thread
> > will not let the Linux kernel run (as described above).
>
> Ah, yes, read seqlock can be tricky in that kind of scenario indeed.
>
> Then what we'd need is the nmi-safe monotonic clock that went into the
> Linux kernel a while ago. It's called "monotonic fast", but really what
> it does is to remove the need to use a read-seqlock. AFAIK it's not
> exposed through the vDSO at the moment though.

An easy-to-use, consistent clock between Linux and Xenomai? That should
be the ultimate goal.
But I think it's way less intrusive to just make the existing vDSO
reads/writes safe by using the same scheme of atomic modification count +
alternating buffers.

The vDSO is weird anyway; CLOCK_MONOTONIC_RAW was missing for a long
time (or still is?).

Norbert


* Re: Using lttng-ust with xenomai
       [not found]           ` <a2036ba8-c22a-e549-b68d-35524ee7f9a9@siemens.com>
@ 2019-11-22 21:38             ` Norbert Lange
  0 siblings, 0 replies; 16+ messages in thread
From: Norbert Lange @ 2019-11-22 21:38 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: lttng-dev, paulmck, Xenomai

> >>>>>> liburcu has configure options that allow forcing the usage of this
> >>>>>> syscall but not disabling it, which is likely necessary for Xenomai.
> >>>>>
> >>>>> I suspect what you'd need there is a way to allow a process to tell
> >>>>> liburcu-bp (or liburcu) to always use the fall-back mechanism which does
> >>>>> not rely on sys_membarrier. This could be allowed before the first use of
> >>>>> the library. I think extending the liburcu APIs to allow this should be
> >>>>> straightforward enough. This approach would be more flexible than requiring
> >>>>> liburcu to be specialized at configure time. This new API would return an error
> >>>>> if invoked with a liburcu library compiled with --disable-sys-membarrier-fallback.
> >>>>>
> >>>>> If you have control over your entire system's kernel, you may want to try
> >>>>> just configuring the kernel within CONFIG_MEMBARRIER=n in the meantime.
> >>>>>
> >>>>> Another thing to make sure is to have a glibc and Linux kernel which perform
> >>>>> clock_gettime() as vDSO for the monotonic clock, because you don't want a
> >>>>> system call there. If that does not work for you, you can alternatively
> >>>>> implement your own lttng-ust and lttng-modules clock plugin .so/.ko to override
> >>>>> the clock used by lttng, and for instance use TSC directly. See for instance
> >>>>> the lttng-ust(3) LTTNG_UST_CLOCK_PLUGIN environment variable.
> >>>>
> >>>> clock_gettime & Co for a Xenomai application is syscall-free as well.
> >>>
> >>> Yes, and that gave me a deadlock already: if a library is not compiled
> >>> for Xenomai, it will either use the syscall (and you detect that
> >>> immediately), or it will work most of the time and lock up once in a
> >>> while if a Linux thread took the "writer lock" of the vDSO structures
> >>> and your high-priority Xenomai thread busy-waits infinitely.
> >>>
> >>> The only sane approach would be to use either the Xenomai function
> >>> directly, or recreate the function (rdtsc + interpolation on x86).
> >>
> >> rdtsc is not portable, thus a no-go.
> >
> > It's not portable, but you have equivalents on ARM, PowerPC.
> > I.e. "do the same thing as Xenomai".
>
> If you use existing code, I'm fine. Just don't invent something "new" here.

The idea is to build the lttng plugin from the same code,
just using the things that are necessary for reading the monotonic clock.

> >>> Either compiling/patching lttng for Cobalt (which I really would not
> >>> want to do) or using a
> >>> clock plugin.
> >>
> >> I suspect you will want to have at least a plugin that was built against
> >> Xenomai libs.
> >
> > That will then do a lot of other stuff, like spawning a printf thread.
> >
> >>
> >>> If the latter is supposed to be minimal, that would mean I would have
> >>> to get the interpolation factors Cobalt uses (without bringing in
> >>> libcobalt).
> >>>
> >>> Btw. the Xenomai and Linux monotonic clocks aren't synchronized at all
> >>> AFAIK, so timestamps will be different from the rest of Linux.
> >>
> >> CLOCK_HOST_REALTIME is synchronized.
> >
> > That's not monotonic?
>
> Yeah, it's REALTIME, in synch with CLOCK_REALTIME of Linux.
> CLOCK_MONOTONIC should have a static offset at worst. I think that could
> be resolved if it wasn't yet.

Linux CLOCK_MONOTONIC is skew-corrected to increment at the same rate as
CLOCK_REALTIME.
You might have a chance with Linux CLOCK_MONOTONIC_RAW,
if you use the identical scaling method.

>
> >
> >>
> >>> On my last platform I did some tracing using an internal timestamp and
> >>> regularly wrote a block with internal and external timestamps so those
> >>> could be converted "offline".
> >>
> >> That doesn't sound like something we want to promote.
> >
> > This was a question to lttng and its tool environment. I suppose we
> > weren't the first ones with multiple clocks in a system.
> > If anything needs to be done in Xenomai it might be a concurrent
> > readout of the Linux/Cobalt time(s); the rest would be done offline,
> > potentially on another system.
>
> Sure, doable, but I prefer not having to do that.

It might offer a lot of flexibility you don't get otherwise (without a ton of work).

Norbert


end of thread, other threads:[~2019-11-22 21:38 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CADYdroN-i9yrpd-wjPSW36GUptRV+kOCJT=Yv6+Z5sCVBmo_SQ@mail.gmail.com>
2019-11-22 15:42 ` Using lttng-ust with xenomai Mathieu Desnoyers
     [not found] ` <2012667816.853.1574437363737.JavaMail.zimbra@efficios.com>
2019-11-22 15:52   ` Jan Kiszka
     [not found]   ` <4aab99be-5451-4582-f75d-7637614b1d37@siemens.com>
2019-11-22 17:01     ` Mathieu Desnoyers
     [not found]     ` <480743920.929.1574442078799.JavaMail.zimbra@efficios.com>
2019-11-22 17:36       ` Jan Kiszka
2019-11-22 17:44     ` Norbert Lange
     [not found]     ` <CADYdroOh+T8pOcNBW74KSMfCh--ujD8L3_G96LWR1migpsUq0g@mail.gmail.com>
2019-11-22 17:52       ` Jan Kiszka
     [not found]       ` <63a51fa5-db96-9cf9-0eb3-51954ebf98f4@siemens.com>
2019-11-22 18:01         ` Norbert Lange
     [not found]         ` <CADYdroP+H3DiqCaH9o1jHxAurh_2YG_p7MK4H2kFqSoTCV0w6A@mail.gmail.com>
2019-11-22 18:07           ` Jan Kiszka
     [not found]           ` <a2036ba8-c22a-e549-b68d-35524ee7f9a9@siemens.com>
2019-11-22 21:38             ` Norbert Lange
2019-11-22 19:00       ` Mathieu Desnoyers
     [not found]       ` <2088324778.1063.1574449205629.JavaMail.zimbra@efficios.com>
2019-11-22 19:57         ` Norbert Lange
     [not found]         ` <CADYdroM7acqfMym1sUbwaa773SLSzHPSni9uRxiLZbbHtteLug@mail.gmail.com>
2019-11-22 20:15           ` Mathieu Desnoyers
     [not found]           ` <547908110.1420.1574453756850.JavaMail.zimbra@efficios.com>
2019-11-22 21:27             ` [lttng-dev] " Norbert Lange via Xenomai
2019-11-22 17:55   ` Norbert Lange
     [not found]   ` <CADYdroNGcY6adA5cGTzwzsKqO7+iuWts9k8Haz8k3HSvzQfc=g@mail.gmail.com>
2019-11-22 19:03     ` Mathieu Desnoyers
     [not found]     ` <1007669875.1091.1574449410739.JavaMail.zimbra@efficios.com>
2019-11-22 20:04       ` Norbert Lange
