* [Xenomai] interrupt service
@ 2015-02-18 22:03 Lowell Gilbert
  2015-02-18 22:08 ` Gilles Chanteperdrix
  0 siblings, 1 reply; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-18 22:03 UTC (permalink / raw)
  To: xenomai

Hi.

I have a kernel task created with rtdm_task_init(). I can wake it up
from my ioctl handler in non-RT, but not from inside my ISR, which was
hooked with rtdm_irq_request(). I tried it with a semaphore, with an
event, and then with just rtdm_task_unblock(). I'm probably doing
something silly here; are there any obvious places to look?
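
The pattern boils down to something like this (a minimal sketch rather
than my actual driver; the IRQ number, names and module boilerplate are
made up, and error handling/cleanup is omitted):

#include <linux/module.h>
#include <rtdm/rtdm_driver.h>

#define MY_IRQ 42				/* made-up IRQ number */

static rtdm_event_t tick_ev;
static rtdm_task_t worker;
static rtdm_irq_t irq_handle;

/* Bottom half: blocks until the ISR signals the event. */
static void worker_proc(void *arg)
{
	for (;;) {
		if (rtdm_event_wait(&tick_ev) < 0)
			break;		/* event destroyed or task unblocked */
		/* ... periodic work ... */
	}
}

/* Top half: nothing but waking the worker up. */
static int my_isr(rtdm_irq_t *handle)
{
	rtdm_event_signal(&tick_ev);
	return RTDM_IRQ_HANDLED;
}

static int __init my_init(void)
{
	rtdm_event_init(&tick_ev, 0);
	rtdm_task_init(&worker, "worker", worker_proc, NULL,
		       RTDM_TASK_HIGHEST_PRIORITY, 0);
	return rtdm_irq_request(&irq_handle, MY_IRQ, my_isr, 0,
				"skeleton", NULL);
}
module_init(my_init);
MODULE_LICENSE("GPL");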

Thanks.



* Re: [Xenomai] interrupt service
  2015-02-18 22:03 [Xenomai] interrupt service Lowell Gilbert
@ 2015-02-18 22:08 ` Gilles Chanteperdrix
  2015-02-19  4:44   ` Lowell Gilbert
  0 siblings, 1 reply; 34+ messages in thread
From: Gilles Chanteperdrix @ 2015-02-18 22:08 UTC (permalink / raw)
  To: Lowell Gilbert; +Cc: xenomai

On Wed, Feb 18, 2015 at 05:03:33PM -0500, Lowell Gilbert wrote:
> Hi.
> 
> I have a kernel task created with rtdm_task_init(). I can wake it up
> from my ioctl handler in non-RT, but not from inside my ISR, which was
> hooked with rtdm_irq_request(). I tried it with a semaphore, with an
> event, and then with just rtdm_task_unblock(). I'm probably doing
> something silly here; are there any obvious places to look?

Are you sure the irq handler is actually called ?

-- 
					    Gilles.



* Re: [Xenomai] interrupt service
  2015-02-18 22:08 ` Gilles Chanteperdrix
@ 2015-02-19  4:44   ` Lowell Gilbert
  2015-02-19 21:06     ` Lowell Gilbert
  2015-02-20 19:38     ` Lowell Gilbert
  0 siblings, 2 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-19  4:44 UTC (permalink / raw)
  To: Gilles Chanteperdrix; +Cc: xenomai

Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:

> On Wed, Feb 18, 2015 at 05:03:33PM -0500, Lowell Gilbert wrote:
>> Hi.
>> 
>> I have a kernel task created with rtdm_task_init(). I can wake it up
>> from my ioctl handler in non-RT, but not from inside my ISR, which was
>> hooked with rtdm_irq_request(). I tried it with a semaphore, with an
>> event, and then with just rtdm_task_unblock(). I'm probably doing
>> something silly here; are there any obvious places to look?
>
> Are you sure the irq handler is actually called ?

Yes.

[I increment a variable every time the IRQ runs, just to be sure.]



* Re: [Xenomai] interrupt service
  2015-02-19  4:44   ` Lowell Gilbert
@ 2015-02-19 21:06     ` Lowell Gilbert
  2015-02-20 19:38     ` Lowell Gilbert
  1 sibling, 0 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-19 21:06 UTC (permalink / raw)
  To: xenomai

Lowell Gilbert <kludge@be-well.ilk.org> writes:

> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
>
>> On Wed, Feb 18, 2015 at 05:03:33PM -0500, Lowell Gilbert wrote:
>>> Hi.
>>> 
>>> I have a kernel task created with rtdm_task_init(). I can wake it up
>>> from my ioctl handler in non-RT, but not from inside my ISR, which was
>>> hooked with rtdm_irq_request(). I tried it with a semaphore, with an
>>> event, and then with just rtdm_task_unblock(). I'm probably doing
>>> something silly here; are there any obvious places to look?
>>
>> Are you sure the irq handler is actually called ?
>
> Yes.
>
> [I increment a variable every time the IRQ runs, just to be sure.]

I am attaching a sample program based on one of the Xenomai examples.

The bottom half never executes, although the top half does.

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: tut02-skeleton-drv.c
URL: <http://www.xenomai.org/pipermail/xenomai/attachments/20150219/42055153/attachment.c>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: tut02-skeleton-app.c
URL: <http://www.xenomai.org/pipermail/xenomai/attachments/20150219/42055153/attachment-0001.c>


* Re: [Xenomai] interrupt service
  2015-02-19  4:44   ` Lowell Gilbert
  2015-02-19 21:06     ` Lowell Gilbert
@ 2015-02-20 19:38     ` Lowell Gilbert
  2015-02-20 22:57       ` Gilles Chanteperdrix
  1 sibling, 1 reply; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-20 19:38 UTC (permalink / raw)
  To: Gilles Chanteperdrix, xenomai

Lowell Gilbert <kludge@be-well.ilk.org> writes:

> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
>
>> On Wed, Feb 18, 2015 at 05:03:33PM -0500, Lowell Gilbert wrote:
>>> Hi.
>>> 
>>> I have a kernel task created with rtdm_task_init(). I can wake it up
>>> from my ioctl handler in non-RT, but not from inside my ISR, which was
>>> hooked with rtdm_irq_request(). I tried it with a semaphore, with an
>>> event, and then with just rtdm_task_unblock(). I'm probably doing
>>> something silly here; are there any obvious places to look?
>>
>> Are you sure the irq handler is actually called ?
>
> Yes.
>
> [I increment a variable every time the IRQ runs, just to be sure.]

The mailing list stripped my code, so I'll attach it inline.

According to the statistics that get printed, the bottom half never
executes, although the top half does.

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: tut02-skeleton-drv.c
URL: <http://www.xenomai.org/pipermail/xenomai/attachments/20150220/6ada56c5/attachment.c>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: tut02-skeleton-app.c
URL: <http://www.xenomai.org/pipermail/xenomai/attachments/20150220/6ada56c5/attachment-0001.c>


* Re: [Xenomai] interrupt service
  2015-02-20 19:38     ` Lowell Gilbert
@ 2015-02-20 22:57       ` Gilles Chanteperdrix
  2015-02-24 23:01         ` Lowell Gilbert
  0 siblings, 1 reply; 34+ messages in thread
From: Gilles Chanteperdrix @ 2015-02-20 22:57 UTC (permalink / raw)
  To: Lowell Gilbert; +Cc: xenomai

On Fri, Feb 20, 2015 at 02:38:12PM -0500, Lowell Gilbert wrote:
> Lowell Gilbert <kludge@be-well.ilk.org> writes:
> 
> > Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
> >
> >> On Wed, Feb 18, 2015 at 05:03:33PM -0500, Lowell Gilbert wrote:
> >>> Hi.
> >>> 
> >>> I have a kernel task created with rtdm_task_init(). I can wake it up
> >>> from my ioctl handler in non-RT, but not from inside my ISR, which was
> >>> hooked with rtdm_irq_request(). I tried it with a semaphore, with an
> >>> event, and then with just rtdm_task_unblock(). I'm probably doing
> >>> something silly here; are there any obvious places to look?
> >>
> >> Are you sure the irq handler is actually called ?
> >
> > Yes.
> >
> > [I increment a variable every time the IRQ runs, just to be sure.]
> 
> The mailing list stripped my code, so I'll attach it inline.

It does not strip it, it puts it on a server that can be accessed
with http, so that only the people who want to see it download it,
instead of forcibly sending it to all the subscribers.

Will look at your code later. But at a quick glance I see nothing
wrong.


-- 
					    Gilles.



* Re: [Xenomai] interrupt service
  2015-02-20 22:57       ` Gilles Chanteperdrix
@ 2015-02-24 23:01         ` Lowell Gilbert
  2015-02-24 23:34           ` Gilles Chanteperdrix
  2015-02-25  8:30           ` Philippe Gerum
  0 siblings, 2 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-24 23:01 UTC (permalink / raw)
  To: xenomai

Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:

> On Fri, Feb 20, 2015 at 02:38:12PM -0500, Lowell Gilbert wrote:

>> The mailing list stripped my code, so I'll attach it inline.
>
> It does not strip it, it puts it on a server that can be accessed
> with http, so that only the people who want to see it download it,
> instead of forcibly sending it to all the subscribers.

And if I'd actually *read* the autogenerated text including the link,
I'd have known that...

> Will look at your code later. But at a quick glance I see nothing
> wrong.

That's unfortunate, because I'm kind of stuck on this. If I don't
resolve it soon my colleagues will move the real-time functionality into
hardware, which I really don't want to see.

I thought it might have been something in my kernel set-up, but I get
the same results after I worked my setup back to basics: latest 3.14
kernel, merged in the i-pipe code from the 3.14 branch in the
Xenomai.org repository, checked out the v2.6.4 release of Xenomai, ran
the prepare-kernel script, put the i-pipe TSC code back into smp_twd.c
to get a high-resolution clock. I may try Xenomai 3 if I have time.

Be well.



* Re: [Xenomai] interrupt service
  2015-02-24 23:01         ` Lowell Gilbert
@ 2015-02-24 23:34           ` Gilles Chanteperdrix
  2015-02-25 16:22             ` Lowell Gilbert
  2015-02-25  8:30           ` Philippe Gerum
  1 sibling, 1 reply; 34+ messages in thread
From: Gilles Chanteperdrix @ 2015-02-24 23:34 UTC (permalink / raw)
  To: Lowell Gilbert; +Cc: xenomai

On Tue, Feb 24, 2015 at 06:01:18PM -0500, Lowell Gilbert wrote:
> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
> 
> > On Fri, Feb 20, 2015 at 02:38:12PM -0500, Lowell Gilbert wrote:
> 
> >> The mailing list stripped my code, so I'll attach it inline.
> >
> > It does not strip it, it puts it on a server that can be accessed
> > with http, so that only the people who want to see it download it,
> > instead of forcibly sending it to all the subscribers.
> 
> And if I'd actually *read* the autogenerated text including the link,
> I'd have known that...
> 
> > Will look at your code later. But at a quick glance I see nothing
> > wrong.
> 
> That's unfortunate, because I'm kind of stuck on this. If I don't
> resolve it soon my colleagues will move the real-time functionality into
> hardware, which I really don't want to see.
> 
> I thought it might have been something in my kernel set-up, but I get
> the same results after I worked my setup back to basics: latest 3.14
> kernel, merged in the i-pipe code from the 3.14 branch in the
> Xenomai.org repository, checked out the v2.6.4 release of Xenomai, ran
> the prepare-kernel script,

To have a really basic configuration, you should use Xenomai 2.6.4
with the latest I-pipe patch for Linux 3.14, unmodified.

> put the i-pipe TSC code back into smp_twd.c
> to get a high-resolution clock.

The TSC code in smp_twd.c conflicts with the Linux global timer driver.
Simply enable the global timer driver if you want to use the global
timer as the TSC.

-- 
					    Gilles.



* Re: [Xenomai] interrupt service
  2015-02-24 23:01         ` Lowell Gilbert
  2015-02-24 23:34           ` Gilles Chanteperdrix
@ 2015-02-25  8:30           ` Philippe Gerum
  2015-02-25  9:36             ` Philippe Gerum
  1 sibling, 1 reply; 34+ messages in thread
From: Philippe Gerum @ 2015-02-25  8:30 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 02/25/2015 12:01 AM, Lowell Gilbert wrote:
> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
> 
>> On Fri, Feb 20, 2015 at 02:38:12PM -0500, Lowell Gilbert wrote:
> 
>>> The mailing list stripped my code, so I'll attach it inline.
>>
>> It does not strip it, it puts it on a server that can be accessed
>> with http, so that only the people who want to see it download it,
>> instead of forcibly sending it to all the subscribers.
> 
> And if I'd actually *read* the autogenerated text including the link,
> I'd have known that...
> 
>> Will look at your code later. But at a quick glance I see nothing
>> wrong.
> 
> That's unfortunate, because I'm kind of stuck on this. If I don't
> resolve it soon my colleagues will move the real-time functionality into
> hardware, which I really don't want to see.
> 
> I thought it might have been something in my kernel set-up, but I get
> the same results after I worked my setup back to basics: latest 3.14
> kernel, merged in the i-pipe code from the 3.14 branch in the
> Xenomai.org repository, checked out the v2.6.4 release of Xenomai, ran
> the prepare-kernel script, put the i-pipe TSC code back into smp_twd.c
> to get a high-resolution clock. I may try Xenomai 3 if I have time.
> 

Looks like the real-time core does not reschedule due to the wrong
status returned by the ISR. Does this patch help?

--- attachment.c~	2015-02-25 09:17:31.391445993 +0100
+++ attachment.c	2015-02-25 09:28:25.379426965 +0100
@@ -59,7 +59,7 @@
 	interrupts++;
 	rtdm_event_signal(&tick_ev);

-	return 0;
+	return RTDM_IRQ_HANDLED;
 }

 rtdm_irq_t irq_handle;	/* device IRQ handle */

-- 
Philippe.



* Re: [Xenomai] interrupt service
  2015-02-25  8:30           ` Philippe Gerum
@ 2015-02-25  9:36             ` Philippe Gerum
  0 siblings, 0 replies; 34+ messages in thread
From: Philippe Gerum @ 2015-02-25  9:36 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 02/25/2015 09:30 AM, Philippe Gerum wrote:
> On 02/25/2015 12:01 AM, Lowell Gilbert wrote:
>> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
>>
>>> On Fri, Feb 20, 2015 at 02:38:12PM -0500, Lowell Gilbert wrote:
>>
>>>> The mailing list stripped my code, so I'll attach it inline.
>>>
>>> It does not strip it, it puts it on a server that can be accessed
>>> with http, so that only the people who want to see it download it,
>>> instead of forcibly sending it to all the subscribers.
>>
>> And if I'd actually *read* the autogenerated text including the link,
>> I'd have known that...
>>
>>> Will look at your code later. But at a quick glance I see nothing
>>> wrong.
>>
>> That's unfortunate, because I'm kind of stuck on this. If I don't
>> resolve it soon my colleagues will move the real-time functionality into
>> hardware, which I really don't want to see.
>>
>> I thought it might have been something in my kernel set-up, but I get
>> the same results after I worked my setup back to basics: latest 3.14
>> kernel, merged in the i-pipe code from the 3.14 branch in the
>> Xenomai.org repository, checked out the v2.6.4 release of Xenomai, ran
>> the prepare-kernel script, put the i-pipe TSC code back into smp_twd.c
>> to get a high-resolution clock. I may try Xenomai 3 if I have time.
>>
> 
> Looks like the real-time core does not reschedule due to the wrong
> status returned by the ISR. Does this patch help?
> 
> --- attachment.c~	2015-02-25 09:17:31.391445993 +0100
> +++ attachment.c	2015-02-25 09:28:25.379426965 +0100
> @@ -59,7 +59,7 @@
>  	interrupts++;
>  	rtdm_event_signal(&tick_ev);
> 
> -	return 0;
> +	return RTDM_IRQ_HANDLED;
>  }
> 
>  rtdm_irq_t irq_handle;	/* device IRQ handle */
> 

Mm, I don't want to rain on the parade but looking closer at the code,
the core should reschedule despite the bogus return value from the ISR.
Anyway, it's worth fixing the driver first, before diving deeper.

-- 
Philippe.



* Re: [Xenomai] interrupt service
  2015-02-24 23:34           ` Gilles Chanteperdrix
@ 2015-02-25 16:22             ` Lowell Gilbert
  2015-02-25 17:34               ` Philippe Gerum
  0 siblings, 1 reply; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-25 16:22 UTC (permalink / raw)
  To: xenomai

Thanks for the help, even if I'm not making much progress yet.

Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:

> On Tue, Feb 24, 2015 at 06:01:18PM -0500, Lowell Gilbert wrote:

> To have a real basic configuration, you should use Xenomai 2.6.4 and
> with the latest I-pipe patch for Linux 3.14. Unmodified.

I went ahead and did that, just to be sure. Results were the same.

>> put the i-pipe TSC code back into smp_twd.c
>> to get a high-resolution clock.
>
> The TSC code in smp_twd.c conflicts with Linux global timer driver.
> Simply enable the global timer driver if you want to use the global
> timer as tsc.

Got it. Thanks.


Philippe Gerum <rpm@xenomai.org> writes:

> -	return 0;
> +	return RTDM_IRQ_HANDLED;

Yes, that bug crept in when I was building the simplified test case.
My original driver used the correct return values.


I'm still confused by the fact that the thread wakes up fine if it's
signalled (or unblocked, etc.) from a driver operation. But if the ISR
tries to do the same, it fails. In fact, if I use rtdm_task_unblock(), I
get an error (0) returned to the ISR.


Be well.



* Re: [Xenomai] interrupt service
  2015-02-25 16:22             ` Lowell Gilbert
@ 2015-02-25 17:34               ` Philippe Gerum
  2015-02-25 18:35                 ` Philippe Gerum
  2015-02-25 20:41                 ` Lowell Gilbert
  0 siblings, 2 replies; 34+ messages in thread
From: Philippe Gerum @ 2015-02-25 17:34 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 02/25/2015 05:22 PM, Lowell Gilbert wrote:
> Thanks for the help, even if I'm not making much progress yet.
> 
> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
> 
>> On Tue, Feb 24, 2015 at 06:01:18PM -0500, Lowell Gilbert wrote:
> 
>> To have a real basic configuration, you should use Xenomai 2.6.4 and
>> with the latest I-pipe patch for Linux 3.14. Unmodified.
> 
> I went ahead and did that, just to be sure. Results were the same.
> 
>>> put the i-pipe TSC code back into smp_twd.c
>>> to get a high-resolution clock.
>>
>> The TSC code in smp_twd.c conflicts with Linux global timer driver.
>> Simply enable the global timer driver if you want to use the global
>> timer as tsc.
> 
> Got it. Thanks.
> 
> 
> Philippe Gerum <rpm@xenomai.org> writes:
> 
>> -	return 0;
>> +	return RTDM_IRQ_HANDLED;
> 
> Yes, that bug crept in when I was building the simplified test case.
> My original driver used the correct return values.
> 
> 
> I'm still confused by the fact that the thread wakes up fine if it's
> signalled (or unblocked, etc.) from a driver operation. But if the ISR
> tries to do the same, it fails. In fact, if I use rtdm_task_unblock(), I
> get an error (0) returned to the ISR.
> 
> 

We need to know what happens in the kernel from the ISR then all along
the IRQ exit path. To this end, please enable CONFIG_IPIPE_TRACE in the
kernel configuration. Assuming you run Xenomai 2.6.4, you will need to
patch this snippet into the Xenomai kernel code:

diff --git a/ksrc/nucleus/intr.c b/ksrc/nucleus/intr.c
index ef36036..08ee192 100644
--- a/ksrc/nucleus/intr.c
+++ b/ksrc/nucleus/intr.c
@@ -433,6 +433,9 @@ static inline int xnintr_irq_detach(xnintr_t *intr)

 #endif /* !CONFIG_XENO_OPT_SHIRQ */

+int debug_event_signal;
+EXPORT_SYMBOL(debug_event_signal);
+
 /*
  * Low-level interrupt handler dispatching non-shared ISRs -- Called with
  * interrupts off.
@@ -504,6 +507,9 @@ static void xnintr_irq_handler(unsigned irq, void *cookie)
 	}

 	trace_mark(xn_nucleus, irq_exit, "irq %u", irq);
+
+	if (debug_event_signal)
+		ipipe_trace_freeze(0xfefefefe);
 }

 int __init xnintr_mount(void)

Then, in your driver code, add this line:

--- attachment.c~	2015-02-25 09:17:31.391445993 +0100
+++ attachment.c	2015-02-25 18:31:34.503299675 +0100
@@ -54,12 +54,15 @@

 }

+extern int debug_event_signal;
+
 int top_half_isr(rtdm_irq_t * handle)
 {
 	interrupts++;
 	rtdm_event_signal(&tick_ev);
+	debug_event_signal = 1;

	return RTDM_IRQ_HANDLED;
 }

 rtdm_irq_t irq_handle;	/* device IRQ handle */

Once the target has booted:

# echo 1024 > /proc/ipipe/trace/back_trace_points

Then run the test. The output of /proc/ipipe/trace/frozen may help in
figuring out what happens.

-- 
Philippe.



* Re: [Xenomai] interrupt service
  2015-02-25 17:34               ` Philippe Gerum
@ 2015-02-25 18:35                 ` Philippe Gerum
  2015-02-25 20:41                 ` Lowell Gilbert
  1 sibling, 0 replies; 34+ messages in thread
From: Philippe Gerum @ 2015-02-25 18:35 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 02/25/2015 06:34 PM, Philippe Gerum wrote:
> On 02/25/2015 05:22 PM, Lowell Gilbert wrote:
>> Thanks for the help, even if I'm not making much progress yet.
>>
>> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
>>
>>> On Tue, Feb 24, 2015 at 06:01:18PM -0500, Lowell Gilbert wrote:
>>
>>> To have a real basic configuration, you should use Xenomai 2.6.4 and
>>> with the latest I-pipe patch for Linux 3.14. Unmodified.
>>
>> I went ahead and did that, just to be sure. Results were the same.
>>
>>>> put the i-pipe TSC code back into smp_twd.c
>>>> to get a high-resolution clock.
>>>
>>> The TSC code in smp_twd.c conflicts with Linux global timer driver.
>>> Simply enable the global timer driver if you want to use the global
>>> timer as tsc.
>>
>> Got it. Thanks.
>>
>>
>> Philippe Gerum <rpm@xenomai.org> writes:
>>
>>> -	return 0;
>>> +	return RTDM_IRQ_HANDLED;
>>
>> Yes, that bug crept in when I was building the simplified test case.
>> My original driver used the correct return values.
>>
>>
>> I'm still confused by the fact that the thread wakes up fine if it's
>> signalled (or unblocked, etc.) from a driver operation. But if the ISR
>> tries to do the same, it fails. In fact, if I use rtdm_task_unblock(), I
>> get an error (0) returned to the ISR.
>>
>>
> 
> We need to know what happens in the kernel from the ISR then all along
> the IRQ exit path. To this end, please enable CONFIG_IPIPE_TRACE in the
> kernel configuration. Assuming you run Xenomai 2.6.4, you will need to
> patch this snippet into the Xenomai kernel code:
> 
> diff --git a/ksrc/nucleus/intr.c b/ksrc/nucleus/intr.c
> index ef36036..08ee192 100644
> --- a/ksrc/nucleus/intr.c
> +++ b/ksrc/nucleus/intr.c
> @@ -433,6 +433,9 @@ static inline int xnintr_irq_detach(xnintr_t *intr)
> 
>  #endif /* !CONFIG_XENO_OPT_SHIRQ */
> 
> +int debug_event_signal;
> +EXPORT_SYMBOL(debug_event_signal);
> +
>  /*
>   * Low-level interrupt handler dispatching non-shared ISRs -- Called with
>   * interrupts off.
> @@ -504,6 +507,9 @@ static void xnintr_irq_handler(unsigned irq, void *cookie)
>  	}
> 
>  	trace_mark(xn_nucleus, irq_exit, "irq %u", irq);
> +
> +	if (debug_event_signal)
> +		ipipe_trace_freeze(0xfefefefe);
>  }
> 
>  int __init xnintr_mount(void)
> 

This patch assumes that CONFIG_XENO_OPT_SHIRQ is _disabled_ in your
Kconfig, which is the default setup.

-- 
Philippe.



* Re: [Xenomai] interrupt service
  2015-02-25 17:34               ` Philippe Gerum
  2015-02-25 18:35                 ` Philippe Gerum
@ 2015-02-25 20:41                 ` Lowell Gilbert
  2015-02-25 21:02                   ` Lowell Gilbert
  1 sibling, 1 reply; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-25 20:41 UTC (permalink / raw)
  To: xenomai

Philippe Gerum <rpm@xenomai.org> writes:

> # echo 1024 > /proc/ipipe/trace/back_trace_points
>
> Then run the test. The output of /proc/ipipe/trace/frozen may help in
> figuring out what happens.

I haven't had a chance to look at it yet, but it's in:

  http://be-well.ilk.org/~lowell/projects/ipipe.trace.output.txt



* Re: [Xenomai] interrupt service
  2015-02-25 20:41                 ` Lowell Gilbert
@ 2015-02-25 21:02                   ` Lowell Gilbert
  2015-02-26 11:19                     ` Philippe Gerum
  0 siblings, 1 reply; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-25 21:02 UTC (permalink / raw)
  To: xenomai

Lowell Gilbert <kludge@be-well.ilk.org> writes:

> Philippe Gerum <rpm@xenomai.org> writes:
>
>> # echo 1024 > /proc/ipipe/trace/back_trace_points
>>
>> Then run the test. The output of /proc/ipipe/trace/frozen may help in
>> figuring out what happens.
>
> I haven't had a chance to look at it yet, but it's in:
>
>   http://be-well.ilk.org/~lowell/projects/ipipe.trace.output.txt

I've got another one that seems more helpful:
http://be-well.ilk.org/~lowell/projects/xenomai/ipipe.trace2.output.txt



* Re: [Xenomai] interrupt service
  2015-02-25 21:02                   ` Lowell Gilbert
@ 2015-02-26 11:19                     ` Philippe Gerum
  2015-02-26 16:38                       ` Lowell Gilbert
  0 siblings, 1 reply; 34+ messages in thread
From: Philippe Gerum @ 2015-02-26 11:19 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 02/25/2015 10:02 PM, Lowell Gilbert wrote:
> Lowell Gilbert <kludge@be-well.ilk.org> writes:
> 
>> Philippe Gerum <rpm@xenomai.org> writes:
>>
>>> # echo 1024 > /proc/ipipe/trace/back_trace_points
>>>
>>> Then run the test. The output of /proc/ipipe/trace/frozen may help in
>>> figuring out what happens.
>>
>> I haven't had a chance to look at it yet, but it's in:
>>
>>   http://be-well.ilk.org/~lowell/projects/ipipe.trace.output.txt
> 
> I've got another one that seems more helpful:
> http://be-well.ilk.org/~lowell/projects/xenomai/ipipe.trace2.output.txt
> 

Your test code in user-space seems to enable the timing IRQ only for a
very short time, between the write and read calls to the driver which
happen in sequence.

Those two interrupts are very close in the time line, i.e. 42 us:

:|   +begin   0x90000000   -68	  0.960  __irq_svc+0x44
(ipipe_unstall_root+0x88)
...
:|  + begin   0x90000000   -26+   1.006  __irq_svc+0x44
(__ipipe_restore_head+0xec)

The trace also reveals a proper wake up and rescheduling sequence from
timestamp -51 and on:

:|  # func                 -51+   1.178  top_half_isr+0x10
[tut02_skeleton_drv] (xnintr_irq_handler+0x158)~
:|  # func                 -50+   1.920  rtdm_event_signal+0x14
(top_half_isr+0x2c [tut02_skeleton_drv])
:|  # func                 -48+   2.139  xnsynch_flush+0x14
(rtdm_event_signal+0x13c)
:|  # func                 -46+   1.324  xnpod_resume_thread+0x14
(xnsynch_flush+0x178)
:|  # [    0] -<?>-    0   -44+   3.774  xnpod_resume_thread+0x140
(xnsynch_flush+0x178)

<snip>

:|  # func                 -32+   1.523  __xnpod_schedule+0x14
(xnintr_irq_handler+0x3a4)
:|  # [    0] -<?>-   -1   -30	  0.953  __xnpod_schedule+0x1d4
(xnintr_irq_handler+0x3a4)

Then a second interrupt, which happens before the RTDM task had a chance
to block on rtdm_event_wait() again, therefore no rescheduling has to
take place since the RTDM task is still in ready state:

:|  # func                 -12	  0.894  top_half_isr+0x10
[tut02_skeleton_drv] (xnintr_irq_handler+0x158)
:|  # func                 -11+   1.251  rtdm_event_signal+0x14
(top_half_isr+0x2c [tut02_skeleton_drv])
:|  # func                 -10+   1.860  xnsynch_flush+0x14
(rtdm_event_signal+0x13c)

Raising the debug flag we added to the test driver right after
simple_rtdm_read() shuts down the timing interrupt source instead, then
looking at the generated traces, may confirm this assumption.

Wouldn't this change make sense in the user code?

--- attachment-0001.c~	2015-02-25 09:17:43.496445640 +0100
+++ attachment-0001.c	2015-02-26 11:43:09.795191765 +0100
@@ -70,8 +70,8 @@
 	}

         rt_dev_write(device, "h", 2);
-        size = rt_dev_read (device, (void *)buf, 1024);
         sleep(1);
+        size = rt_dev_read (device, (void *)buf, 1024);
         printf("%s: '%s'\n", __func__, buf);


-- 
Philippe.



* Re: [Xenomai] interrupt service
  2015-02-26 11:19                     ` Philippe Gerum
@ 2015-02-26 16:38                       ` Lowell Gilbert
  2015-02-26 17:26                         ` Gilles Chanteperdrix
  2015-02-26 17:56                         ` Philippe Gerum
  0 siblings, 2 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-26 16:38 UTC (permalink / raw)
  To: xenomai

Philippe Gerum <rpm@xenomai.org> writes:

> Your test code in user-space seems to enable the timing IRQ only for a
> very short time, between the write and read calls to the driver which
> happen in sequence.
>
> Those two interrupts are very close in the time line, i.e. 42 us:
>
> :|   +begin   0x90000000   -68	  0.960  __irq_svc+0x44
> (ipipe_unstall_root+0x88)
> ...
> :|  + begin   0x90000000   -26+   1.006  __irq_svc+0x44
> (__ipipe_restore_head+0xec)

My actual interrupts are (typically) 10 us apart. There are some
calculations that need to be made before writing the results into
hardware. Until now, my code had been doing all of that work in the ISR
itself to meet this timing requirement, but the non-real-time part of
the application was unable to keep up with filling the FIFOs feeding the
real-time portion, presumably because so much time was being spent with
interrupts disabled.

The CPU is a dual-core ARM (Cortex-A9), of which I have reserved one core
for the real-time operations.

> The trace also reveals a proper wake up and rescheduling sequence from
> timestamp -51 and on:
>
> :|  # func                 -51+   1.178  top_half_isr+0x10
> [tut02_skeleton_drv] (xnintr_irq_handler+0x158)~
> :|  # func                 -50+   1.920  rtdm_event_signal+0x14
> (top_half_isr+0x2c [tut02_skeleton_drv])
> :|  # func                 -48+   2.139  xnsynch_flush+0x14
> (rtdm_event_signal+0x13c)
> :|  # func                 -46+   1.324  xnpod_resume_thread+0x14
> (xnsynch_flush+0x178)
> :|  # [    0] -<?>-    0   -44+   3.774  xnpod_resume_thread+0x140
> (xnsynch_flush+0x178)
>
> <snip>
>
> :|  # func                 -32+   1.523  __xnpod_schedule+0x14
> (xnintr_irq_handler+0x3a4)
> :|  # [    0] -<?>-   -1   -30	  0.953  __xnpod_schedule+0x1d4
> (xnintr_irq_handler+0x3a4)
>
> Then a second interrupt, which happens before the RTDM task had a chance
> to block on rtdm_event_wait() again, therefore no rescheduling has to
> take place since the RTDM task is still in ready state:
>
> :|  # func                 -12	  0.894  top_half_isr+0x10
> [tut02_skeleton_drv] (xnintr_irq_handler+0x158)
> :|  # func                 -11+   1.251  rtdm_event_signal+0x14
> (top_half_isr+0x2c [tut02_skeleton_drv])
> :|  # func                 -10+   1.860  xnsynch_flush+0x14
> (rtdm_event_signal+0x13c)
>
> Raising the debug flag we have added in the test driver right after
> simple_rtdm_read() shuts down the timing interrupt source instead, may
> confirm this assumption by looking at the generated traces.

Not quite; there were actually hundreds of interrupts (and corresponding
calls to the top half routine) registered in the short period of time
that the interrupt was enabled.

Having the task busy-wait (for example, with a spinlock) would be
reasonable, but in that case I might as well mask the interrupt entirely
and poll the hardware status. Perhaps I should try that; it would at
least tell me whether locking out interrupts for too long is really my
underlying problem (or not).

> Wouldn't this change make sense in the user code?
>
> --- attachment-0001.c~	2015-02-25 09:17:43.496445640 +0100
> +++ attachment-0001.c	2015-02-26 11:43:09.795191765 +0100
> @@ -70,8 +70,8 @@
>  	}
>
>          rt_dev_write(device, "h", 2);
> -        size = rt_dev_read (device, (void *)buf, 1024);
>          sleep(1);
> +        size = rt_dev_read (device, (void *)buf, 1024);
>          printf("%s: '%s'\n", __func__, buf);

That was the original code for that routine, in fact. The length of the
delay doesn't matter; I never see *any* wakeups of the task.

I had used a counting semaphore (to account for possibly missed
interrupts) in an earlier version of this code before changing it to an
event when I found that the semaphore didn't work. I also tried a direct
call to rtdm_task_unblock(), and that failed also.

Be well.
        Lowell




* Re: [Xenomai] interrupt service
  2015-02-26 16:38                       ` Lowell Gilbert
@ 2015-02-26 17:26                         ` Gilles Chanteperdrix
  2015-02-26 17:56                         ` Philippe Gerum
  1 sibling, 0 replies; 34+ messages in thread
From: Gilles Chanteperdrix @ 2015-02-26 17:26 UTC (permalink / raw)
  To: Lowell Gilbert; +Cc: xenomai

On Thu, Feb 26, 2015 at 11:38:30AM -0500, Lowell Gilbert wrote:
> I had used a counting semaphore (to account for possibly missed
> interrupts) in an earlier version of this code before changing it to an
> event when I found that the semaphore didn't work. I also tried a direct
> call to rtdm_task_unblock(), and that failed also.

FWIW rtdm signals work like semaphores.

-- 
					    Gilles.



* Re: [Xenomai] interrupt service
  2015-02-26 16:38                       ` Lowell Gilbert
  2015-02-26 17:26                         ` Gilles Chanteperdrix
@ 2015-02-26 17:56                         ` Philippe Gerum
  2015-02-26 19:25                           ` Lowell Gilbert
  1 sibling, 1 reply; 34+ messages in thread
From: Philippe Gerum @ 2015-02-26 17:56 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 02/26/2015 05:38 PM, Lowell Gilbert wrote:
> Philippe Gerum <rpm@xenomai.org> writes:
> 
>> Your test code in user-space seems to enable the timing IRQ only for a
>> very short time, between the write and read calls to the driver which
>> happen in sequence.
>>
>> Those two interrupts are very close in the time line, i.e. 42 us:
>>
>> :|   +begin   0x90000000   -68	  0.960  __irq_svc+0x44
>> (ipipe_unstall_root+0x88)
>> ...
>> :|  + begin   0x90000000   -26+   1.006  __irq_svc+0x44
>> (__ipipe_restore_head+0xec)
> 
> My actual interrupts are (typically) 10 us apart. There are some

Do you intend to run an interrupt-driven work loop at 100 kHz on your
A9-based, dual-core board? If so, your system is most likely handling
too many interrupts on the CPU running the RTDM task, preventing that
task from running and therefore from incrementing the counter.

Assuming the interrupt controller for your SoC is a GIC, unless you
explicitly set the IRQ affinity, the GIC distributor will dispatch your
timing IRQ to CPU0 by default, like all other SPIs.

The new task will be pinned to the CPU running rtdm_task_init() by
default, which is likely CPU0 as well.

To check this, I would set the global Xenomai affinity to CPU1 before
starting the test, so that your driver task ends up there.

# echo 2 > /proc/xenomai/affinity

At least you would have the timing IRQ and the task on different CPUs,
leaving some cycles to the latter. That said, 10 us between timer shots
is really too fast.

> I had used a counting semaphore (to account for possibly missed
> interrupts) in an earlier version of this code before changing it to an
> event when I found that the semaphore didn't work. I also tried a direct
> call to rtdm_task_unblock(), and that failed also.
> 

If you look at ksrc/drivers/testing/timerbench.c, you will see a typical
use of rtdm events with ISRs, this driver is used when running
latency -t2 for instance. I'm convinced the RTDM event API is not the issue.

-- 
Philippe.



* Re: [Xenomai] interrupt service
  2015-02-26 17:56                         ` Philippe Gerum
@ 2015-02-26 19:25                           ` Lowell Gilbert
  2015-02-26 20:11                             ` Gilles Chanteperdrix
  2015-02-26 20:24                             ` Philippe Gerum
  0 siblings, 2 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-26 19:25 UTC (permalink / raw)
  To: xenomai

Philippe Gerum <rpm@xenomai.org> writes:

> On 02/26/2015 05:38 PM, Lowell Gilbert wrote:
> The new task will be pinned to the CPU running rtdm_task_init() by
> default, which is likely CPU0 as well.
>
> To check this, I would set the global Xenomai affinity to CPU1 before
> starting the test, so that your driver task ends up there.
>
> # echo 2 > /proc/xenomai/affinity

Yes, I initialize that already. And give "isolcpus=1" to the kernel so
that Linux will not schedule anything else on CPU1.

> At least you would have the timing IRQ and the task on a different CPU,
> leaving some cycles to the latter. This said, 10 us between timer shots
> is really too fast.

Having enough cycles for this isn't my fundamental problem. Running
everything in the ISR has no trouble keeping up with the 100kHz data
flow. The problem comes in a *non* real-time task, which is pulling data
in from an IP socket and pushing it into a queue for the real-time code
to use synchronously.

If I could run bare-metal on the second CPU, I would have done so.
The real-time behaviour is easily characterized, and the periodic work
can safely be done in 10 us even if all of the data has to be fetched
from external memory.

>> I had used a counting semaphore (to account for possibly missed
>> interrupts) in an earlier version of this code before changing it to an
>> event when I found that the semaphore didn't work. I also tried a direct
>> call to rtdm_task_unblock(), and that failed also.

> If you look at ksrc/drivers/testing/timerbench.c, you will see a typical
> use of rtdm events with ISRs, this driver is used when running
> latency -t2 for instance. I'm convinced the RTDM event API is not the issue.

I think you meant irqbench.c. And yes, I also am quite sure that the
event API is behaving fine.

I think I have two options to investigate. One is to do all of my work
in the ISR, but to somehow re-enable enough interrupts to keep CPU0
doing useful work while the ISR is running on CPU1. The other is to poll
the hardware state rather than using the interrupt.  Do you see anything
else I could do?

Thanks again.

Be well.



* Re: [Xenomai] interrupt service
  2015-02-26 19:25                           ` Lowell Gilbert
@ 2015-02-26 20:11                             ` Gilles Chanteperdrix
  2015-02-26 21:58                               ` Lowell Gilbert
                                                 ` (3 more replies)
  2015-02-26 20:24                             ` Philippe Gerum
  1 sibling, 4 replies; 34+ messages in thread
From: Gilles Chanteperdrix @ 2015-02-26 20:11 UTC (permalink / raw)
  To: Lowell Gilbert; +Cc: xenomai

On Thu, Feb 26, 2015 at 02:25:13PM -0500, Lowell Gilbert wrote:
> Philippe Gerum <rpm@xenomai.org> writes:
> 
> > On 02/26/2015 05:38 PM, Lowell Gilbert wrote:
> > The new task will be pinned to the CPU running rtdm_task_init() by
> > default, which is likely CPU0 as well.
> >
> > To check this, I would set the global Xenomai affinity to CPU1 before
> > starting the test, so that your driver task ends up there.
> >
> > # echo 2 > /proc/xenomai/affinity
> 
> Yes, I initialize that already. And give "isolcpus=1" to the kernel so
> that Linux will not schedule anything else on CPU1.
> 
> > At least you would have the timing IRQ and the task on a different CPU,
> > leaving some cycles to the latter. This said, 10 us between timer shots
> > is really too fast.
> 
> Having enough cycles for this isn't my fundamental problem. Running
> everything in the ISR has no trouble keeping up with the 100kHz data
> flow. The problem comes in a *non* real-time task, which is pulling data
> in from an IP socket and pushing it into a queue for the real-time code
> to use synchronously.
> 
> If I could run bare-metal on the second CPU, I would have done so.
> The real-time behaviour is easily characterized, and the periodic work
> can safely be done in 10 us even if all of the data has to be fetched
> from external memory.

Consuming all the available time running ISRs is not normal for OSes like
Linux and Xenomai. Being able to run the ISR in less than 10us does
not mean that there is some time left for the rest of the system;
there is quite some code executed around the ISR, and at this
frequency it stops being negligible. Linux at least needs to run
from time to time for timekeeping. If you want to execute something
at this frequency, maybe you could consider using an FIQ. FIQs
have lower overhead.

So, to be clear, does the ISR run on CPU0 and the thread doing the
reads run on CPU1? If not, does it work if you do it that way? To
know whether the problem comes from the interrupt consuming all the
available time, simply create a periodic task, in addition to the
ISR, with a high priority, and see if it executes from time to time
to increment a counter. If it does not execute, then we have proof
that the ISR is not letting anything else run.
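
Something along these lines, for instance (a rough sketch; the period,
names and priority are arbitrary):

static rtdm_task_t heartbeat_task;
static unsigned long heartbeats;	/* dump via printk or your ioctl path */

static void heartbeat_proc(void *arg)
{
	int err;

	for (;;) {
		/* Period is set in rtdm_task_init() below (1 ms here). */
		err = rtdm_task_wait_period();
		if (err && err != -ETIMEDOUT)
			break;		/* task unblocked or destroyed */
		heartbeats++;		/* overruns still count as "alive" */
	}
}

static int start_heartbeat(void)
{
	return rtdm_task_init(&heartbeat_task, "heartbeat", heartbeat_proc,
			      NULL, RTDM_TASK_HIGHEST_PRIORITY, 1000000);
}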

Another problem may be in the handling of /proc/xenomai/affinity, so
could you try without using it? Same for isolcpus. If the ISR runs
on cpu0 and the tasks run on cpu1, an IPI should be sent in
__xnpod_schedule to wake up the task blocked in read; you can check
whether the IPI is sent by using ipipe_trace_special, for instance,
and checking the tracer output.

> 
> >> I had used a counting semaphore (to account for possibly missed
> >> interrupts) in an earlier version of this code before changing it to an
> >> event when I found that the semaphore didn't work. I also tried a direct
> >> call to rtdm_task_unblock(), and that failed also.
> 
> > If you look at ksrc/drivers/testing/timerbench.c, you will see a typical
> > use of rtdm events with ISRs, this driver is used when running
> > latency -t2 for instance. I'm convinced the RTDM event API is not the issue.
> 
> I think you meant irqbench.c. And yes, I also am quite sure that the
> event API is behaving fine.

No, irqbench is not used as commonly as the latency test. timerbench,
used by the latency test, uses RTDM events.

-- 
					    Gilles.



* Re: [Xenomai] interrupt service
  2015-02-26 19:25                           ` Lowell Gilbert
  2015-02-26 20:11                             ` Gilles Chanteperdrix
@ 2015-02-26 20:24                             ` Philippe Gerum
  2015-02-26 22:55                               ` Lowell Gilbert
  1 sibling, 1 reply; 34+ messages in thread
From: Philippe Gerum @ 2015-02-26 20:24 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 02/26/2015 08:25 PM, Lowell Gilbert wrote:
> Philippe Gerum <rpm@xenomai.org> writes:
> 
>> On 02/26/2015 05:38 PM, Lowell Gilbert wrote:
>> The new task will be pinned to the CPU running rtdm_task_init() by
>> default, which is likely CPU0 as well.
>>
>> To check this, I would set the global Xenomai affinity to CPU1 before
>> starting the test, so that your driver task ends up there.
>>
>> # echo 2 > /proc/xenomai/affinity
> 
> Yes, I initialize that already. And give "isolcpus=1" to the kernel so
> that Linux will not schedule anything else on CPU1.
> 
>> At least you would have the timing IRQ and the task on a different CPU,
>> leaving some cycles to the latter. This said, 10 us between timer shots
>> is really too fast.
> 
> Having enough cycles for this isn't my fundamental problem. Running
> everything in the ISR has no trouble keeping up with the 100kHz data
> flow. The problem comes in a *non* real-time task, which is pulling data
> in from an IP socket and pushing it into a queue for the real-time code
> to use synchronously.

Could you determine whether the bottleneck is due to the IP stack being
starved of incoming packets? Or is the contention observed between
the non-RT task and the real-time code consuming it?

Since ethernet IRQs are of the SPI kind, the IP stack is likely
executing on CPU0; assuming that the driver (FEC?) is NAPI-enabled,
the packet processing takes place in softirq context on the
same CPU.

Btw, you mentioned a queue as the IPC between both threads. Which kind
of queue/IPC is it?

> 
> If I could run bare-metal on the second CPU, I would have done so.
> The real-time behaviour is easily characterized, and the periodic work
> can safely be done in 10 us even if all of the data has to be fetched
> from external memory.
>

This is what bothers me. The CPU running the ISR code is likely unable
to handle any regular linux activity in this case.

>>> I had used a counting semaphore (to account for possibly missed
>>> interrupts) in an earlier version of this code before changing it to an
>>> event when I found that the semaphore didn't work. I also tried a direct
>>> call to rtdm_task_unblock(), and that failed also.
> 
>> If you look at ksrc/drivers/testing/timerbench.c, you will see a typical
>> use of rtdm events with ISRs, this driver is used when running
>> latency -t2 for instance. I'm convinced the RTDM event API is not the issue.
> 
> I think you meant irqbench.c. And yes, I also am quite sure that the
> event API is behaving fine.
>

I really meant timerbench.c. The only difference is the use of the pulse
instead of signal interface, which makes no difference internally.

> I think I have two options to investigate. One is to do all of my work
> in the ISR, but to somehow re-enable enough interrupts to keep CPU0
> doing useful work while the ISR is running on CPU1.

If I understand this correctly, interrupt masking may not be the issue
on CPU0; I'd rather think that CPU1 is spending too much time in
real-time activity, preventing the regular kernel from properly
synchronizing SMP-wise between CPUs.

E.g. IPIs won't flow to CPU1; if some regular linux activity gets
preempted by the ISR while holding a spinlock, CPU0 could contend on
that lock for as long as the ISR work keeps running on CPU1, and so on.

I'm likely missing important points about your application, but
generally speaking, the regular kernel is not going to be that happy if
one of the CPUs involved in the SMP architecture is not responsive enough.

> The other is to poll
> the hardware state rather than using the interrupt.  Do you see anything
> else I could do?
> 

Nothing that would use the general-purpose CPUs shared with the regular
kernel. I guess that option #2 might be the one with the most chance of
success.

-- 
Philippe.



* Re: [Xenomai] interrupt service
  2015-02-26 20:11                             ` Gilles Chanteperdrix
@ 2015-02-26 21:58                               ` Lowell Gilbert
  2015-02-26 22:37                                 ` Gilles Chanteperdrix
  2015-02-26 23:09                               ` Philippe Gerum
                                                 ` (2 subsequent siblings)
  3 siblings, 1 reply; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-26 21:58 UTC (permalink / raw)
  To: xenomai

Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:

> On Thu, Feb 26, 2015 at 02:25:13PM -0500, Lowell Gilbert wrote:
>> Philippe Gerum <rpm@xenomai.org> writes:
>> 
>> > On 02/26/2015 05:38 PM, Lowell Gilbert wrote:
>> > The new task will be pinned to the CPU running rtdm_task_init() by
>> > default, which is likely CPU0 as well.
>> >
>> > To check this, I would set the global Xenomai affinity to CPU1 before
>> > starting the test, so that your driver task ends up there.
>> >
>> > # echo 2 > /proc/xenomai/affinity
>> 
>> Yes, I initialize that already. And give "isolcpus=1" to the kernel so
>> that Linux will not schedule anything else on CPU1.
>> 
>> > At least you would have the timing IRQ and the task on a different CPU,
>> > leaving some cycles to the latter. This said, 10 us between timer shots
>> > is really too fast.
>> 
>> Having enough cycles for this isn't my fundamental problem. Running
>> everything in the ISR has no trouble keeping up with the 100kHz data
>> flow. The problem comes in a *non* real-time task, which is pulling data
>> in from an IP socket and pushing it into a queue for the real-time code
>> to use synchronously.
>> 
>> If I could run bare-metal on the second CPU, I would have done so.
>> The real-time behaviour is easily characterized, and the periodic work
>> can safely be done in 10 us even if all of the data has to be fetched
>> from external memory.
>
> Consuming all the time for running ISRs is not normal for OSes like
> Linux and Xenomai. Being able to run the ISR in less than 10us does
> not mean that there is some time left for the rest of the system;
> there is quite some code executed around the ISR, and at this
> frequency it stops being negligible. Linux at least needs to run
> from time to time for time keeping. If you want to execute something
> with this frequency, maybe you could consider using an FIQ. FIQs
> have a lower overhead.

Sure. In a static test, with the ISR being the only thing assigned to
the isolated CPU, an idle task does get run time.

An FIQ doesn't really make sense if, as I suspect, my problem is that
the system is spending too much time with interrupts disabled.

> So, to be clear, does the ISR run on CPU0 and the thread doing the
> reads run on CPU1? If no, does it work if you do it that way? To

The other way around, but that's pretty much what I have set up. CPU0
handles work that doesn't need to be real-time, and CPU1 handles only
the real-time work.  In particular, CPU0 reads a stream of incoming data
from a TCP socket, processes it slightly, and writes it into kernel
memory.

If I preload several seconds worth of data before enabling the
interrupt, the ISR is able to keep up with the workload while a busy-
loop task gets roughly 15-20% of the cycles (on CPU1). On the other
hand, if I delay starting the ISR, my (non-real-time, or at least not
necessarily real time) network task (on CPU0) can load data -- at a
speed several times that which the data will be consumed at. But once I
release the ISR, the network task can't keep up, and the ISR eventually
runs dry.

> know whether the problem comes from the interrupt consuming all the
> available time, simply create a periodic task, in addition to the
> ISR, with a high priority, and see if it executes from time to time
> to increment a counter. If it does not execute, then we have a proof
> that the ISR is not letting anything else run.

I did this with a low-priority task, and with nothing but it, the ISR,
and Linux housekeeping running on CPU1, it did get some time.

> Another problem may be in handling the /proc/xenomai/affinity, so
> could you try without using it? Same for isolcpus.

I can certainly try that, but what should I be looking for? 

> could you try without using it? Same for isolcpus. If the ISR runs
> on cpu0 and the tasks run on cpu1, an IPI should be sent in
> __xnpod_schedule to wake up the task blocked in read, you can check
> whether the IPI is sent by using ipipe_trace_special for instance
> and checking the tracer trace.

Ah, thank you. That may be very helpful.

Be well.



* Re: [Xenomai] interrupt service
  2015-02-26 21:58                               ` Lowell Gilbert
@ 2015-02-26 22:37                                 ` Gilles Chanteperdrix
  2015-02-26 23:12                                   ` Lowell Gilbert
  0 siblings, 1 reply; 34+ messages in thread
From: Gilles Chanteperdrix @ 2015-02-26 22:37 UTC (permalink / raw)
  To: Lowell Gilbert; +Cc: xenomai

On Thu, Feb 26, 2015 at 04:58:06PM -0500, Lowell Gilbert wrote:
> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
> > So, to be clear, does the ISR run on CPU0 and the thread doing the
> > reads run on CPU1? If no, does it work if you do it that way? To
> 
> The other way around, but that's pretty much what I have set up. CPU0
> handles work that doesn't need to be real-time, and CPU1 handles only
> the real-time work.

That is the part I do not understand. Since the read method is
real-time as well as the ISR, you mean you run them on the same cpu?
If yes, have you tried running them on different cpus ?

-- 
					    Gilles.



* Re: [Xenomai] interrupt service
  2015-02-26 20:24                             ` Philippe Gerum
@ 2015-02-26 22:55                               ` Lowell Gilbert
  2015-02-26 23:17                                 ` Daniele Nicolodi
                                                   ` (2 more replies)
  0 siblings, 3 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-26 22:55 UTC (permalink / raw)
  To: xenomai

Philippe Gerum <rpm@xenomai.org> writes:

> On 02/26/2015 08:25 PM, Lowell Gilbert wrote:
>> Philippe Gerum <rpm@xenomai.org> writes:
>> 
>>> On 02/26/2015 05:38 PM, Lowell Gilbert wrote:
>>> The new task will be pinned to the CPU running rtdm_task_init() by
>>> default, which is likely CPU0 as well.
>>>
>>> To check this, I would set the global Xenomai affinity to CPU1 before
>>> starting the test, so that your driver task ends up there.
>>>
>>> # echo 2 > /proc/xenomai/affinity
>> 
>> Yes, I initialize that already. And give "isolcpus=1" to the kernel so
>> that Linux will not schedule anything else on CPU1.
>> 
>>> At least you would have the timing IRQ and the task on a different CPU,
>>> leaving some cycles to the latter. This said, 10 us between timer shots
>>> is really too fast.
>> 
>> Having enough cycles for this isn't my fundamental problem. Running
>> everything in the ISR has no trouble keeping up with the 100kHz data
>> flow. The problem comes in a *non* real-time task, which is pulling data
>> in from an IP socket and pushing it into a queue for the real-time code
>> to use synchronously.
>
> Could you determine whether the bottleneck is due to the IP stack being
> starved from incoming packets? Or, is the contention observed between
> the non rt task and the real-time code consuming it?

The socket builds up a large backlog in its receive buffers, so the data
is certainly arriving in RAM but not being processed.

I am assuming that the contention is between the producer and the
consumer. I haven't definitively proven it, but because they are (a) on
different cores and (b) each runs with more than sufficient speed if the
other doesn't, I'm fairly sure.

> Since ethernet IRQs are of the SPI kind, the IP stack is likely
> executing over CPU0, assuming that the driver (FEC?) is NAPI-enabled,
> the packet processing takes place on behalf of a softirq context, on the
> same CPU.

That's why I pushed Xenomai onto CPU1.

> Btw, you mentioned a queue as the IPC between both threads. Which kind
> of queue/IPC is it?

It's a ring buffer utilizing shared memory. I use it much like a kfifo,
but I didn't want to incur the overhead of a system call on every write.
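
Roughly this shape, simplified (a single-producer/single-consumer sketch,
not the actual code; sizes, types and the barrier primitive are
illustrative):

#include <stdint.h>

#define RING_SIZE 4096			/* power of two */

struct ring {
	volatile uint32_t head;		/* only written by the producer (CPU0) */
	volatile uint32_t tail;		/* only written by the consumer (CPU1) */
	uint32_t data[RING_SIZE];
};

/* Producer side: returns 0 on success, -1 if the ring is full. */
static int ring_put(struct ring *r, uint32_t v)
{
	uint32_t head = r->head;

	if (head - r->tail >= RING_SIZE)
		return -1;
	r->data[head & (RING_SIZE - 1)] = v;
	__sync_synchronize();		/* publish the data before the index */
	r->head = head + 1;
	return 0;
}

/* Consumer side: returns 0 on success, -1 if the ring is empty. */
static int ring_get(struct ring *r, uint32_t *v)
{
	uint32_t tail = r->tail;

	if (tail == r->head)
		return -1;
	*v = r->data[tail & (RING_SIZE - 1)];
	__sync_synchronize();		/* read the data before releasing the slot */
	r->tail = tail + 1;
	return 0;
}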

>> If I could run bare-metal on the second CPU, I would have done so.
>> The real-time behaviour is easily characterized, and the periodic work
>> can safely be done in 10 us even if all of the data has to be fetched
>> from external memory.
>>
>
> This is what bothers me. The CPU running the ISR code is likely unable
> to handle any regular linux activity in this case.

There isn't any useful Linux activity happening there. I just need to
keep it from interfering much with the *other* CPU.

>>> If you look at ksrc/drivers/testing/timerbench.c, you will see a typical
>>> use of rtdm events with ISRs, this driver is used when running
>>> latency -t2 for instance. I'm convinced the RTDM event API is not the issue.
>> 
>> I think you meant irqbench.c. And yes, I also am quite sure that the
>> event API is behaving fine.
>>
>
> I really meant timerbench.c. The only difference is the use of the pulse
> instead of signal interface, which makes no difference internally.

I see. I was confused because timerbench.c doesn't use IRQs directly.

I know that the event interface works for me, because I wrote a test
which occasionally sent the events from the context of a userspace
thread, and those events were picked up by the kernel task. But the same
events were being continuously sent by the ISR, and those were never
picked up by the kernel task.

To make sure I'm being clear: I had a system where all of the real-time
work was happening on CPU1 in the ISR, using data fed to it by a task
running on CPU0. This left the real-time work starved for data at times.
I theorized that CPU0 might be having trouble with the amount of time
CPU1 was spending in the ISR (with interrupts turned off); to account
for this, I introduced a new kernel task on CPU1, to do the
data-handling that had previously been in the ISR. The ISR would now
have nothing to do except wake up the kernel task. And it doesn't seem
to do that.

>> I think I have two options to investigate. One is to do all of my work
>> in the ISR, but to somehow re-enable enough interrupts to keep CPU0
>> doing useful work while the ISR is running on CPU1.
>
> If I understand this correctly, interrupt masking may not be the issue
> on CPU0, I'd rather think that CPU1 is spending too much time in
> real-time activity, preventing the regular kernel to properly
> synchronize SMP-wise between CPUs.
>
> e.g. IPIs won't flow to CPU1; if some regular linux activity gets
> preempted by the ISR while holding a spinlock, CPU0 could contend on
> that lock for as long as the ISR work keeps running on CPU1, and so on.
>
> I'm likely missing important points about your application, but
> generally speaking, the regular kernel is not going to be that happy if
> one of the CPUs involved in the SMP architecture is not responsive enough.

I really wouldn't mind letting the kernel get time more frequently; I just
need to be sure that my real time task gets at least 8 microseconds out
of every fifteen. Would I be able to guarantee that in a polled
architecture? What would be involved: short sleeps from a high-priority
realtime task?
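
To make the question concrete, the polled variant I have in mind would
be shaped something like this (untested sketch; the period, priority and
do_periodic_work() are placeholders, and I'm assuming the no-argument
2.x rtdm_task_wait_period()):

#include <rtdm/rtdm_driver.h>

static rtdm_task_t poller;

static void do_periodic_work(void)
{
    /* placeholder: ~8 us of work polling the hardware registers */
}

static void poll_proc(void *arg)
{
    for (;;) {
        do_periodic_work();

        /* non-zero on overrun (-ETIMEDOUT) or task deletion;
         * real code would distinguish the two */
        if (rtdm_task_wait_period())
            break;
    }
}

static int start_poller(void)
{
    /* last argument is the period, in nanoseconds (15 us) */
    return rtdm_task_init(&poller, "rt-poller", poll_proc, NULL,
                          RTDM_TASK_HIGHEST_PRIORITY, 15000);
}

Whatever is left of each period after do_periodic_work() returns would
then go back to Linux on that CPU.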

Thanks again.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-02-26 20:11                             ` Gilles Chanteperdrix
  2015-02-26 21:58                               ` Lowell Gilbert
@ 2015-02-26 23:09                               ` Philippe Gerum
  2015-03-06 22:57                               ` Lowell Gilbert
  2015-03-06 22:58                               ` Lowell Gilbert
  3 siblings, 0 replies; 34+ messages in thread
From: Philippe Gerum @ 2015-02-26 23:09 UTC (permalink / raw)
  To: Gilles Chanteperdrix, Lowell Gilbert; +Cc: xenomai

On 02/26/15 21:11, Gilles Chanteperdrix wrote:

> Another problem may be in handling the /proc/xenomai/affinity, so
> could you try without using it? Same for isolcpus. If the ISR runs
> on cpu0 and the tasks run on cpu1, an IPI should be sent in
> __xnpod_schedule to wake up the task blocked in read,

If the IRQ rate is 100 kHz, the number of sched IPIs won't match the
number of timing IRQs received, unless the RTDM task is able to return
quickly enough to a blocked state, pending on the event. That said,
sched IPIs will add to the overhead for sure.

Maybe the RTDM task should process the events generated by the ISR in
batches, if at all possible for the application logic.
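
Something along these lines, for illustration (pseudo-code only;
data_event and fetch_next_sample() are invented names standing for
whatever IPC and dequeue primitive your driver actually uses):

#include <rtdm/rtdm_driver.h>

static rtdm_event_t data_event;

static int fetch_next_sample(void);  /* dequeues one ISR-produced item */

static void worker_proc(void *arg)
{
    /* One event wakeup covers however many interrupts arrived while
     * the task was busy: drain the whole backlog before blocking
     * again, so the wakeup/IPI rate stays well below the IRQ rate. */
    while (rtdm_event_wait(&data_event) == 0) {
        while (fetch_next_sample() == 0)
            ;   /* keep going until the backlog is empty */
    }
}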

-- 
Philippe.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-02-26 22:37                                 ` Gilles Chanteperdrix
@ 2015-02-26 23:12                                   ` Lowell Gilbert
  0 siblings, 0 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-02-26 23:12 UTC (permalink / raw)
  To: xenomai

Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:

> On Thu, Feb 26, 2015 at 04:58:06PM -0500, Lowell Gilbert wrote:
>> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
>> > So, to be clear, does the ISR run on CPU0 and the thread doing the
>> > reads run on CPU1? If no, does it work if you do it that way? To
>> 
>> The other way around, but that's pretty much what I have set up. CPU0
>> handles work that doesn't need to be real-time, and CPU1 handles only
>> the real-time work.
>
> That is the part I do not understand. Since the read method is
> real-time as well as the ISR, you mean you run them on the same cpu?
> If yes, have you tried running them on different cpus ?

Sorry, my phrasing was quite misleading there. My network-reading task
is on CPU0 and the IRQ is on CPU1.

The network reading is a real-time task because it accesses the RTDM
device, but it doesn't have any latency requirements. I don't really
care whether it runs in primary mode, as long as it can manage better
than 2 megabytes per second of throughput. It has tens of megabytes of
buffering available to let it get ahead of the real-time work. Also, it
doesn't need to access the device very often. Instead of writing to the
device, it maps in a big chunk of kernel memory and writes to that.

Be well.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-02-26 22:55                               ` Lowell Gilbert
@ 2015-02-26 23:17                                 ` Daniele Nicolodi
  2015-02-26 23:21                                 ` Philippe Gerum
  2015-02-27  7:15                                 ` Tom Evans
  2 siblings, 0 replies; 34+ messages in thread
From: Daniele Nicolodi @ 2015-02-26 23:17 UTC (permalink / raw)
  To: xenomai

Hello Gilbert,

what I'm suggesting is maybe naive, but as the problem seems to originate
from the frequency of the interrupts, and since you have control over
the whole stack, can you try lowering the IRQ rate?

If things start to magically work at lower rates, at least you would
have a known working configuration on which to base optimization and
design decisions.

On 26/02/15 23:55, Lowell Gilbert wrote:
>> This is what bothers me. The CPU running the ISR code is likely unable
>> to handle any regular linux activity in this case.
> 
> There isn't any useful Linux activity happening there. I just need to
> keep it from interfering much with the *other* CPU.

This has been repeated many times now: even if the Linux kernel is not
doing anything useful, it needs to do some amount of housekeeping to
ensure that things keep working fine.

Cheers,
Daniele



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-02-26 22:55                               ` Lowell Gilbert
  2015-02-26 23:17                                 ` Daniele Nicolodi
@ 2015-02-26 23:21                                 ` Philippe Gerum
  2015-02-27  7:15                                 ` Tom Evans
  2 siblings, 0 replies; 34+ messages in thread
From: Philippe Gerum @ 2015-02-26 23:21 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 02/26/15 23:55, Lowell Gilbert wrote:

> I really wouldn't mind letting the kernel get time more frequently; I just
> need to be sure that my real time task gets at least 8 microseconds out
> of every fifteen. Would I be able to guarantee that in a polled
> architecture? What would be involved: short sleeps from a high-priority
> realtime task?
> 

Yes, I'd think so.

I don't have the context switch time figures in mind for a typical Cortex
A9 (assuming 1 GHz or close to that), but the main issue boils down to
cache pollution with a dual kernel system in this case. If the real-time
task sleeps/blocks, then linux may run and cause cache lines used by the
rt code to be evicted, raising the latency.

Now, with such a fast rate (assuming a 15 us period), there would not be
much time for linux to run and pollute the cache anyway. But this is all
so close to the limit that I can't say that linux could run reliably
either.

-- 
Philippe.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-02-26 22:55                               ` Lowell Gilbert
  2015-02-26 23:17                                 ` Daniele Nicolodi
  2015-02-26 23:21                                 ` Philippe Gerum
@ 2015-02-27  7:15                                 ` Tom Evans
  2 siblings, 0 replies; 34+ messages in thread
From: Tom Evans @ 2015-02-27  7:15 UTC (permalink / raw)
  To: Lowell Gilbert, xenomai

On 27/02/15 09:55, Lowell Gilbert wrote:
> There isn't any useful Linux activity happening there. I just need to
> keep it from interfering much with the *other* CPU.

You may wish to check this thread, where I remember a problem that may be related:

https://community.freescale.com/thread/328465

Points to:

http://www.arm.com/files/pdf/cachecoherencywhitepaper_6june2011.pdf

Contains:

     Measurements taken on a dual core Cortex-A9 at 1GHz
     with a 256K L2 cache showed that cache flushing can
     take of the order of 100us.

If one core flushes the L2 cache and the other core misses and tries to read
from the L2, then I think it can be stalled for that entire period. I seem to
remember from reading this that a cache flush can stall the other core even
when it doesn't need to access the L2.

A driver on the Linux core could lock the whole system up.

Tom



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-02-26 20:11                             ` Gilles Chanteperdrix
  2015-02-26 21:58                               ` Lowell Gilbert
  2015-02-26 23:09                               ` Philippe Gerum
@ 2015-03-06 22:57                               ` Lowell Gilbert
  2015-03-06 22:58                               ` Lowell Gilbert
  3 siblings, 0 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-03-06 22:57 UTC (permalink / raw)
  To: xenomai

Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:

> Another problem may be in handling the /proc/xenomai/affinity, so
> could you try without using it? Same for isolcpus. If the ISR runs
> on cpu0 and the tasks run on cpu1, an IPI should be sent in
> __xnpod_schedule to wake up the task blocked in read, you can check
> whether the IPI is sent by using ipipe_trace_special for instance
> and checking the tracer trace.

How would I get a kernel task to run on a specific CPU with using
/proc/xenomai/affinity? rtdm_task_init() specifically calls out
ALL_CPUS.

Yes, I could change that code, but it seems like the sort of thing that
exists -- I just can't find it.

Be well.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-02-26 20:11                             ` Gilles Chanteperdrix
                                                 ` (2 preceding siblings ...)
  2015-03-06 22:57                               ` Lowell Gilbert
@ 2015-03-06 22:58                               ` Lowell Gilbert
  2015-03-08 15:52                                 ` Gilles Chanteperdrix
  3 siblings, 1 reply; 34+ messages in thread
From: Lowell Gilbert @ 2015-03-06 22:58 UTC (permalink / raw)
  To: xenomai

I meant *without* using /proc/xenomai/affinity. Fixed below:

Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:

> Another problem may be in handling the /proc/xenomai/affinity, so
> could you try without using it? Same for isolcpus. If the ISR runs
> on cpu0 and the tasks run on cpu1, an IPI should be sent in
> __xnpod_schedule to wake up the task blocked in read, you can check
> whether the IPI is sent by using ipipe_trace_special for instance
> and checking the tracer trace.

How would I get a kernel task to run on a specific CPU without using
/proc/xenomai/affinity? rtdm_task_init() specifically calls out
ALL_CPUS.

Yes, I could change that code, but it seems like the sort of thing that
exists -- I just can't find it.

Be well.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-03-06 22:58                               ` Lowell Gilbert
@ 2015-03-08 15:52                                 ` Gilles Chanteperdrix
  2015-03-09 13:28                                   ` Lowell Gilbert
  0 siblings, 1 reply; 34+ messages in thread
From: Gilles Chanteperdrix @ 2015-03-08 15:52 UTC (permalink / raw)
  To: Lowell Gilbert; +Cc: xenomai

On Fri, Mar 06, 2015 at 05:58:25PM -0500, Lowell Gilbert wrote:
> I meant *without* using /proc/xenomai/affinity. Fixed below:
> 
> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
> 
> > Another problem may be in handling the /proc/xenomai/affinity, so
> > could you try without using it? Same for isolcpus. If the ISR runs
> > on cpu0 and the tasks run on cpu1, an IPI should be sent in
> > __xnpod_schedule to wake up the task blocked in read, you can check
> > whether the IPI is sent by using ipipe_trace_special for instance
> > and checking the tracer trace.
> 
> How would I get a kernel task to run on a specific CPU without using
> /proc/xenomai/affinity? rtdm_task_init() specifically calls out
> ALL_CPUS.
> 
> Yes, I could change that code, but it seems like the sort of thing that
> exists -- I just can't find it.

The idea is to stop forcing the task to run on a specific CPU to see
whether the problem you observe comes from there.

-- 
					    Gilles.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [Xenomai] interrupt service
  2015-03-08 15:52                                 ` Gilles Chanteperdrix
@ 2015-03-09 13:28                                   ` Lowell Gilbert
  0 siblings, 0 replies; 34+ messages in thread
From: Lowell Gilbert @ 2015-03-09 13:28 UTC (permalink / raw)
  To: Gilles Chanteperdrix; +Cc: xenomai

Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:

> On Fri, Mar 06, 2015 at 05:58:25PM -0500, Lowell Gilbert wrote:
>> I meant *without* using /proc/xenomai/affinity. Fixed below:
>> 
>> Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> writes:
>> 
>> > Another problem may be in handling the /proc/xenomai/affinity, so
>> > could you try without using it? Same for isolcpus. If the ISR runs
>> > on cpu0 and the tasks run on cpu1, an IPI should be sent in
>> > __xnpod_schedule to wake up the task blocked in read, you can check
>> > whether the IPI is sent by using ipipe_trace_special for instance
>> > and checking the tracer trace.
>> 
>> How would I get a kernel task to run on a specific CPU without using
>> /proc/xenomai/affinity? rtdm_task_init() specifically calls out
>> ALL_CPUS.
>> 
>> Yes, I could change that code, but it seems like the sort of thing that
>> exists -- I just can't find it.
>
> The idea is to stop forcing the task to run on a specific CPU to see
> whether the problem you observe comes from there.

I guess I read more into your statement than was intended.

Yes, I have done that, and the behaviour is similar.

Be well.


^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2015-03-09 13:28 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-02-18 22:03 [Xenomai] interrupt service Lowell Gilbert
2015-02-18 22:08 ` Gilles Chanteperdrix
2015-02-19  4:44   ` Lowell Gilbert
2015-02-19 21:06     ` Lowell Gilbert
2015-02-20 19:38     ` Lowell Gilbert
2015-02-20 22:57       ` Gilles Chanteperdrix
2015-02-24 23:01         ` Lowell Gilbert
2015-02-24 23:34           ` Gilles Chanteperdrix
2015-02-25 16:22             ` Lowell Gilbert
2015-02-25 17:34               ` Philippe Gerum
2015-02-25 18:35                 ` Philippe Gerum
2015-02-25 20:41                 ` Lowell Gilbert
2015-02-25 21:02                   ` Lowell Gilbert
2015-02-26 11:19                     ` Philippe Gerum
2015-02-26 16:38                       ` Lowell Gilbert
2015-02-26 17:26                         ` Gilles Chanteperdrix
2015-02-26 17:56                         ` Philippe Gerum
2015-02-26 19:25                           ` Lowell Gilbert
2015-02-26 20:11                             ` Gilles Chanteperdrix
2015-02-26 21:58                               ` Lowell Gilbert
2015-02-26 22:37                                 ` Gilles Chanteperdrix
2015-02-26 23:12                                   ` Lowell Gilbert
2015-02-26 23:09                               ` Philippe Gerum
2015-03-06 22:57                               ` Lowell Gilbert
2015-03-06 22:58                               ` Lowell Gilbert
2015-03-08 15:52                                 ` Gilles Chanteperdrix
2015-03-09 13:28                                   ` Lowell Gilbert
2015-02-26 20:24                             ` Philippe Gerum
2015-02-26 22:55                               ` Lowell Gilbert
2015-02-26 23:17                                 ` Daniele Nicolodi
2015-02-26 23:21                                 ` Philippe Gerum
2015-02-27  7:15                                 ` Tom Evans
2015-02-25  8:30           ` Philippe Gerum
2015-02-25  9:36             ` Philippe Gerum
