* [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
@ 2012-06-30 10:39 ali hagigat
  2012-06-30 10:44 ` Gilles Chanteperdrix
  0 siblings, 1 reply; 10+ messages in thread
From: ali hagigat @ 2012-06-30 10:39 UTC (permalink / raw)
  To: xenomai

Does this kernel config variable, CONFIG_XENO_OPT_TIMING_SCHEDLAT,
indicate the scheduling latency? If I specify 1 nanosecond, will all
real-time tasks be scheduled within 1 nanosecond?

That seems impossible!

Or is it the time from when a timer interrupt arrives and preempts a
real-time task until that task is scheduled again (is that the
definition of scheduling latency?). That definition does not seem right
either, because it cannot be 1 nanosecond; that is too fast to do
anything.

This configuration variable does not seem to have any documented range.



* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-06-30 10:39 [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT ali hagigat
@ 2012-06-30 10:44 ` Gilles Chanteperdrix
  2012-06-30 16:52   ` Christophe Blaess
  0 siblings, 1 reply; 10+ messages in thread
From: Gilles Chanteperdrix @ 2012-06-30 10:44 UTC (permalink / raw)
  To: ali hagigat; +Cc: xenomai

On 06/30/2012 12:39 PM, ali hagigat wrote:
> Does this kernel config variable, CONFIG_XENO_OPT_TIMING_SCHEDLAT,
> indicate the scheduling latency? If I specify 1 nanosecond, will all
> real-time tasks be scheduled within 1 nanosecond?
> 
> That seems impossible!
> 
> Or is it the time from when a timer interrupt arrives and preempts a
> real-time task until that task is scheduled again (is that the
> definition of scheduling latency?). That definition does not seem right
> either, because it cannot be 1 nanosecond; that is too fast to do
> anything.
> 
> This configuration variable does not seem to have any documented range.

CONFIG_XENO_OPT_TIMING_SCHEDLAT is the value of /proc/xenomai/latency at
boot time.

/proc/xenomai/latency is an estimate of the minimum scheduling latency
on your system. To know what to put there, you should do:

echo 0 > /proc/xenomai/latency

run the latency test under load for several hours, then

echo minimum_latency > /proc/xenomai/latency

The value written there is then subtracted from timer deadlines, so that
timers fire a little bit early to compensate for the time it takes to
return to user space.
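
To illustrate the idea, here is a purely conceptual sketch (made-up
names, not the actual nucleus code); sched_latency_ns stands for the
value read from /proc/xenomai/latency:

/* Conceptual sketch only -- not the real Xenomai nucleus code. */
#include <stdio.h>
#include <stdint.h>

static uint64_t sched_latency_ns = 2388;        /* calibrated gap, in ns */

/* Stub standing in for programming the hardware timer (absolute ns). */
static void program_hw_timer(uint64_t expiry_ns)
{
        printf("timer armed for %llu ns\n", (unsigned long long)expiry_ns);
}

static void arm_timer(uint64_t deadline_ns)
{
        /* Fire early by the estimated scheduling latency, so the woken
         * task reaches user space close to its real deadline. */
        uint64_t expiry = deadline_ns > sched_latency_ns ?
                deadline_ns - sched_latency_ns : 0;
        program_hw_timer(expiry);
}

int main(void)
{
        arm_timer(1000000);     /* a 1 ms deadline is armed ~2.4 us early */
        return 0;
}

The real anticipation is of course done inside the nucleus timer code;
the sketch only shows the principle.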

-- 
                                                                Gilles.



* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-06-30 10:44 ` Gilles Chanteperdrix
@ 2012-06-30 16:52   ` Christophe Blaess
  2012-06-30 17:04     ` Gilles Chanteperdrix
  0 siblings, 1 reply; 10+ messages in thread
From: Christophe Blaess @ 2012-06-30 16:52 UTC (permalink / raw)
  To: xenomai

On 30/06/2012 12:44, Gilles Chanteperdrix wrote:
>
>> /proc/xenomai/latency is an estimate of the minimum scheduling latency
>> on your system. To know what to put there, you should do:
>>
>> echo 0 > /proc/xenomai/latency
>>
>> run the latency test under load for several hours, then

Is it necessary to run the test under heavy load?
We're looking for the minimal latency, not the worst.

I thought that in this case a normal, even light load would be sufficient.

>>
>> echo minimum_latency > /proc/xenomai/latency
>>
>> The value written there is then subtracted from timer deadlines, so that
>> timers fire a little bit early to compensate for the time it takes to
>> return to user space.
>>





* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-06-30 16:52   ` Christophe Blaess
@ 2012-06-30 17:04     ` Gilles Chanteperdrix
  2012-07-01 19:00       ` Christophe Blaess
  0 siblings, 1 reply; 10+ messages in thread
From: Gilles Chanteperdrix @ 2012-06-30 17:04 UTC (permalink / raw)
  To: Christophe Blaess; +Cc: xenomai

On 06/30/2012 06:52 PM, Christophe Blaess wrote:
> On 30/06/2012 12:44, Gilles Chanteperdrix wrote:
>>
>>> /proc/xenomai/latency is an estimate of the minimum scheduling latency
>>> on your system. To know what to put there, you should do:
>>>
>>> echo 0 > /proc/xenomai/latency
>>>
>>> run the latency test under load for several hours, then
> 
> Is it necessary to run the test under heavy load?
> We're looking for the minimal latency, not the worst.
> 
> I thought that in this case a normal, even light load would be sufficient.

It is not obvious which path is the shortest. For instance, having to
wake up from a "wait for interrupt" state to handle the timer interrupt
may add some latency, so that path may not be the shortest one. Running
the test for a long time, with a lot of different activities, is an
empirical way of exercising many paths, so that the observed extremes
have a better chance of being close to the true extremes.

And anyway, you usually want to know the worst-case latency for your
system as well.

-- 
                                                                Gilles.



* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-06-30 17:04     ` Gilles Chanteperdrix
@ 2012-07-01 19:00       ` Christophe Blaess
  2012-07-01 19:09         ` Gilles Chanteperdrix
  2012-07-01 19:33         ` Gilles Chanteperdrix
  0 siblings, 2 replies; 10+ messages in thread
From: Christophe Blaess @ 2012-07-01 19:00 UTC (permalink / raw)
  Cc: xenomai

On 30/06/2012 19:04, Gilles Chanteperdrix wrote:
> On 06/30/2012 06:52 PM, Christophe Blaess wrote:
>> On 30/06/2012 12:44, Gilles Chanteperdrix wrote:
>>>
>>>> /proc/xenomai/latency is an estimate of the minimum scheduling latency
>>>> on your system. To know what to put there, you should do:
>>>>
>>>> echo 0 > /proc/xenomai/latency
>>>>
>>>> run the latency test under load for several hours, then
>> Is it necessary to run the test under heavy load?
>> We're looking for the minimal latency, not the worst.
>>
>> I thought that in this case a normal, even light load would be sufficient.
> It is not obvious which path is the shortest. For instance, having to
> wake up from a "wait for interrupt" state to handle the timer interrupt
> may add some latency, so that path may not be the shortest one. Running
> the test for a long time, with a lot of different activities, is an
> empirical way of exercising many paths, so that the observed extremes
> have a better chance of being close to the true extremes.
>
> And anyway, you usually want to know the worst-case latency for your
> system as well.
>

I ran two 6-hour latency tests on a Pandaboard, after echoing 0 into
/proc/xenomai/latency.

For the first one the system load was very light. I ran the test on the
second core, because most of the interrupts are processed on the first
core, so the latency process was almost alone.

# /usr/xenomai/bin/latency -p 100 -c 1 -T 21600
[...]
RTS|      2.388|      3.134|     18.706|       0|     0|    06:00:00/06:00:00


For the second test, the system was under a very heavy load. I used a
shell script very close to dohell and generated a lot of external
interrupts (ping flood, GPIO interrupts...). This time latency ran on the
first core and was subject to a lot of preemption by interrupt requests.

# /usr/xenomai/bin/latency -p 100 -c 0 -T 21600
[...]
RTS|      2.908|      7.797|     51.579|       0|     0|    06:00:00/06:00:00

For long tests I would recommend, as you said, alternating high- and
low-load phases to capture both the min and max values.


Apart from that, on my board (Panda with Xenomai 2.6.0), I notice that
writing to /proc/xenomai/latency, e.g.

# echo 2388 > /proc/xenomai/latency


gives a segmentation fault, even though the value is actually written.
Here is the dmesg log:

[65108.190551] Unable to handle kernel paging request at virtual address 00004a5a
[65108.190551] pgd = ef138000
[65108.190551] [00004a5a] *pgd=af101831, *pte=00000000, *ppte=00000000
[65108.190582] Internal error: Oops: 80000007 [#1] PREEMPT SMP
[65108.190582] last sysfs file: /sys/devices/virtual/vc/vcs2/dev
[65108.190612] CPU: 0    Not tainted  (2.6.38.8-xenomai-cpb #2)
[65108.190612] PC is at 0x4a5a
[65108.190643] LR is at simple_strtoul+0x8/0xc
[65108.190643] pc : [<00004a5a>]    lr : [<c024ebf4>]    psr: 60000033
[65108.190643] sp : ef06df08  ip : 00000002  fp : 00000100
[65108.190673] r10: 401ae600  r9 : efbffe48  r8 : 00000fff
[65108.190673] r7 : 00000fff  r6 : 1b207478  r5 : 742e4a5b  r4 : 1b262074
[65108.190673] r3 : 00000002  r2 : c0456918  r1 : 00000000  r0 : ffffffea
[65108.190704] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
[65108.190704] Control: 10c53c7d  Table: af13804a  DAC: 00000015
[65108.190704] Process sh (pid: 86, stack limit = 0xef06c2f8)
[65108.190734] Stack: (0xef06df08 to 0xef06e000)
[65108.190734] df00:                   00000000 00000000 00000000 00000000 00000000 00000000
[65108.190734] df20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[65108.190765] df40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[65108.190765] df60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[65108.190795] df80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[65108.190795] dfa0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[65108.190795] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[65108.190826] dfe0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[65108.190826] [<c024ebf4>] (simple_strtoul+0x8/0xc) from [<00000000>] (  (null))
[65108.190856] Code: bad PC value
[65108.190979] ---[ end trace 29e0859033dc848c ]---




* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-07-01 19:00       ` Christophe Blaess
@ 2012-07-01 19:09         ` Gilles Chanteperdrix
  2012-07-01 19:33         ` Gilles Chanteperdrix
  1 sibling, 0 replies; 10+ messages in thread
From: Gilles Chanteperdrix @ 2012-07-01 19:09 UTC (permalink / raw)
  To: Christophe Blaess; +Cc: xenomai

On 07/01/2012 09:00 PM, Christophe Blaess wrote:
> On 30/06/2012 19:04, Gilles Chanteperdrix wrote:
>> On 06/30/2012 06:52 PM, Christophe Blaess wrote:
>>> On 30/06/2012 12:44, Gilles Chanteperdrix wrote:
>>>>
>>>>> /proc/xenomai/latency is an estimate of the minimum scheduling latency
>>>>> on your system. To know what to put there, you should do:
>>>>>
>>>>> echo 0 > /proc/xenomai/latency
>>>>>
>>>>> run the latency test under load for several hours, then
>>> Is it necessary to run the test under heavy load?
>>> We're looking for the minimal latency, not the worst.
>>>
>>> I thought that in this case a normal, even light load would be sufficient.
>> It is not obvious which path is the shortest. For instance, having to
>> wake up from a "wait for interrupt" state to handle the timer interrupt
>> may add some latency, so that path may not be the shortest one. Running
>> the test for a long time, with a lot of different activities, is an
>> empirical way of exercising many paths, so that the observed extremes
>> have a better chance of being close to the true extremes.
>>
>> And anyway, you usually want to know the worst-case latency for your
>> system as well.
>>
> 
> I ran two 6-hour latency tests on a Pandaboard, after echoing 0 into
> /proc/xenomai/latency.
> 
> For the first one the system load was very light. I ran the test on the
> second core, because most of the interrupts are processed on the first
> core, so the latency process was almost alone.
> 
> # /usr/xenomai/bin/latency -p 100 -c 1 -T 21600
> [...]
> RTS|      2.388|      3.134|     18.706|       0|     0|    06:00:00/06:00:00
> 
> 
> For the second test, the system was under a very heavy load. I used a
> shell script very close to dohell and generated a lot of external
> interrupts (ping flood, GPIO interrupts...). This time latency ran on the
> first core and was subject to a lot of preemption by interrupt requests.
> 
> # /usr/xenomai/bin/latency -p 100 -c 0 -T 21600
> [...]
> RTS|      2.908|      7.797|     51.579|       0|     0|    06:00:00/06:00:00
> 
> For long tests I would recommend, as you said, alternating high- and
> low-load phases to capture both the min and max values.
> 
> 
> Apart from that, on my board (Panda with Xenomai 2.6.0), I notice that
> writing to /proc/xenomai/latency, e.g.
> 
> # echo 2388 > /proc/xenomai/latency
> 
> 
> gives a segmentation fault, even though the value is actually written.
> Here is the dmesg log:
> 
> [65108.190551] Unable to handle kernel paging request at virtual address 00004a5a
> [65108.190551] pgd = ef138000
> [65108.190551] [00004a5a] *pgd=af101831, *pte=00000000, *ppte=00000000
> [65108.190582] Internal error: Oops: 80000007 [#1] PREEMPT SMP
> [65108.190582] last sysfs file: /sys/devices/virtual/vc/vcs2/dev
> [65108.190612] CPU: 0    Not tainted  (2.6.38.8-xenomai-cpb #2)
> [65108.190612] PC is at 0x4a5a
> [65108.190643] LR is at simple_strtoul+0x8/0xc
> [65108.190643] pc : [<00004a5a>]    lr : [<c024ebf4>]    psr: 60000033
> [65108.190643] sp : ef06df08  ip : 00000002  fp : 00000100
> [65108.190673] r10: 401ae600  r9 : efbffe48  r8 : 00000fff
> [65108.190673] r7 : 00000fff  r6 : 1b207478  r5 : 742e4a5b  r4 : 1b262074
> [65108.190673] r3 : 00000002  r2 : c0456918  r1 : 00000000  r0 : ffffffea
> [65108.190704] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
> [65108.190704] Control: 10c53c7d  Table: af13804a  DAC: 00000015
> [65108.190704] Process sh (pid: 86, stack limit = 0xef06c2f8)
> [65108.190734] Stack: (0xef06df08 to 0xef06e000)
> [65108.190734] df00:                   00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190734] df20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190765] df40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190765] df60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190795] df80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190795] dfa0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190795] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190826] dfe0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190826] [<c024ebf4>] (simple_strtoul+0x8/0xc) from [<00000000>] (  (null))
> [65108.190856] Code: bad PC value
> [65108.190979] ---[ end trace 29e0859033dc848c ]---
> 

I cannot reproduce that. Could you try again with the current head of
xenomai-2.6 git? If you can reproduce it, can you tell us which I-pipe
patch you are using, and send your kernel configuration?

-- 
                                                                Gilles.



* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-07-01 19:00       ` Christophe Blaess
  2012-07-01 19:09         ` Gilles Chanteperdrix
@ 2012-07-01 19:33         ` Gilles Chanteperdrix
  2012-07-02  6:12           ` Christophe Blaess
  1 sibling, 1 reply; 10+ messages in thread
From: Gilles Chanteperdrix @ 2012-07-01 19:33 UTC (permalink / raw)
  To: Christophe Blaess; +Cc: xenomai

On 07/01/2012 09:00 PM, Christophe Blaess wrote:
> On 30/06/2012 19:04, Gilles Chanteperdrix wrote:
>> On 06/30/2012 06:52 PM, Christophe Blaess wrote:
>>> On 30/06/2012 12:44, Gilles Chanteperdrix wrote:
>>>>
>>>>> /proc/xenomai/latency is an estimate of the minimum scheduling latency
>>>>> on your system. To know what to put there, you should do:
>>>>>
>>>>> echo 0 > /proc/xenomai/latency
>>>>>
>>>>> run the latency test under load for several hours, then
>>> Is it necessary to run the test under heavy load?
>>> We're looking for the minimal latency, not the worst.
>>>
>>> I thought that in this case a normal, even light load would be sufficient.
>> It is not obvious which path is the shortest. For instance, having to
>> wake up from a "wait for interrupt" state to handle the timer interrupt
>> may add some latency, so that path may not be the shortest one. Running
>> the test for a long time, with a lot of different activities, is an
>> empirical way of exercising many paths, so that the observed extremes
>> have a better chance of being close to the true extremes.
>>
>> And anyway, you usually want to know the worst-case latency for your
>> system as well.
>>
> 
> I ran two 6-hour latency tests on a Pandaboard, after echoing 0 into
> /proc/xenomai/latency.
> 
> For the first one the system load was very light. I ran the test on the
> second core, because most of the interrupts are processed on the first
> core, so the latency process was almost alone.
> 
> # /usr/xenomai/bin/latency -p 100 -c 1 -T 21600
> [...]
> RTS|      2.388|      3.134|     18.706|       0|     0|    06:00:00/06:00:00
> 
> 
> For the second test, the system was under a very heavy load. I used a
> shell script very close to dohell and generated a lot of external
> interrupts (ping flood, GPIO interrupts...). This time latency ran on the
> first core and was subject to a lot of preemption by interrupt requests.
> 
> # /usr/xenomai/bin/latency -p 100 -c 0 -T 21600
> [...]
> RTS|      2.908|      7.797|     51.579|       0|     0|    06:00:00/06:00:00
> 
> For long tests I would recommend, as you said, alternating high- and
> low-load phases to capture both the min and max values.
> 
> 
> Apart from that, on my board (Panda with Xenomai 2.6.0), I notice that
> writing to /proc/xenomai/latency, e.g.
> 
> # echo 2388 > /proc/xenomai/latency
> 
> 
> gives a segmentation fault, even though the value is actually written.
> Here is the dmesg log:
> 
> [65108.190551] Unable to handle kernel paging request at virtual address 00004a5a
> [65108.190551] pgd = ef138000
> [65108.190551] [00004a5a] *pgd=af101831, *pte=00000000, *ppte=00000000
> [65108.190582] Internal error: Oops: 80000007 [#1] PREEMPT SMP
> [65108.190582] last sysfs file: /sys/devices/virtual/vc/vcs2/dev
> [65108.190612] CPU: 0    Not tainted  (2.6.38.8-xenomai-cpb #2)
> [65108.190612] PC is at 0x4a5a
> [65108.190643] LR is at simple_strtoul+0x8/0xc
> [65108.190643] pc : [<00004a5a>]    lr : [<c024ebf4>]    psr: 60000033
> [65108.190643] sp : ef06df08  ip : 00000002  fp : 00000100
> [65108.190673] r10: 401ae600  r9 : efbffe48  r8 : 00000fff
> [65108.190673] r7 : 00000fff  r6 : 1b207478  r5 : 742e4a5b  r4 : 1b262074
> [65108.190673] r3 : 00000002  r2 : c0456918  r1 : 00000000  r0 : ffffffea
> [65108.190704] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
> [65108.190704] Control: 10c53c7d  Table: af13804a  DAC: 00000015
> [65108.190704] Process sh (pid: 86, stack limit = 0xef06c2f8)
> [65108.190734] Stack: (0xef06df08 to 0xef06e000)
> [65108.190734] df00:                   00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190734] df20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190765] df40: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190765] df60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190795] df80: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190795] dfa0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190795] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190826] dfe0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [65108.190826] [<c024ebf4>] (simple_strtoul+0x8/0xc) from [<00000000>] (  (null))
> [65108.190856] Code: bad PC value
> [65108.190979] ---[ end trace 29e0859033dc848c ]---

It looks like a buffer overflow on an on-stack buffer. Please try the
following patch:

diff --git a/ksrc/nucleus/vfile.c b/ksrc/nucleus/vfile.c
index 5928aef..a6ad363 100644
--- a/ksrc/nucleus/vfile.c
+++ b/ksrc/nucleus/vfile.c
@@ -811,7 +811,7 @@ ssize_t xnvfile_get_blob(struct xnvfile_input *input,
 {
        ssize_t nbytes = input->size;

-       if (nbytes < size)
+       if (nbytes > size)
                nbytes = size;

        if (nbytes > 0 && copy_from_user(data, input->u_buf, nbytes))
@@ -904,7 +904,7 @@ ssize_t xnvfile_get_integer(struct xnvfile_input
*input, long *valp)
        ssize_t nbytes;
        long val;

-       nbytes = xnvfile_get_blob(input, buf, sizeof(buf));
+       nbytes = xnvfile_get_blob(input, buf, sizeof(buf) - 1);
        if (nbytes < 0)
                return nbytes;


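For illustration, here is a minimal user-space sketch of the kind of
bounded copy-and-parse this is after (simplified, hypothetical names,
not the actual ksrc/nucleus/vfile.c code):

/* Simplified user-space sketch of the intended behaviour. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long parse_proc_integer(const char *u_buf, size_t u_size)
{
        char buf[32];
        size_t nbytes = u_size;

        /* Clamp to the destination size minus one, so the copy can never
         * overrun the buffer and there is room for a terminating NUL. */
        if (nbytes > sizeof(buf) - 1)
                nbytes = sizeof(buf) - 1;
        memcpy(buf, u_buf, nbytes);
        buf[nbytes] = '\0';

        return strtol(buf, NULL, 10);
}

int main(void)
{
        printf("%ld\n", parse_proc_integer("2388\n", 5));
        return 0;
}

The real code copies from user space with copy_from_user(), of course;
the sketch only shows the bounds check the patch fixes.
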
-- 
                                                                Gilles.



* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-07-01 19:33         ` Gilles Chanteperdrix
@ 2012-07-02  6:12           ` Christophe Blaess
  2012-07-02  7:11             ` Christophe Blaess
  0 siblings, 1 reply; 10+ messages in thread
From: Christophe Blaess @ 2012-07-02  6:12 UTC (permalink / raw)
  To: Gilles Chanteperdrix; +Cc: xenomai

On 01/07/2012 21:33, Gilles Chanteperdrix wrote:
> It looks like a buffer overflow on an on-stack buffer. Please try the
> following patch:
>
> diff --git a/ksrc/nucleus/vfile.c b/ksrc/nucleus/vfile.c
> index 5928aef..a6ad363 100644
> --- a/ksrc/nucleus/vfile.c
> +++ b/ksrc/nucleus/vfile.c
> @@ -811,7 +811,7 @@ ssize_t xnvfile_get_blob(struct xnvfile_input *input,
>   {
>          ssize_t nbytes = input->size;
>
> -       if (nbytes < size)
> +       if (nbytes > size)
>                  nbytes = size;
>
>          if (nbytes > 0 && copy_from_user(data, input->u_buf, nbytes))
> @@ -904,7 +904,7 @@ ssize_t xnvfile_get_integer(struct xnvfile_input
> *input, long *valp)
>          ssize_t nbytes;
>          long val;
>
> -       nbytes = xnvfile_get_blob(input, buf, sizeof(buf));
> +       nbytes = xnvfile_get_blob(input, buf, sizeof(buf) - 1);
>          if (nbytes < 0)
>                  return nbytes;
>

The patch is OK, I no longer get any segfault.

But there is still something weird (I am running a stock Xenomai 2.6.0
with adeos-ipipe-2.6.38.8-arm-1.18-04.patch; I'll try with the 2.6 git
tree):

[Panda]# echo 2388 > /proc/xenomai/latency
[Panda]# cat /proc/xenomai/latency
2386
[Panda]# echo 2386 > /proc/xenomai/latency
[Panda]# cat /proc/xenomai/latency
2384
[Panda]# echo 2384 > /proc/xenomai/latency
[Panda]# cat /proc/xenomai/latency
2382
[Panda]#

I suspect something wrong in xnarch_tsc_to_ns()/xnarch_ns_to_tsc(), maybe
in xnarch_llimd(). I'll investigate more this afternoon.





* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-07-02  6:12           ` Christophe Blaess
@ 2012-07-02  7:11             ` Christophe Blaess
  2012-07-02  8:47               ` Gilles Chanteperdrix
  0 siblings, 1 reply; 10+ messages in thread
From: Christophe Blaess @ 2012-07-02  7:11 UTC (permalink / raw)
  To: xenomai

On 02/07/2012 08:12, Christophe Blaess wrote:
>
> But there is still something weird (I am running a stock Xenomai 2.6.0
> with adeos-ipipe-2.6.38.8-arm-1.18-04.patch; I'll try with the 2.6 git
> tree):
>
> [Panda]# echo 2388 > /proc/xenomai/latency
> [Panda]# cat /proc/xenomai/latency
> 2386
> [Panda]# echo 2386 > /proc/xenomai/latency
> [Panda]# cat /proc/xenomai/latency
> 2384
> [Panda]# echo 2384 > /proc/xenomai/latency
> [Panda]# cat /proc/xenomai/latency
> 2382
> [Panda]#
>
> I suspect something wrong in xnarch_tsc_to_ns()/xnarch_ns_to_tsc(), maybe
> in xnarch_llimd(). I'll investigate more this afternoon.
>
>

I see the same behaviour with Xenomai 2.6 from git, using
adeos-ipipe-2.6.38.8-arm-1.18-08.patch.





* Re: [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT
  2012-07-02  7:11             ` Christophe Blaess
@ 2012-07-02  8:47               ` Gilles Chanteperdrix
  0 siblings, 0 replies; 10+ messages in thread
From: Gilles Chanteperdrix @ 2012-07-02  8:47 UTC (permalink / raw)
  To: Christophe Blaess; +Cc: xenomai

On 07/02/2012 09:11 AM, Christophe Blaess wrote:
> On 02/07/2012 08:12, Christophe Blaess wrote:
>>
>> But there is still something weird (I am running a stock Xenomai 2.6.0
>> with adeos-ipipe-2.6.38.8-arm-1.18-04.patch; I'll try with the 2.6 git
>> tree):
>>
>> [Panda]# echo 2388 > /proc/xenomai/latency
>> [Panda]# cat /proc/xenomai/latency
>> 2386
>> [Panda]# echo 2386 > /proc/xenomai/latency
>> [Panda]# cat /proc/xenomai/latency
>> 2384
>> [Panda]# echo 2384 > /proc/xenomai/latency
>> [Panda]# cat /proc/xenomai/latency
>> 2382
>> [Panda]#
>>
>> I suspect something wrong in xnarch_tsc_to_ns()/xnarch_ns_to_tsc(), maybe
>> in xnarch_llimd(). I'll investigate more this afternoon.
>>
>>
> 
> I see the same behaviour with Xenomai 2.6 from git, using
> adeos-ipipe-2.6.38.8-arm-1.18-08.patch.

This is because xnarch_tsc_to_ns and xnarch_ns_to_tsc both round toward
zero, and the tsc frequency is not a round number, so each write/read
round trip loses a little precision. I do not think fixing an issue that
amounts to a 2 ns difference is worth the trouble.
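
A quick way to see the effect (the frequency below is an assumed,
deliberately non-round example, not the Panda's actual tsc clock):

/* Demo: both conversions truncate toward zero, so writing a value and
 * reading it back loses a nanosecond or two per round trip. */
#include <stdio.h>
#include <stdint.h>

#define TSC_FREQ_HZ   498000000ULL      /* assumed, non-round frequency */
#define NSEC_PER_SEC  1000000000ULL

static uint64_t ns_to_tsc(uint64_t ns)
{
        return ns * TSC_FREQ_HZ / NSEC_PER_SEC;         /* truncates */
}

static uint64_t tsc_to_ns(uint64_t tsc)
{
        return tsc * NSEC_PER_SEC / TSC_FREQ_HZ;        /* truncates */
}

int main(void)
{
        uint64_t ns = 2388;
        int i;

        for (i = 0; i < 5; i++) {
                /* echo the value, then cat it back */
                ns = tsc_to_ns(ns_to_tsc(ns));
                printf("%llu\n", (unsigned long long)ns);
        }
        return 0;
}

The value creeps down by a nanosecond or two at each iteration, which
matches the behaviour you observed.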

-- 
					    Gilles.



Thread overview: 10 messages
2012-06-30 10:39 [Xenomai] question: XENO_OPT_TIMING_SCHEDLAT ali hagigat
2012-06-30 10:44 ` Gilles Chanteperdrix
2012-06-30 16:52   ` Christophe Blaess
2012-06-30 17:04     ` Gilles Chanteperdrix
2012-07-01 19:00       ` Christophe Blaess
2012-07-01 19:09         ` Gilles Chanteperdrix
2012-07-01 19:33         ` Gilles Chanteperdrix
2012-07-02  6:12           ` Christophe Blaess
2012-07-02  7:11             ` Christophe Blaess
2012-07-02  8:47               ` Gilles Chanteperdrix
