All of lore.kernel.org
* several questions about porting latmus
@ 2021-02-01  4:36 Chen, Hongzhan
  2021-02-01  9:31 ` Philippe Gerum
  0 siblings, 1 reply; 11+ messages in thread
From: Chen, Hongzhan @ 2021-02-01  4:36 UTC (permalink / raw)
  To: xenomai, Philippe Gerum

Hi Philippe

While trying to port latmus from EVL to Xenomai 3.2, I met several issues that block the port
and need your suggestions.

1. When trying to replace evl_run_kthread_on_cpu in the latmus.c driver, I found that only
    rtdm_task_init mostly meets our requirements, but it provides no way to pass a CPU affinity
    to pin the task to the required CPU. Do we need to implement a new API that accepts a CPU
    affinity while keeping all the functionality of rtdm_task_init?

2. Regarding the replacement of evl_get_xbuf in latmus.c, I would first call
     rtdm_socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_XDDP) to create a socket, then call rtdm_bind
     to bind it to the XDDP_PORT=xfd passed from user space, after calling rtdm_setsockopt to
     perform the corresponding setup as in demo/posix/cobalt/xddp-stream.c.
     Of course, I would replace evl_write_xbuf with rtdm_sendto and change the corresponding
     xbuf variable type in the related structure. In addition, on the user-space side of latmus,
     I would replace evl_create_xbuf with open("/dev/rtp$xfd", ...) and then read from the
     returned descriptor to get data from the socket. Is such a design OK with you?
     Actually, I cannot find any example of rtdm_socket beyond the description in
     doc/asciidoc/MIGRATION.adoc, so I do not know whether this design is feasible. Please comment.


Regards

Hongzhan Chen



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: several questions about porting latmus
  2021-02-01  4:36 several questions about porting latmus Chen, Hongzhan
@ 2021-02-01  9:31 ` Philippe Gerum
  2021-02-05  1:47   ` Chen, Hongzhan
  0 siblings, 1 reply; 11+ messages in thread
From: Philippe Gerum @ 2021-02-01  9:31 UTC (permalink / raw)
  To: Chen, Hongzhan; +Cc: xenomai


Hi Hongzhan,

Chen, Hongzhan <hongzhan.chen@intel.com> writes:

> Hi Philippe
>
> When I was trying to port latmus from evl to xenomai 3.2,  I met several issues that block porting
> and need your suggestions.
>
> 1. When I tried to replace function evl_run_kthread_on_cpu of latmus.c driver ,  I found that only rtdm_task_init  
>     can meet our requirements mostly  but we still cannot pass cpu affinity through it to pin task to required
>     cpu. Do we need to implement new API so that we can  pass cpu affinity to pin task to required cpu but
>     finish all functions  of rtdm_task_init?
>

We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
is a desirable feature which should be part of the CXP. The other ways to
pin the new kthread are fairly ugly ATM, ranging from pinning the parent
to the destination CPU before creating the child thread, to open-coding
the spawning sequence based on the internal interface (xnthread_init,
xnthread_start). Please submit a patch for review of that change
specifically, prior to submitting any latmus-related bits.
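For the sake of discussion, such an API could simply mirror rtdm_task_init() with one extra parameter (a hypothetical declaration only; the final signature would be settled during review):

```c
/* Hypothetical: same contract as rtdm_task_init(), plus 'cpu' naming the
 * real-time CPU the new kthread should be pinned to before it starts. */
int rtdm_task_init_on_cpu(rtdm_task_t *task, const char *name,
			  rtdm_task_proc_t task_proc, void *arg,
			  int priority, nanosecs_rel_t period,
			  int cpu);
```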

> 2. Regarding replacement of evl_get_xbuf of latmus.c,  I would first call 
>      rtdm_socket (AF_RTIPC, SOCK_DGRAM, IPCPROTO_XDDP); to create socket and then call rtdm_bind  to bind
>      it to XDDP_PORT=xfd that passed from user space  after call rtdm_setsockopt to do corresponding operation like
>       demo/posix/cobalt/xddp-stream.c.
>      Of course , I would replace evl_write_xbuf with rtdm_sendto  and corresponding xbuf variable type in related 
>      structure.  In addition , on userspace of latmus , I would  replace evl_create_xbuf with open ("/dev/rtp$xfd ",..) and then
>      read by returned handler to get data from socket.  Is such design OK for you?
>      Actually , I can not find any instance about rtdm_socket but description in doc/asciidoc/MIGRATION.adoc, I do not know
>     If such design is feasible. Please comment.
>

There is a general consensus that the kernel->kernel RTDM API is now on
its way out for the 3.x series, and won't be part of the CXP in any case
[1], so we should not rely on the rtdm_* kernel interface for new code.

I would suggest using the low-level message pipe interface available
from the core instead. That stuff creates a rt-kernel <-> nrt-user
channel, allowing messages of arbitrary size to be exchanged between
those peers. The kernel should create its own endpoint first, before the
application in user-space can open its side of the channel. Therefore,
the logic flow could be as follows:

1. on some ioctl() request from the application (TBD), the driver
creates the kernel endpoint of a message pipe by a call to
xnpipe_connect(-1, &mpipe_ops, arg). This call returns a pipe "minor"
number, which is the identifier of the new channel. This minor value is
passed back to the application as a result of the ioctl() request.

      Regarding xnpipe_connect() --

      -1 means "find and pick the next free minor for me" to the core.
     
      mpipe_ops is an operation descriptor which can be zeroed for the
      most part, asking the core for default values, except
      .free_obuf. This particular handler will be called by the core for
      releasing every outgoing buffer, i.e. every buffer the driver will
      send via xnpipe_send(), which the application should read(2) (See
      below). Typically, if the data buffer you passed to xnpipe_send()
      was obtained from xnmalloc(), then your .free_obuf handler would
      dispose of it by calling xnfree().

      'arg' is an opaque handle you can use to pass your own "extended
      context" information, which handlers in mpipe_ops will receive as
      the 'xstate' argument.

      There is an example of such use in drivers/ipc/xddp.c, although it
      is more complex.

2. the application creates its endpoint for the same channel by a call
to open("/dev/rtp$minor", ...), obtaining a regular file descriptor
'pfd'.

loop-until-eof {

3. the kernel sends data bulks to the application using xnpipe_send().

4. the application reads those bulks using read(2).

}

5. to terminate the channel, the kernel side issues
xnpipe_disconnect(minor), the application issues close(pfd).
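Condensed, the five steps above look roughly like this (a sketch only: error paths and xnpipe_mh header setup are trimmed, and 'ctx', 'mh', 'len' and 'consume' are placeholders):

```c
/* --- kernel side (driver), steps 1, 3 and 5 --- */
#include <cobalt/kernel/pipe.h>		/* xnpipe_connect() & friends */

static void mpipe_free_obuf(void *buf, void *xstate)
{
	xnfree(buf);	/* buffers handed to xnpipe_send() came from xnmalloc() */
}

static struct xnpipe_operations mpipe_ops = {
	.free_obuf = mpipe_free_obuf,	/* all other handlers: core defaults */
};

/* step 1: typically from the driver's ioctl handler */
minor = xnpipe_connect(-1, &mpipe_ops, ctx);	/* -1: pick next free minor */

/* step 3: push one message; size covers the xnpipe_mh header + payload,
 * ownership of the buffer passes to the core until .free_obuf runs */
xnpipe_send(minor, mh, sizeof(*mh) + len, XNPIPE_NORMAL);

/* step 5: tear down the channel */
xnpipe_disconnect(minor);

/* --- user side, steps 2, 4 and 5 --- */
char path[32];
snprintf(path, sizeof(path), "/dev/rtp%d", minor);	/* minor from the ioctl */
int pfd = open(path, O_RDONLY);
while ((n = read(pfd, buf, sizeof(buf))) > 0)	/* one read(2) per message */
	consume(buf, n);
close(pfd);
```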

[1] https://xenomai.org/pipermail/xenomai/2020-December/043931.html

-- 
Philippe.



* RE: several questions about porting latmus
  2021-02-01  9:31 ` Philippe Gerum
@ 2021-02-05  1:47   ` Chen, Hongzhan
  2021-02-07 16:20     ` Philippe Gerum
  0 siblings, 1 reply; 11+ messages in thread
From: Chen, Hongzhan @ 2021-02-05  1:47 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

>-----Original Message-----
>From: Philippe Gerum <rpm@xenomai.org> 
>Sent: Monday, February 1, 2021 5:31 PM
>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>Cc: xenomai@xenomai.org
>Subject: Re: several questions about porting latmus
>
>
>Hi Hongzhan,
>
>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>
>> Hi Philippe
>>
>> When I was trying to port latmus from evl to xenomai 3.2,  I met several issues that block porting
>> and need your suggestions.
>>
>> 1. When I tried to replace function evl_run_kthread_on_cpu of latmus.c driver ,  I found that only rtdm_task_init  
>>     can meet our requirements mostly  but we still cannot pass cpu affinity through it to pin task to required
>>     cpu. Do we need to implement new API so that we can  pass cpu affinity to pin task to required cpu but
>>     finish all functions  of rtdm_task_init?
>>
>
>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>is a desirable feature which should be part of the CXP. Other ways to
>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>to the destination CPU before creating the child thread, or open coding
>the spawning sequence based on the internal interface (xnthread_init,
>xnthread_start). Please submit a patch for review of that change
>specifically, prior to submitting any latmus-related bits.
>

OK. I have finished porting the latmus driver so far and built it successfully against Linux.
Next, I will start porting the latmus application. Once the application is done, I will
validate all of it, and will submit the patches for review once validation succeeds.

Thanks for your suggestions.

Regards

Hongzhan Chen

>> 2. Regarding replacement of evl_get_xbuf of latmus.c,  I would first call 
>>      rtdm_socket (AF_RTIPC, SOCK_DGRAM, IPCPROTO_XDDP); to create socket and then call rtdm_bind  to bind
>>      it to XDDP_PORT=xfd that passed from user space  after call rtdm_setsockopt to do corresponding operation like
>>       demo/posix/cobalt/xddp-stream.c.
>>      Of course , I would replace evl_write_xbuf with rtdm_sendto  and corresponding xbuf variable type in related 
>>      structure.  In addition , on userspace of latmus , I would  replace evl_create_xbuf with open ("/dev/rtp$xfd ",..) and then
>>      read by returned handler to get data from socket.  Is such design OK for you?
>>      Actually , I can not find any instance about rtdm_socket but description in doc/asciidoc/MIGRATION.adoc, I do not know
>>     If such design is feasible. Please comment.
>>
>
>There is a general consensus that the kernel->kernel RTDM API is now on
>its way out for the 3.x series, and won't be part of the CXP in any case
>[1], so we should not rely on the rtdm_* kernel interface for new code.
>
>I would suggest to use the low-level message pipe interface available
>from the core instead. That stuff creates a rt-kernel <-> nrt-user
>channel, allowing to exchange messages of arbitrary size between those
>peers. The kernel should create its own endpoint first, before the
>application in user-space can open its side of the channel. Therefore,
>the logic flow could be as follows:
>
>1. on some ioctl() request from the application (TBD), the driver
>creates the kernel endpoint of a message pipe by a call to
>xnpipe_connect(-1, &mpipe_ops, arg). This call returns a pipe "minor"
>number, which is the identifier of the new channel. This minor value is
>passed back to the application as a result of the ioctl() request.
>
>      Regarding xnpipe_connect() --
>
>      -1 means "find and pick the next free minor for me" to the core.
>     
>      mpipe_ops is an operation descriptor which can be zeroed for the
>      most part, asking the core for default values, except
>      .free_obuf. This particular handler will be called by the core for
>      releasing every outgoing buffer, i.e. every buffer the driver will
>      send via xnpipe_send(), which the application should read(2) (See
>      below). Typically, if the data buffer you passed to xnpipe_send()
>      was obtained from xnmalloc(), then your .free_obuf handler would
>      dispose of it by calling xnfree().
>
>      'arg' is an opaque handle you can use to pass your own "extended
>      context" information, which handlers in mpipe_ops will receive as
>      the 'xstate' argument.
>
>      There is an example of such use in drivers/ipc/xddp.c, although it
>      is more complex.
>
>2. the application creates its endpoint for the same channel by a call
>to open("/dev/rtp$minor", ...), obtaining a regular file descriptor
>'pfd'.
>
>loop-until-eof {
>
>3. the kernel sends data bulks to the application using xnpipe_send().
>
>4. the application reads those bulks using read(2).
>
>}
>
>5. to terminate the channel, the kernel side issues
>xnpipe_disconnect(minor), the application issues close(pfd).
>
>[1] https://xenomai.org/pipermail/xenomai/2020-December/043931.html
>
>-- 
>Philippe.
>
>



* Re: several questions about porting latmus
  2021-02-05  1:47   ` Chen, Hongzhan
@ 2021-02-07 16:20     ` Philippe Gerum
  2021-02-08  6:36       ` Chen, Hongzhan
  0 siblings, 1 reply; 11+ messages in thread
From: Philippe Gerum @ 2021-02-07 16:20 UTC (permalink / raw)
  To: Chen, Hongzhan; +Cc: xenomai


Chen, Hongzhan <hongzhan.chen@intel.com> writes:

>>-----Original Message-----
>>From: Philippe Gerum <rpm@xenomai.org> 
>>Sent: Monday, February 1, 2021 5:31 PM
>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>Cc: xenomai@xenomai.org
>>Subject: Re: several questions about porting latmus
>>
>>
>>Hi Hongzhan,
>>
>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>
>>> Hi Philippe
>>>
>>> When I was trying to port latmus from evl to xenomai 3.2,  I met several issues that block porting
>>> and need your suggestions.
>>>
>>> 1. When I tried to replace function evl_run_kthread_on_cpu of latmus.c driver ,  I found that only rtdm_task_init  
>>>     can meet our requirements mostly  but we still cannot pass cpu affinity through it to pin task to required
>>>     cpu. Do we need to implement new API so that we can  pass cpu affinity to pin task to required cpu but
>>>     finish all functions  of rtdm_task_init?
>>>
>>
>>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>>is a desirable feature which should be part of the CXP. Other ways to
>>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>>to the destination CPU before creating the child thread, or open coding
>>the spawning sequence based on the internal interface (xnthread_init,
>>xnthread_start). Please submit a patch for review of that change
>>specifically, prior to submitting any latmus-related bits.
>>
>
> OK.  I have finished latmus driver porting so far and built it successfully with linux.
> In the following , I would  start to port latmus application. After latmus application is done,
> I would validate all of them and then will try to submit patches to review after validation 
> is successful. 
>

With respect to the timer responder test, the latmus application is
based on EVL's built-in timerfd [1] feature, which is very close to the
Cobalt/POSIX equivalent, so the port should be straightforward.

Things may be a little trickier with the GPIO responder test, as Cobalt
needs a specific RTDM driver to operate the GPIO lines (EVL reuses the
common GPIOLIB for this [2], so do not look for any specific driver
here). It depends on the GPIO controller you will test on. You will
certainly need to add support for it to kernel/drivers/gpio.

Which hardware do you plan to use?

[1] https://evlproject.org/core/user-api/timer/
[2] http://evlproject.org/core/oob-drivers/gpio/

-- 
Philippe.



* RE: several questions about porting latmus
  2021-02-07 16:20     ` Philippe Gerum
@ 2021-02-08  6:36       ` Chen, Hongzhan
  2021-02-08  8:17         ` Philippe Gerum
  0 siblings, 1 reply; 11+ messages in thread
From: Chen, Hongzhan @ 2021-02-08  6:36 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

-----Original Message-----
>From: Philippe Gerum <rpm@xenomai.org> 
>Sent: Monday, February 8, 2021 12:21 AM
>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>Cc: xenomai@xenomai.org
>Subject: Re: several questions about porting latmus
>
>
>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>
>>>-----Original Message-----
>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>Sent: Monday, February 1, 2021 5:31 PM
>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>Cc: xenomai@xenomai.org
>>>Subject: Re: several questions about porting latmus
>>>
>>>
>>>Hi Hongzhan,
>>>
>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>
>>>> Hi Philippe
>>>>
>>>> When I was trying to port latmus from evl to xenomai 3.2,  I met several issues that block porting
>>>> and need your suggestions.
>>>>
>>>> 1. When I tried to replace function evl_run_kthread_on_cpu of latmus.c driver ,  I found that only rtdm_task_init  
>>>>     can meet our requirements mostly  but we still cannot pass cpu affinity through it to pin task to required
>>>>     cpu. Do we need to implement new API so that we can  pass cpu affinity to pin task to required cpu but
>>>>     finish all functions  of rtdm_task_init?
>>>>
>>>
>>>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>>>is a desirable feature which should be part of the CXP. Other ways to
>>>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>>>to the destination CPU before creating the child thread, or open coding
>>>the spawning sequence based on the internal interface (xnthread_init,
>>>xnthread_start). Please submit a patch for review of that change
>>>specifically, prior to submitting any latmus-related bits.
>>>
>>
>> OK.  I have finished latmus driver porting so far and built it successfully with linux.
>> In the following , I would  start to port latmus application. After latmus application is done,
>> I would validate all of them and then will try to submit patches to review after validation 
>> is successful. 
>>
>
>With respect to the timer responder test, the latmus application is
>based on EVL's built-in timerfd [1] feature, which is very close to the
>Cobalt/POSIX equivalent, so the port should be straightforward.
>
>Things may be a little trickier with the GPIO responder test, as Cobalt
>needs a specific RTDM driver to operate the GPIO lines (EVL reuses the
>common GPIOLIB for this [2], so do not look for any specific driver
>here). It depends on the GPIO controller you will test on. You will
>certainly need to add support for it to kernel/drivers/gpio.
>
>Which hardware do you plan to use?

Currently, I am working on an UP Xtreme Lite board, which is based on
Intel Whiskey Lake. Yes, after further investigation I will need to add a new
GPIO controller RTDM driver under kernel/drivers/gpio for my board; thanks
for the gentle reminder.

I have almost finished porting the latmus application and have validated that
the latmus driver works, but I still have not received a Freedom-K64F board. So the
GPIO test environment cannot be set up on short notice, for lack of hardware on my side.

>
>[1] https://evlproject.org/core/user-api/timer/
>[2] http://evlproject.org/core/oob-drivers/gpio/
>
>-- 
>Philippe.



* Re: several questions about porting latmus
  2021-02-08  6:36       ` Chen, Hongzhan
@ 2021-02-08  8:17         ` Philippe Gerum
  2021-02-08 12:39           ` Chen, Hongzhan
  0 siblings, 1 reply; 11+ messages in thread
From: Philippe Gerum @ 2021-02-08  8:17 UTC (permalink / raw)
  To: Chen, Hongzhan; +Cc: xenomai


Chen, Hongzhan <hongzhan.chen@intel.com> writes:

> -----Original Message-----
>>From: Philippe Gerum <rpm@xenomai.org> 
>>Sent: Monday, February 8, 2021 12:21 AM
>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>Cc: xenomai@xenomai.org
>>Subject: Re: several questions about porting latmus
>>
>>
>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>
>>>>-----Original Message-----
>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>Sent: Monday, February 1, 2021 5:31 PM
>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>Cc: xenomai@xenomai.org
>>>>Subject: Re: several questions about porting latmus
>>>>
>>>>
>>>>Hi Hongzhan,
>>>>
>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>
>>>>> Hi Philippe
>>>>>
>>>>> When I was trying to port latmus from evl to xenomai 3.2,  I met several issues that block porting
>>>>> and need your suggestions.
>>>>>
>>>>> 1. When I tried to replace function evl_run_kthread_on_cpu of latmus.c driver ,  I found that only rtdm_task_init  
>>>>>     can meet our requirements mostly  but we still cannot pass cpu affinity through it to pin task to required
>>>>>     cpu. Do we need to implement new API so that we can  pass cpu affinity to pin task to required cpu but
>>>>>     finish all functions  of rtdm_task_init?
>>>>>
>>>>
>>>>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>>>>is a desirable feature which should be part of the CXP. Other ways to
>>>>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>>>>to the destination CPU before creating the child thread, or open coding
>>>>the spawning sequence based on the internal interface (xnthread_init,
>>>>xnthread_start). Please submit a patch for review of that change
>>>>specifically, prior to submitting any latmus-related bits.
>>>>
>>>
>>> OK.  I have finished latmus driver porting so far and built it successfully with linux.
>>> In the following , I would  start to port latmus application. After latmus application is done,
>>> I would validate all of them and then will try to submit patches to review after validation 
>>> is successful. 
>>>
>>
>>With respect to the timer responder test, the latmus application is
>>based on EVL's built-in timerfd [1] feature, which is very close to the
>>Cobalt/POSIX equivalent, so the port should be straightforward.
>>
>>Things may be a little trickier with the GPIO responder test, as Cobalt
>>needs a specific RTDM driver to operate the GPIO lines (EVL reuses the
>>common GPIOLIB for this [2], so do not look for any specific driver
>>here). It depends on the GPIO controller you will test on. You will
>>certainly need to add support for it to kernel/drivers/gpio.
>>
>>Which hardware do you plan to use?
>
> Currently , I am working on up xtream Lite board which is based on
> Intel Whiskey Lake.  Yes,  I need to add new GPIO controller rtdm driver
> under kernel/drivers/gpio for my board after further investigated, thanks 
> for your soft reminder. 
>
> I have almost finished latmus application porting and validated that latmus driver is 
> working but I still have not got Freedom-K64F so far.  So the gpio test
> environment can not be setup in short time because of lack of hardware on my side.
>

There is also the option of making benchmarks/zephyr/latmon a Xenomai
application, which would act as the latency monitor running on a
separate Linux board. Xenomai would then help test Xenomai, which
might not look optimal at first glance; however, this should be fine
nevertheless, provided that the monitoring board runs a known-to-be-stable
I-pipe configuration.

-- 
Philippe.



* RE: several questions about porting latmus
  2021-02-08  8:17         ` Philippe Gerum
@ 2021-02-08 12:39           ` Chen, Hongzhan
  2021-02-08 18:39             ` Philippe Gerum
  0 siblings, 1 reply; 11+ messages in thread
From: Chen, Hongzhan @ 2021-02-08 12:39 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>
>> -----Original Message-----
>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>Sent: Monday, February 8, 2021 12:21 AM
>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>Cc: xenomai@xenomai.org
>>>Subject: Re: several questions about porting latmus
>>>
>>>
>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>
>>>>>-----Original Message-----
>>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>>Sent: Monday, February 1, 2021 5:31 PM
>>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>>Cc: xenomai@xenomai.org
>>>>>Subject: Re: several questions about porting latmus
>>>>>
>>>>>
>>>>>Hi Hongzhan,
>>>>>
>>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>
>>>>>> Hi Philippe
>>>>>>
>>>>>> When I was trying to port latmus from evl to xenomai 3.2,  I met several issues that block porting
>>>>>> and need your suggestions.
>>>>>>
>>>>>> 1. When I tried to replace function evl_run_kthread_on_cpu of latmus.c driver ,  I found that only rtdm_task_init  
>>>>>>     can meet our requirements mostly  but we still cannot pass cpu affinity through it to pin task to required
>>>>>>     cpu. Do we need to implement new API so that we can  pass cpu affinity to pin task to required cpu but
>>>>>>     finish all functions  of rtdm_task_init?
>>>>>>
>>>>>
>>>>>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>>>>>is a desirable feature which should be part of the CXP. Other ways to
>>>>>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>>>>>to the destination CPU before creating the child thread, or open coding
>>>>>the spawning sequence based on the internal interface (xnthread_init,
>>>>>xnthread_start). Please submit a patch for review of that change
>>>>>specifically, prior to submitting any latmus-related bits.
>>>>>
>>>>
>>>> OK.  I have finished latmus driver porting so far and built it successfully with linux.
>>>> In the following , I would  start to port latmus application. After latmus application is done,
>>>> I would validate all of them and then will try to submit patches to review after validation 
>>>> is successful. 
>>>>
>>>
>>>With respect to the timer responder test, the latmus application is
>>>based on EVL's built-in timerfd [1] feature, which is very close to the
>>>Cobalt/POSIX equivalent, so the port should be straightforward.
>>>
>>>Things may be a little trickier with the GPIO responder test, as Cobalt
>>>needs a specific RTDM driver to operate the GPIO lines (EVL reuses the
>>>common GPIOLIB for this [2], so do not look for any specific driver
>>>here). It depends on the GPIO controller you will test on. You will
>>>certainly need to add support for it to kernel/drivers/gpio.
>>>
>>>Which hardware do you plan to use?
>>
>> Currently , I am working on up xtream Lite board which is based on
>> Intel Whiskey Lake.  Yes,  I need to add new GPIO controller rtdm driver
>> under kernel/drivers/gpio for my board after further investigated, thanks 
>> for your soft reminder. 
>>
>> I have almost finished latmus application porting and validated that latmus driver is 
>> working but I still have not got Freedom-K64F so far.  So the gpio test
>> environment can not be setup in short time because of lack of hardware on my side.
>>
>
>There is also the option of making benchmarks/zephyr/latmon a Xenomai
>application, which would act as the latency monitor running on a
>separate Linux board. Xenomai would then help testing Xenomai which
>might not be optimal at first glance, however this should be ok
>nevertheless provided that such monitoring board runs a known to be
>stable I-pipe configuration.
>

One more question: I saw that an RTDM GPIO driver can currently call rtdm_gpiochip_scan_of
to do its init, but rtdm_gpiochip_scan_of actually calls of_find_compatible_node
to find the device node. Does that work for devices registered through ACPI tables?
If not, does that mean we need to add a new API implementing the analogous function for
those GPIO pinctrl devices registered via ACPI?

>-- 
>Philippe.



* Re: several questions about porting latmus
  2021-02-08 12:39           ` Chen, Hongzhan
@ 2021-02-08 18:39             ` Philippe Gerum
  2021-02-09  8:42               ` Chen, Hongzhan
  0 siblings, 1 reply; 11+ messages in thread
From: Philippe Gerum @ 2021-02-08 18:39 UTC (permalink / raw)
  To: Chen, Hongzhan; +Cc: xenomai


Chen, Hongzhan <hongzhan.chen@intel.com> writes:

> Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>
>>> -----Original Message-----
>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>Sent: Monday, February 8, 2021 12:21 AM
>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>Cc: xenomai@xenomai.org
>>>>Subject: Re: several questions about porting latmus
>>>>
>>>>
>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>
>>>>>>-----Original Message-----
>>>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>>>Sent: Monday, February 1, 2021 5:31 PM
>>>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>>>Cc: xenomai@xenomai.org
>>>>>>Subject: Re: several questions about porting latmus
>>>>>>
>>>>>>
>>>>>>Hi Hongzhan,
>>>>>>
>>>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>>
>>>>>>> Hi Philippe
>>>>>>>
>>>>>>> When I was trying to port latmus from evl to xenomai 3.2,  I met several issues that block porting
>>>>>>> and need your suggestions.
>>>>>>>
>>>>>>> 1. When I tried to replace function evl_run_kthread_on_cpu of latmus.c driver ,  I found that only rtdm_task_init  
>>>>>>>     can meet our requirements mostly  but we still cannot pass cpu affinity through it to pin task to required
>>>>>>>     cpu. Do we need to implement new API so that we can  pass cpu affinity to pin task to required cpu but
>>>>>>>     finish all functions  of rtdm_task_init?
>>>>>>>
>>>>>>
>>>>>>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>>>>>>is a desirable feature which should be part of the CXP. Other ways to
>>>>>>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>>>>>>to the destination CPU before creating the child thread, or open coding
>>>>>>the spawning sequence based on the internal interface (xnthread_init,
>>>>>>xnthread_start). Please submit a patch for review of that change
>>>>>>specifically, prior to submitting any latmus-related bits.
>>>>>>
>>>>>
>>>>> OK.  I have finished latmus driver porting so far and built it successfully with linux.
>>>>> In the following , I would  start to port latmus application. After latmus application is done,
>>>>> I would validate all of them and then will try to submit patches to review after validation 
>>>>> is successful. 
>>>>>
>>>>
>>>>With respect to the timer responder test, the latmus application is
>>>>based on EVL's built-in timerfd [1] feature, which is very close to the
>>>>Cobalt/POSIX equivalent, so the port should be straightforward.
>>>>
>>>>Things may be a little trickier with the GPIO responder test, as Cobalt
>>>>needs a specific RTDM driver to operate the GPIO lines (EVL reuses the
>>>>common GPIOLIB for this [2], so do not look for any specific driver
>>>>here). It depends on the GPIO controller you will test on. You will
>>>>certainly need to add support for it to kernel/drivers/gpio.
>>>>
>>>>Which hardware do you plan to use?
>>>
>>> Currently , I am working on up xtream Lite board which is based on
>>> Intel Whiskey Lake.  Yes,  I need to add new GPIO controller rtdm driver
>>> under kernel/drivers/gpio for my board after further investigated, thanks 
>>> for your soft reminder. 
>>>
>>> I have almost finished latmus application porting and validated that latmus driver is 
>>> working but I still have not got Freedom-K64F so far.  So the gpio test
>>> environment can not be setup in short time because of lack of hardware on my side.
>>>
>>
>>There is also the option of making benchmarks/zephyr/latmon a Xenomai
>>application, which would act as the latency monitor running on a
>>separate Linux board. Xenomai would then help testing Xenomai which
>>might not be optimal at first glance, however this should be ok
>>nevertheless provided that such monitoring board runs a known to be
>>stable I-pipe configuration.
>>
>
> One more question,  I saw that rtdm gpio driver can call rtdm_gpiochip_scan_of
> to do init currently but actually rtdm_gpiochip_scan_of do call of_find_compatible_node
> to find device node, It is workable for those devices registered through ACPI table?
> If not , does that means we need to add new API to implement analogous function for those
> gpio pinctrl devices registered by ACPI?
>

gpiochip_find() is available to non-OF systems as well; what gets
enumerated by the platform code ends up being registered in the gpiolib
device list.

The idea is to match by name the type of controller which is expected on
your platform with the gpio chips enumerated by gpiochip_find(). You may
want to add some generic helper to gpio-core.c doing that for non-OF
platforms, which you would call from additional controller-specific RTDM
modules (those which would define more GPIO device subclasses,
i.e. RTDM_SUBCLASS_xx).
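Such a helper could be sketched as follows (hypothetical: the helper name rtdm_gpiochip_scan_name is made up, and rtdm_gpiochip_add() is assumed to be the gpio-core registration entry point; gpiochip_find() is the stock gpiolib lookup of that kernel era):

```c
/* Hypothetical helper for non-OF platforms: look up a gpiolib chip by its
 * label and hand it over to the RTDM GPIO core under the given subclass. */

static int match_label(struct gpio_chip *gc, void *data)
{
	return gc->label && !strcmp(gc->label, data);
}

int rtdm_gpiochip_scan_name(struct rtdm_gpio_chip *rgc,
			    const char *label, int subclass)
{
	struct gpio_chip *gc = gpiochip_find((void *)label, match_label);

	if (gc == NULL)
		return -ENODEV;	/* not enumerated (yet) by the platform code */

	return rtdm_gpiochip_add(rgc, gc, subclass);
}
```

A controller-specific RTDM module for an ACPI-enumerated chip would then just call this with the label the pinctrl driver registered.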

-- 
Philippe.



* RE: several questions about porting latmus
  2021-02-08 18:39             ` Philippe Gerum
@ 2021-02-09  8:42               ` Chen, Hongzhan
  2021-02-10  6:46                 ` Philippe Gerum
  0 siblings, 1 reply; 11+ messages in thread
From: Chen, Hongzhan @ 2021-02-09  8:42 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai

>> Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>
>>>> -----Original Message-----
>>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>>Sent: Monday, February 8, 2021 12:21 AM
>>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>>Cc: xenomai@xenomai.org
>>>>>Subject: Re: several questions about porting latmus
>>>>>
>>>>>
>>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>
>>>>>>>-----Original Message-----
>>>>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>>>>Sent: Monday, February 1, 2021 5:31 PM
>>>>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>>>>Cc: xenomai@xenomai.org
>>>>>>>Subject: Re: several questions about porting latmus
>>>>>>>
>>>>>>>
>>>>>>>Hi Hongzhan,
>>>>>>>
>>>>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>>>
>>>>>>>> Hi Philippe
>>>>>>>>
>>>>>>>> When I was trying to port latmus from EVL to Xenomai 3.2, I met several issues that block the porting,
>>>>>>>> and I need your suggestions.
>>>>>>>>
>>>>>>>> 1. When I tried to replace the function evl_run_kthread_on_cpu in the latmus.c driver, I found that rtdm_task_init
>>>>>>>>    mostly meets our requirements, but we still cannot pass a CPU affinity through it to pin the task to the
>>>>>>>>    required CPU. Do we need to implement a new API that lets us pass a CPU affinity to pin the task to the
>>>>>>>>    required CPU while keeping all the functionality of rtdm_task_init?
>>>>>>>>
>>>>>>>
>>>>>>>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>>>>>>>is a desirable feature which should be part of the CXP. Other ways to
>>>>>>>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>>>>>>>to the destination CPU before creating the child thread, or open coding
>>>>>>>the spawning sequence based on the internal interface (xnthread_init,
>>>>>>>xnthread_start). Please submit a patch for review of that change
>>>>>>>specifically, prior to submitting any latmus-related bits.
>>>>>>>
>>>>>>
>>>>>> OK.  I have finished porting the latmus driver so far and built it successfully with Linux.
>>>>>> Next, I will start porting the latmus application. Once the latmus application is done,
>>>>>> I will validate all of them, and then submit the patches for review once validation
>>>>>> succeeds.
>>>>>>
>>>>>
>>>>>With respect to the timer responder test, the latmus application is
>>>>>based on EVL's built-in timerfd [1] feature, which is very close to the
>>>>>Cobalt/POSIX equivalent, so the port should be straightforward.
>>>>>
>>>>>Things may be a little trickier with the GPIO responder test, as Cobalt
>>>>>needs a specific RTDM driver to operate the GPIO lines (EVL reuses the
>>>>>common GPIOLIB for this [2], so do not look for any specific driver
>>>>>here). It depends on the GPIO controller you will test on. You will
>>>>>certainly need to add support for it to kernel/drivers/gpio.
>>>>>
>>>>>Which hardware do you plan to use?
>>>>
>>>> Currently, I am working on an UP Xtreme Lite board, which is based on
>>>> Intel Whiskey Lake.  Yes, after further investigation, I will need to add a new GPIO controller RTDM driver
>>>> under kernel/drivers/gpio for my board; thanks
>>>> for your gentle reminder. 
>>>>
>>>> I have almost finished porting the latmus application and validated that the latmus driver is 
>>>> working, but I still have not received a Freedom-K64F so far, so the GPIO test
>>>> environment cannot be set up on short notice for lack of hardware on my side.
>>>>
>>>
>>>There is also the option of making benchmarks/zephyr/latmon a Xenomai
>>>application, which would act as the latency monitor running on a
>>>separate Linux board. Xenomai would then be helping to test Xenomai,
>>>which might not look optimal at first glance; however, this should be
>>>fine nevertheless, provided that such a monitoring board runs a
>>>known-stable I-pipe configuration.
>>>
>>
>> One more question: I saw that an RTDM GPIO driver can currently call rtdm_gpiochip_scan_of
>> to do its init, but rtdm_gpiochip_scan_of calls of_find_compatible_node
>> to find the device node. Does this work for devices registered through the ACPI tables?
>> If not, does that mean we need to add a new API implementing the analogous function for
>> GPIO pinctrl devices registered by ACPI?
>>
>
>gpiochip_find() is available to non-OF systems as well; what gets
>enumerated by the platform code ends up being registered in the gpiolib
>device list.
>
>The idea is to match by name the type of controller which is expected on
>your platform with the gpio chips enumerated by gpiochip_find(). You may
>want to add some generic helper to gpio-core.c doing that for non-OF
>platforms, which you would call from additional controller-specific RTDM
>modules (those which would define more GPIO device subclasses,
>i.e. RTDM_SUBCLASS_xx).
>
>-- 
>Philippe.

Thanks for your suggestions. I have finished implementing the generic helper in gpio-core.c and
have also developed a new GPIO RTDM driver for my board. After building successfully and running Xenomai 
on my board, both the latmus and GPIO chip device files can be found under the /dev/rtdm folder.
But when I try to run /usr/bin/latmus, it fails with the error "measurement setup failed: Resource
temporarily unavailable". After further debugging, I found that the ioctl_rt handler of the
latmus driver is wrongly called. I saw that libevl has an explicit oob_ioctl for calling the real-time
handler, but how does Cobalt differentiate nrt and rt when the same ioctl is called?

Regards

Hongzhan Chen


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: several questions about porting latmus
  2021-02-09  8:42               ` Chen, Hongzhan
@ 2021-02-10  6:46                 ` Philippe Gerum
  2021-02-10  6:57                   ` Chen, Hongzhan
  0 siblings, 1 reply; 11+ messages in thread
From: Philippe Gerum @ 2021-02-10  6:46 UTC (permalink / raw)
  To: Chen, Hongzhan; +Cc: xenomai


Chen, Hongzhan <hongzhan.chen@intel.com> writes:

>>> Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>
>>>>> -----Original Message-----
>>>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>>>Sent: Monday, February 8, 2021 12:21 AM
>>>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>>>Cc: xenomai@xenomai.org
>>>>>>Subject: Re: several questions about porting latmus
>>>>>>
>>>>>>
>>>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>>
>>>>>>>>-----Original Message-----
>>>>>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>>>>>Sent: Monday, February 1, 2021 5:31 PM
>>>>>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>>>>>Cc: xenomai@xenomai.org
>>>>>>>>Subject: Re: several questions about porting latmus
>>>>>>>>
>>>>>>>>
>>>>>>>>Hi Hongzhan,
>>>>>>>>
>>>>>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>>>>
>>>>>>>>> Hi Philippe
>>>>>>>>>
>>>>>>>>> When I was trying to port latmus from EVL to Xenomai 3.2, I met several issues that block the porting,
>>>>>>>>> and I need your suggestions.
>>>>>>>>>
>>>>>>>>> 1. When I tried to replace the function evl_run_kthread_on_cpu in the latmus.c driver, I found that rtdm_task_init
>>>>>>>>>    mostly meets our requirements, but we still cannot pass a CPU affinity through it to pin the task to the
>>>>>>>>>    required CPU. Do we need to implement a new API that lets us pass a CPU affinity to pin the task to the
>>>>>>>>>    required CPU while keeping all the functionality of rtdm_task_init?
>>>>>>>>>
>>>>>>>>
>>>>>>>>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>>>>>>>>is a desirable feature which should be part of the CXP. Other ways to
>>>>>>>>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>>>>>>>>to the destination CPU before creating the child thread, or open coding
>>>>>>>>the spawning sequence based on the internal interface (xnthread_init,
>>>>>>>>xnthread_start). Please submit a patch for review of that change
>>>>>>>>specifically, prior to submitting any latmus-related bits.
>>>>>>>>
>>>>>>>
>>>>>>> OK.  I have finished porting the latmus driver so far and built it successfully with Linux.
>>>>>>> Next, I will start porting the latmus application. Once the latmus application is done,
>>>>>>> I will validate all of them, and then submit the patches for review once validation
>>>>>>> succeeds.
>>>>>>>
>>>>>>
>>>>>>With respect to the timer responder test, the latmus application is
>>>>>>based on EVL's built-in timerfd [1] feature, which is very close to the
>>>>>>Cobalt/POSIX equivalent, so the port should be straightforward.
>>>>>>
>>>>>>Things may be a little trickier with the GPIO responder test, as Cobalt
>>>>>>needs a specific RTDM driver to operate the GPIO lines (EVL reuses the
>>>>>>common GPIOLIB for this [2], so do not look for any specific driver
>>>>>>here). It depends on the GPIO controller you will test on. You will
>>>>>>certainly need to add support for it to kernel/drivers/gpio.
>>>>>>
>>>>>>Which hardware do you plan to use?
>>>>>
>>>>> Currently, I am working on an UP Xtreme Lite board, which is based on
>>>>> Intel Whiskey Lake.  Yes, after further investigation, I will need to add a new GPIO controller RTDM driver
>>>>> under kernel/drivers/gpio for my board; thanks
>>>>> for your gentle reminder. 
>>>>>
>>>>> I have almost finished porting the latmus application and validated that the latmus driver is 
>>>>> working, but I still have not received a Freedom-K64F so far, so the GPIO test
>>>>> environment cannot be set up on short notice for lack of hardware on my side.
>>>>>
>>>>
>>>>There is also the option of making benchmarks/zephyr/latmon a Xenomai
>>>>application, which would act as the latency monitor running on a
>>>>separate Linux board. Xenomai would then be helping to test Xenomai,
>>>>which might not look optimal at first glance; however, this should be
>>>>fine nevertheless, provided that such a monitoring board runs a
>>>>known-stable I-pipe configuration.
>>>>
>>>
>>> One more question: I saw that an RTDM GPIO driver can currently call rtdm_gpiochip_scan_of
>>> to do its init, but rtdm_gpiochip_scan_of calls of_find_compatible_node
>>> to find the device node. Does this work for devices registered through the ACPI tables?
>>> If not, does that mean we need to add a new API implementing the analogous function for
>>> GPIO pinctrl devices registered by ACPI?
>>>
>>
>>gpiochip_find() is available to non-OF systems as well; what gets
>>enumerated by the platform code ends up being registered in the gpiolib
>>device list.
>>
>>The idea is to match by name the type of controller which is expected on
>>your platform with the gpio chips enumerated by gpiochip_find(). You may
>>want to add some generic helper to gpio-core.c doing that for non-OF
>>platforms, which you would call from additional controller-specific RTDM
>>modules (those which would define more GPIO device subclasses,
>>i.e. RTDM_SUBCLASS_xx).
>>
>>-- 
>>Philippe.
>
> Thanks  for your suggestions. I have finished implementing generic helper in gpio-core.c and
> also developed new gpio rtdm driver for my board. After build successfully and run xenomai 
> on my board, bother latmus and gpio chip device file can be found under /dev/rtdm folder.
> But when I try to run /usr/bin/latmus , it returned with error "measurement setup failed: Resource
> Temporarily unavailable". After further debug, I found that actually it wrongly call ioctl_rt handler of
> Latmus driver. I saw that libevl have explicit oob_ioctl for calling such realtime handler but how 
> cobalt differentiate nrt and rt when call same ioctl?
>

The driver itself differentiates this, not Cobalt per se. For Xenomai
tasks, ioctl requests from libcobalt are always passed to the ioctl_rt
handler first.

When this handler detects that such a request should be processed by the
converse ioctl_nrt handler instead, it should simply return -ENOSYS
to the caller.

As a result, Cobalt would switch the current task to secondary mode,
then call the ioctl_nrt handler with the same request code and arg.
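
A driver-side sketch of that dispatch pattern; the request codes and
helpers (RTLATMUS_RTIOC_*, handle_*) are hypothetical, only the -ENOSYS
fallback convention is the documented Cobalt behavior:

```c
#include <rtdm/driver.h>

/* Sketch only: serve the time-critical requests from primary mode and
 * punt everything else to the ioctl_nrt handler. */
static int latmus_ioctl_rt(struct rtdm_fd *fd, unsigned int request,
			   void __user *arg)
{
	switch (request) {
	case RTLATMUS_RTIOC_PULSE:	/* hypothetical rt-capable request */
		return handle_pulse(fd, arg);
	default:
		/*
		 * Not rt-capable: returning -ENOSYS makes Cobalt switch
		 * the caller to secondary mode, then redo the call via
		 * the ioctl_nrt handler below.
		 */
		return -ENOSYS;
	}
}

static int latmus_ioctl_nrt(struct rtdm_fd *fd, unsigned int request,
			    void __user *arg)
{
	switch (request) {
	case RTLATMUS_RTIOC_SETUP:	/* hypothetical setup request, runs in-band */
		return handle_setup(fd, arg);
	default:
		return -EINVAL;
	}
}
```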

-- 
Philippe.


^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: several questions about porting latmus
  2021-02-10  6:46                 ` Philippe Gerum
@ 2021-02-10  6:57                   ` Chen, Hongzhan
  0 siblings, 0 replies; 11+ messages in thread
From: Chen, Hongzhan @ 2021-02-10  6:57 UTC (permalink / raw)
  To: Philippe Gerum; +Cc: xenomai


>>>> Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>
>>>>>> -----Original Message-----
>>>>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>>>>Sent: Monday, February 8, 2021 12:21 AM
>>>>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>>>>Cc: xenomai@xenomai.org
>>>>>>>Subject: Re: several questions about porting latmus
>>>>>>>
>>>>>>>
>>>>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>>>
>>>>>>>>>-----Original Message-----
>>>>>>>>>From: Philippe Gerum <rpm@xenomai.org> 
>>>>>>>>>Sent: Monday, February 1, 2021 5:31 PM
>>>>>>>>>To: Chen, Hongzhan <hongzhan.chen@intel.com>
>>>>>>>>>Cc: xenomai@xenomai.org
>>>>>>>>>Subject: Re: several questions about porting latmus
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Hi Hongzhan,
>>>>>>>>>
>>>>>>>>>Chen, Hongzhan <hongzhan.chen@intel.com> writes:
>>>>>>>>>
>>>>>>>>>> Hi Philippe
>>>>>>>>>>
>>>>>>>>>> When I was trying to port latmus from EVL to Xenomai 3.2, I met several issues that block the porting,
>>>>>>>>>> and I need your suggestions.
>>>>>>>>>>
>>>>>>>>>> 1. When I tried to replace the function evl_run_kthread_on_cpu in the latmus.c driver, I found that rtdm_task_init
>>>>>>>>>>    mostly meets our requirements, but we still cannot pass a CPU affinity through it to pin the task to the
>>>>>>>>>>    required CPU. Do we need to implement a new API that lets us pass a CPU affinity to pin the task to the
>>>>>>>>>>    required CPU while keeping all the functionality of rtdm_task_init?
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>We should probably introduce rtdm_task_init_on_cpu() in 3.2, since this
>>>>>>>>>is a desirable feature which should be part of the CXP. Other ways to
>>>>>>>>>pin the new kthread are fairly ugly ATM, ranging from pinning the parent
>>>>>>>>>to the destination CPU before creating the child thread, or open coding
>>>>>>>>>the spawning sequence based on the internal interface (xnthread_init,
>>>>>>>>>xnthread_start). Please submit a patch for review of that change
>>>>>>>>>specifically, prior to submitting any latmus-related bits.
>>>>>>>>>
>>>>>>>>
>>>>>>>> OK.  I have finished porting the latmus driver so far and built it successfully with Linux.
>>>>>>>> Next, I will start porting the latmus application. Once the latmus application is done,
>>>>>>>> I will validate all of them, and then submit the patches for review once validation
>>>>>>>> succeeds.
>>>>>>>>
>>>>>>>
>>>>>>>With respect to the timer responder test, the latmus application is
>>>>>>>based on EVL's built-in timerfd [1] feature, which is very close to the
>>>>>>>Cobalt/POSIX equivalent, so the port should be straightforward.
>>>>>>>
>>>>>>>Things may be a little trickier with the GPIO responder test, as Cobalt
>>>>>>>needs a specific RTDM driver to operate the GPIO lines (EVL reuses the
>>>>>>>common GPIOLIB for this [2], so do not look for any specific driver
>>>>>>>here). It depends on the GPIO controller you will test on. You will
>>>>>>>certainly need to add support for it to kernel/drivers/gpio.
>>>>>>>
>>>>>>>Which hardware do you plan to use?
>>>>>>
>>>>>> Currently, I am working on an UP Xtreme Lite board, which is based on
>>>>>> Intel Whiskey Lake.  Yes, after further investigation, I will need to add a new GPIO controller RTDM driver
>>>>>> under kernel/drivers/gpio for my board; thanks
>>>>>> for your gentle reminder. 
>>>>>>
>>>>>> I have almost finished porting the latmus application and validated that the latmus driver is 
>>>>>> working, but I still have not received a Freedom-K64F so far, so the GPIO test
>>>>>> environment cannot be set up on short notice for lack of hardware on my side.
>>>>>>
>>>>>
>>>>>There is also the option of making benchmarks/zephyr/latmon a Xenomai
>>>>>application, which would act as the latency monitor running on a
>>>>>separate Linux board. Xenomai would then be helping to test Xenomai,
>>>>>which might not look optimal at first glance; however, this should be
>>>>>fine nevertheless, provided that such a monitoring board runs a
>>>>>known-stable I-pipe configuration.
>>>>>
>>>>
>>>> One more question: I saw that an RTDM GPIO driver can currently call rtdm_gpiochip_scan_of
>>>> to do its init, but rtdm_gpiochip_scan_of calls of_find_compatible_node
>>>> to find the device node. Does this work for devices registered through the ACPI tables?
>>>> If not, does that mean we need to add a new API implementing the analogous function for
>>>> GPIO pinctrl devices registered by ACPI?
>>>>
>>>
>>>gpiochip_find() is available to non-OF systems as well; what gets
>>>enumerated by the platform code ends up being registered in the gpiolib
>>>device list.
>>>
>>>The idea is to match by name the type of controller which is expected on
>>>your platform with the gpio chips enumerated by gpiochip_find(). You may
>>>want to add some generic helper to gpio-core.c doing that for non-OF
>>>platforms, which you would call from additional controller-specific RTDM
>>>modules (those which would define more GPIO device subclasses,
>>>i.e. RTDM_SUBCLASS_xx).
>>>
>>>-- 
>>>Philippe.
>>
>> Thanks for your suggestions. I have finished implementing the generic helper in gpio-core.c and
>> have also developed a new GPIO RTDM driver for my board. After building successfully and running Xenomai 
>> on my board, both the latmus and GPIO chip device files can be found under the /dev/rtdm folder.
>> But when I try to run /usr/bin/latmus, it fails with the error "measurement setup failed: Resource
>> temporarily unavailable". After further debugging, I found that the ioctl_rt handler of the
>> latmus driver is wrongly called. I saw that libevl has an explicit oob_ioctl for calling the real-time
>> handler, but how does Cobalt differentiate nrt and rt when the same ioctl is called?
>>
>
>The driver itself differentiates this, not Cobalt per se. For Xenomai
>tasks, ioctl requests from libcobalt are always passed to the ioctl_rt
>handler first.
>
>When this handler detects that such a request should be processed by the
>converse ioctl_nrt handler instead, it should simply return -ENOSYS
>to the caller.
>
>As a result, Cobalt would switch the current task to secondary mode,
>then call the ioctl_nrt handler with the same request code and arg.
>

Thanks. I already found a reply by Jan in an earlier email answering a similar question:
 https://xenomai.org/pipermail/xenomai/2019-September/041664.html
I was trying that fix. There is something wrong with my latmus driver after porting it from EVL, and it needs
to be adjusted accordingly.

>-- 
>Philippe.


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2021-02-10  6:57 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-01  4:36 several questions about porting latmus Chen, Hongzhan
2021-02-01  9:31 ` Philippe Gerum
2021-02-05  1:47   ` Chen, Hongzhan
2021-02-07 16:20     ` Philippe Gerum
2021-02-08  6:36       ` Chen, Hongzhan
2021-02-08  8:17         ` Philippe Gerum
2021-02-08 12:39           ` Chen, Hongzhan
2021-02-08 18:39             ` Philippe Gerum
2021-02-09  8:42               ` Chen, Hongzhan
2021-02-10  6:46                 ` Philippe Gerum
2021-02-10  6:57                   ` Chen, Hongzhan
