* [Xenomai-core] Enhanced RTDM device closure
From: Jan Kiszka @ 2007-02-21  8:43 UTC
  To: xenomai-core

Hi,

a few changes to the RTDM layer were committed to trunk recently. They
make handling of RTDM file descriptors more convenient:

 o rt_dev_close/POSIX-close now polls as long as the underlying device
   reports -EAGAIN, so no more looping inside the application is
   required. This applies to the usual non-RT invocation of close; the
   corner case "close from RT context" can still return -EAGAIN.

 o Automatic cleanup of open file descriptors has been implemented. This
   is not yet the perfect design (*), but a straightforward approach to
   ease the cleanup after application crashes or other unexpected
   terminations.

The code is still young, so testers are welcome.
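
To illustrate the first point, here is the kind of retry loop that
non-RT application code needed so far and can now drop. This is just a
sketch: rt_dev_close() is the real RTDM call, but the back-off delay is
an arbitrary value.

    #include <errno.h>
    #include <unistd.h>
    #include <rtdm/rtdm.h>

    /* What applications used to wrap around close; no longer needed
     * in non-RT context. */
    static int close_robustly(int fd)
    {
            int err;

            while ((err = rt_dev_close(fd)) == -EAGAIN)
                    usleep(100000); /* arbitrary 100 ms back-off */

            return err;
    }

A caller closing from RT context still has to handle -EAGAIN itself,
e.g. by deferring the retry to a non-RT stage.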

Jan


(*) Actually, I would like to see generic per-process file descriptor
tables one day, used by both the POSIX and the RTDM skin. The FD table
should be obtained via xnshadow_ppd_get(). But first this requires
lock-less xnshadow_ppd_get() based on ipipe_get_ptd() to keep the
overhead limited. Yet another story.



* Re: [Xenomai-core] Enhanced RTDM device closure
From: Gilles Chanteperdrix @ 2007-02-21  8:56 UTC
  To: Jan Kiszka; +Cc: xenomai-core

Jan Kiszka wrote:
> Hi,
> 
> a few changes to the RTDM layer were committed to trunk recently. They
> make handling of RTDM file descriptors more convenient:
> 
>  o rt_dev_close/POSIX-close now polls as long as the underlying device
>    reports -EAGAIN, so no more looping inside the application is
>    required. This applies to the usual non-RT invocation of close; the
>    corner case "close from RT context" can still return -EAGAIN.
> 
>  o Automatic cleanup of open file descriptors has been implemented. This
>    is not yet the perfect design (*), but a straightforward approach to
>    ease the cleanup after application crashes or other unexpected
>    terminations.
> 
> The code is still young, so testers are welcome.
> 
> Jan
> 
> 
> (*) Actually, I would like to see generic per-process file descriptor
> tables one day, used by both the POSIX and the RTDM skin. The FD table
> should be obtained via xnshadow_ppd_get().

I agree about the file descriptor table, but I do not see why it should
be bound to xnshadow_ppd_get. The file descriptor table could be
implemented in an object-like fashion, where the caller is responsible
for passing the same pointer to the creation, use and destruction
routines. This would allow, for example, having a descriptor table for
kernel-space threads. Another feature that would be interesting for the
POSIX skin would be a callback invoked at process fork time in order to
duplicate the fd table.
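
Something along these lines, as a rough sketch (none of these names
exist anywhere yet; they only illustrate that the caller owns the
table pointer):

    /* Hypothetical interface: the caller passes the same table pointer
     * to the creation, use and destruction routines. */
    struct fdtable;

    struct fdtable *fdtable_create(int nslots);
    void fdtable_destroy(struct fdtable *table);

    /* Bind 'context' to the lowest free slot, return the fd or -errno. */
    int fdtable_insert(struct fdtable *table, void *context);
    void *fdtable_lookup(struct fdtable *table, int fd);
    int fdtable_remove(struct fdtable *table, int fd);

    /* Fork-time hook: duplicate the parent's table for the child. */
    struct fdtable *fdtable_dup(const struct fdtable *parent);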


> But first this requires
> lock-less xnshadow_ppd_get() based on ipipe_get_ptd() to keep the
> overhead limited. Yet another story.

xnshadow_ppd_get is already lockless; usual callers have to hold the
nklock for other reasons anyway.

-- 
                                                 Gilles Chanteperdrix



* Re: [Xenomai-core] Enhanced RTDM device closure
From: Jan Kiszka @ 2007-02-21  9:11 UTC
  To: Gilles Chanteperdrix; +Cc: xenomai-core

Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> Hi,
>>
>> a few changes to the RTDM layer were committed to trunk recently. They
>> make handling of RTDM file descriptors more convenient:
>>
>>  o rt_dev_close/POSIX-close now polls as long as the underlying device
>>    reports -EAGAIN, so no more looping inside the application is
>>    required. This applies to the usual non-RT invocation of close; the
>>    corner case "close from RT context" can still return -EAGAIN.
>>
>>  o Automatic cleanup of open file descriptors has been implemented. This
>>    is not yet the perfect design (*), but a straightforward approach to
>>    ease the cleanup after application crashes or other unexpected
>>    terminations.
>>
>> The code is still young, so testers are welcome.
>>
>> Jan
>>
>>
>> (*) Actually, I would like to see generic per-process file descriptor
>> tables one day, used by both the POSIX and the RTDM skin. The FD table
>> should be obtained via xnshadow_ppd_get().
> 
> I agree about the file descriptor table, but I do not see why it should
> be bound to xnshadow_ppd_get. The file descriptor table could be
> implemented in an object-like fashion, where the caller is responsible
> for passing the same pointer to the creation, use and destruction
> routines.

But where do I get this pointer from when I enter, say, rtdm_ioctl on
behalf of some process? The caller just passes an integer, the file
descriptor.

> This would allow, for example, having a descriptor table for
> kernel-space threads. Another feature that would be interesting for the

I don't see the need to offer kernel threads private fd tables. They can
perfectly well continue to use a common table, which would then be
kernel-only. There are too few of those threads, and there is no clear
concept of a process boundary in kernel space.

> POSIX skin would be a callback invoked at process fork time in order
> to duplicate the fd table.

Ack. IIRC, this callback could also serve to solve the only consistency
issue of the ipipe_get_ptd() approach.

> 
> 
>> But first this requires
>> lock-less xnshadow_ppd_get() based on ipipe_get_ptd() to keep the
>> overhead limited. Yet another story.
> 
> xnshadow_ppd_get is already lockless; usual callers have to hold the
> nklock for other reasons anyway.
> 

OK, depends on the POV :). Mine is that the related RTDM services do not
hold nklock and will never have to. Moreover, there is no need for
locking design-wise, because per-process data cannot vanish under the
caller unless the caller vanishes. The need currently only comes from
the hashing-based lookup (reminds me of the WCET issues kernel futexes
have...).

Jan



* Re: [Xenomai-core] Enhanced RTDM device closure
From: Gilles Chanteperdrix @ 2007-02-21  9:40 UTC
  To: Jan Kiszka; +Cc: xenomai-core

Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
> 
>>Jan Kiszka wrote:
>>
>>>Hi,
>>>
>>>a few changes to the RTDM layer were committed to trunk recently. They
>>>make handling of RTDM file descriptors more convenient:
>>>
>>> o rt_dev_close/POSIX-close now polls as long as the underlying device
>>>   reports -EAGAIN, so no more looping inside the application is
>>>   required. This applies to the usual non-RT invocation of close; the
>>>   corner case "close from RT context" can still return -EAGAIN.
>>>
>>> o Automatic cleanup of open file descriptors has been implemented. This
>>>   is not yet the perfect design (*), but a straightforward approach to
>>>   ease the cleanup after application crashes or other unexpected
>>>   terminations.
>>>
>>>The code is still young, so testers are welcome.
>>>
>>>Jan
>>>
>>>
>>>(*) Actually, I would like to see generic per-process file descriptor
>>>tables one day, used by both the POSIX and the RTDM skin. The FD table
>>>should be obtained via xnshadow_ppd_get().
>>
>>I agree about the file descriptor table, but I do not see why it should
>>be bound to xnshadow_ppd_get. The file descriptor table could be
>>implemented in an object-like fashion, where the caller is responsible
>>for passing the same pointer to the creation, use and destruction
>>routines.
> 
> 
> But where do I get this pointer from when I enter, say, rtdm_ioctl on
> behalf of some process? The caller just passes an integer, the file
> descriptor.

Yes, the pointer would be obtained via xnshadow_ppd_get, but this does
not have to be built into the nucleus; it can be done by the skins.
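
For instance, reusing the fdtable sketch from my previous mail (again
hypothetical: only xnshadow_ppd_get() is real, RTDM_MUXID and struct
rtdm_process are made-up names):

    /* Kernel-side sketch of a skin resolving its per-process fd table. */
    struct rtdm_process {
            xnshadow_ppd_t ppd;     /* per-process data, first member */
            struct fdtable *fds;    /* the skin's private fd table */
    };

    static struct fdtable *rtdm_process_fds(void)
    {
            xnshadow_ppd_t *ppd = xnshadow_ppd_get(RTDM_MUXID);

            if (ppd == NULL)
                    return NULL;    /* caller not bound to the skin */

            return ((struct rtdm_process *)ppd)->fds;
    }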

> 
> 
>>This would allow, for example, having a descriptor table for
>>kernel-space threads. Another feature that would be interesting for the
> 
> 
> I don't see the need to offer kernel threads private fd tables. They can
> perfectly well continue to use a common table, which would then be
> kernel-only. There are too few of those threads, and there is no clear
> concept of a process boundary in kernel space.

I mean having one descriptor table for the kernel space as a whole, but
the kernel space descriptor table does not have to be of a different
type from the user-space descriptor tables.

> 
> 
>>POSIX skin would be a callback invoked at process fork time in order
>>to duplicate the fd table.
> 
> 
> Ack. IIRC, this callback could also serve to solve the only consistency
> issue of the ipipe_get_ptd() approach.
> 
> 
>>
>>> But first this requires
>>
>>>lock-less xnshadow_ppd_get() based on ipipe_get_ptd() to keep the
>>>overhead limited. Yet another story.
>>
>>xnshadow_ppd_get is already lockless; usual callers have to hold the
>>nklock for other reasons anyway.
>>
> 
> 
> OK, depends on the POV :). Mine is that the related RTDM services do not
> hold nklock and will never have to. Moreover, there is no need for
> locking design-wise, because per-process data cannot vanish under the
> caller unless the caller vanishes. The need currently only comes from
> the hashing-based lookup (reminds me of the WCET issues kernel futexes
> have...).

I have to take a closer look at the code. But you are right: since the
ppd cannot vanish under our feet, maybe it is possible to call
xnshadow_ppd_get without holding the nklock at all. We "only" have to
assume that the list manipulation routines never leave the list in an
inconsistent state.

Something else that I would like is that the fd table be bound to the
nucleus registry. This would allow factoring out the registry
implementation.

-- 
                                                 Gilles Chanteperdrix



* Re: [Xenomai-core] Enhanced RTDM device closure
From: Jan Kiszka @ 2007-02-21  9:57 UTC
  To: Gilles Chanteperdrix; +Cc: xenomai-core

Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> Gilles Chanteperdrix wrote:
>>
>>> Jan Kiszka wrote:
>>>
>>>> Hi,
>>>>
>>>> a few changes to the RTDM layer were committed to trunk recently. They
>>>> make handling of RTDM file descriptors more convenient:
>>>>
>>>> o rt_dev_close/POSIX-close now polls as long as the underlying device
>>>>   reports -EAGAIN, so no more looping inside the application is
>>>>   required. This applies to the usual non-RT invocation of close; the
>>>>   corner case "close from RT context" can still return -EAGAIN.
>>>>
>>>> o Automatic cleanup of open file descriptors has been implemented. This
>>>>   is not yet the perfect design (*), but a straightforward approach to
>>>>   ease the cleanup after application crashes or other unexpected
>>>>   terminations.
>>>>
>>>> The code is still young, so testers are welcome.
>>>>
>>>> Jan
>>>>
>>>>
>>>> (*) Actually, I would like to see generic per-process file descriptor
>>>> tables one day, used by both the POSIX and the RTDM skin. The FD table
>>>> should be obtained via xnshadow_ppd_get().
>>> I agree about the file descriptor table, but I do not see why it
>>> should be bound to xnshadow_ppd_get. The file descriptor table could
>>> be implemented in an object-like fashion, where the caller is
>>> responsible for passing the same pointer to the creation, use and
>>> destruction routines.
>>
>> But where do I get this pointer from when I enter, say, rtdm_ioctl on
>> behalf of some process? The caller just passes an integer, the file
>> descriptor.
> 
> Yes, the pointer would be obtained via xnshadow_ppd_get, but this does
> not have to be built into the nucleus; it can be done by the skins.
> 
>>
>>> This would allow, for example, having a descriptor table for
>>> kernel-space threads. Another feature that would be interesting for the
>>
>> I don't see the need to offer kernel threads private fd tables. They can
>> perfectly well continue to use a common table, which would then be
>> kernel-only. There are too few of those threads, and there is no clear
>> concept of a process boundary in kernel space.
> 
> I mean having one descriptor table for the kernel space as a whole, but
> the kernel space descriptor table does not have to be of a different
> type from the user-space descriptor tables.
> 
>>
>>> POSIX skin would be a callback invoked at process fork time in order
>>> to duplicate the fd table.
>>
>> Ack. IIRC, this callback could also serve to solve the only consistency
>> issue of the ipipe_get_ptd() approach.
>>
>>
>>>> But first this requires
>>>
>>>> lock-less xnshadow_ppd_get() based on ipipe_get_ptd() to keep the
>>>> overhead limited. Yet another story.
>>> xnshadow_ppd_get is already lockless; usual callers have to hold the
>>> nklock for other reasons anyway.
>>>
>>
>> OK, depends on the POV :). Mine is that the related RTDM services do not
>> hold nklock and will never have to. Moreover, there is no need for
>> locking design-wise, because per-process data cannot vanish under the
>> caller unless the caller vanishes. The need currently only comes from
>> the hashing-based lookup (reminds me of the WCET issues kernel futexes
>> have...).
> 
> I have to take a closer look at the code. But you are right: since the
> ppd cannot vanish under our feet, maybe it is possible to call
> xnshadow_ppd_get without holding the nklock at all. We "only" have to
> assume that the list manipulation routines never leave the list in an
> inconsistent state.

As long as process A's ppd can end up in the same list as process B's,
you need locking (or RCU :-/). That's my point about the hash-chain
approach.

I can only advertise again the idea of maintaining the ppd pointers as
an I-pipe task_struct key. On fork/clone, you just have to make sure
that the child either gets a copy of the parent's pointer when it will
share the mm, or its key is NULL'ified, or automatic Xenomai skin
binding is triggered to generate a new ppd.
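
As a sketch of what the lookup could then boil down to (ipipe_get_ptd()
exists in the I-pipe patch; XNSHADOW_PTD_KEY and the per-muxid array
layout are assumptions of mine):

    /* Kernel-side sketch: one ptd slot holds a per-process array of
     * ppd pointers, indexed by muxid. */
    static inline xnshadow_ppd_t *xnshadow_ppd_get_fast(unsigned muxid)
    {
            xnshadow_ppd_t **ppds = ipipe_get_ptd(XNSHADOW_PTD_KEY);

            if (ppds == NULL)
                    return NULL;    /* process not bound to any skin */

            return ppds[muxid];     /* O(1): no hash walk, no nklock */
    }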

> 
> Something else that I would like is that the fd table be bound to the
> nucleus registry. This would allow factoring out the registry
> implementation.
> 

Ack, that's what I had in mind as well. We need to make this fd table
stuff a generic service, maybe even the foundation of any object
descriptor in user-space.

Jan



* Re: [Xenomai-core] Enhanced RTDM device closure
From: Gilles Chanteperdrix @ 2007-02-21 10:29 UTC
  To: Jan Kiszka; +Cc: xenomai-core

Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>>I have to take a closer look at the code. But you are right: since the
>>ppd cannot vanish under our feet, maybe it is possible to call
>>xnshadow_ppd_get without holding the nklock at all. We "only" have to
>>assume that the list manipulation routines never leave the list in an
>>inconsistent state.
> 
> 
> As long as process A's ppd can end up in the same list as process B's,
> you need locking (or RCU :-/). That's my point about the hash-chain
> approach.
> 
> I can only advertise again the idea of maintaining the ppd pointers as
> an I-pipe task_struct key. On fork/clone, you just have to make sure
> that the child either gets a copy of the parent's pointer when it will
> share the mm, or its key is NULL'ified, or automatic Xenomai skin
> binding is triggered to generate a new ppd.

I agree with the idea of the ptd. Nevertheless, I think it is possible
to access an xnqueue in a lockless fashion. Concurrent insertions and
deletions only matter if they take place before (in list order) the
target. When we are walking the list, only the "next" pointers matter.
Now, if we look at the "next" pointers in the insertion routine, we see:

   holder->next = head->next;
   head->next = holder;

So, maybe we just need to add a compiler barrier, but it looks like we
can never see a wrong pointer when walking the list.
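
Spelled out as a sketch, with smp_wmb() instead of a plain compiler
barrier, since whether the latter suffices is architecture-dependent:

    /* Writer side: order the two stores so that a lockless walker can
     * never see 'holder' linked in before holder->next is valid. */
    static inline void insertq_head_lockless(xnqueue_t *queue,
                                             xnholder_t *holder)
    {
            holder->next = queue->head.next;
            smp_wmb();  /* publish holder->next before linking holder */
            queue->head.next = holder;
    }

    /* Reader side, walking without holding nklock: */
    xnholder_t *h;

    for (h = queue->head.next; h != &queue->head; h = h->next)
            ;   /* h->next was set before h became reachable */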

-- 
                                                 Gilles Chanteperdrix



* Re: [Xenomai-core] Enhanced RTDM device closure
From: Jan Kiszka @ 2007-02-21 10:48 UTC
  To: Gilles Chanteperdrix; +Cc: xenomai-core

Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> Gilles Chanteperdrix wrote:
>>> I have to take a closer look at the code. But you are right: since the
>>> ppd cannot vanish under our feet, maybe it is possible to call
>>> xnshadow_ppd_get without holding the nklock at all. We "only" have to
>>> assume that the list manipulation routines never leave the list in an
>>> inconsistent state.
>>
>> As long as process A's ppd can end up in the same list as process B's,
>> you need locking (or RCU :-/). That's my point about the hash-chain
>> approach.
>>
>> I can only advertise again the idea of maintaining the ppd pointers as
>> an I-pipe task_struct key. On fork/clone, you just have to make sure
>> that the child either gets a copy of the parent's pointer when it will
>> share the mm, or its key is NULL'ified, or automatic Xenomai skin
>> binding is triggered to generate a new ppd.
> 
> I agree with the idea of the ptd. Nevertheless, I think it is possible
> to access an xnqueue in a lockless fashion. Concurrent insertions and
> deletions only matter if they take place before (in list order) the
> target. When we are walking the list, only the "next" pointers matter.
> Now, if we look at the "next" pointers in the insertion routine, we see:
> 
>    holder->next = head->next;
>    head->next = holder;
> 
> So, maybe we just need to add a compiler barrier, but it looks like we
> can never see a wrong pointer when walking the list.
> 

But not having to walk any chain at all, even a lock-less one, also
saves us potential cache misses on accessing those memory chunks... :)



* Re: [Xenomai-core] Enhanced RTDM device closure
From: Jan Kiszka @ 2007-02-25 15:36 UTC
  To: xenomai-core

Jan Kiszka wrote:
> Hi,
> 
> a few changes to the RTDM layer were committed to trunk recently. They
> make handling of RTDM file descriptors more convenient:
> 
>  o rt_dev_close/POSIX-close now polls as long as the underlying device
>    reports -EAGAIN, so no more looping inside the application is
>    required. This applies to the usual non-RT invocation of close; the
>    corner case "close from RT context" can still return -EAGAIN.
> 
>  o Automatic cleanup of open file descriptors has been implemented. This
>    is not yet the perfect design (*), but a straightforward approach to
>    ease the cleanup after application crashes or other unexpected
>    terminations.

 o Report file descriptor owner via /proc:

   # cat /proc/xenomai/rtdm/open_fildes
   Index   Locked  Device          Owner [PID]
   0       0       rttest0         latency [973]
   1       0       rtser0          cross-link [981]
   2       0       rtser1          cross-link [981]

Jan


