* [libgpiod] libgpiod v2.0 API discussion
From: Bartosz Golaszewski @ 2020-11-03 20:42 UTC
  To: Kent Gibson, Linus Walleij
  Cc: linux-gpio, Andy Shevchenko, Arnd Bergmann, Jack Winch,
	Helmut Grohne, Geert Uytterhoeven

Hello!

I'd like to start a discussion on the new API for libgpiod v2.0. Below
are my ideas for the different parts, starting from the top of
include/gpiod.h. I'm open to any other ideas.

Arnd: I'm Cc'ing you mostly for your knowledge on the year 2036
problem: please take a look at the part about reading line events and
whether we should use struct timespec in userspace or if it's
discouraged.

* Context-less API

I think we can entirely remove it. As I mentioned in another thread: I
have never received a single question about it - as opposed to all the
other interfaces - so I suppose it's just needless cruft.

* Chip properties and handling

I think we're mostly good on that front. V2 has not altered that part.

* Line watch

Some preliminary work was done by me for V1 watch but it never got
into mainline libgpiod. Since v2 of the line watch uAPI mostly follows
what I did for v1 (except for a different lineinfo struct) I think we
can mostly adapt the code from my development branch[1].

New tool: gpiowatch will be provided in tools/.

* Line properties

We're mostly fine here, but I'd still remove
gpiod_line_is_open_source/drain() in favor of an enum defining drive
flags and a function called gpiod_line_drive(). Also:
gpiod_line_active_state() should probably become
gpiod_line_is_active_low() and we can drop the corresponding enum.
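
Roughly what I have in mind - just a sketch, with all names up for
discussion:

#include <stdbool.h>

enum {
        GPIOD_LINE_DRIVE_PUSH_PULL = 0,
        GPIOD_LINE_DRIVE_OPEN_DRAIN,
        GPIOD_LINE_DRIVE_OPEN_SOURCE,
};

struct gpiod_line;

/* Would replace gpiod_line_is_open_drain() and
 * gpiod_line_is_open_source(); returns one of the values above. */
int gpiod_line_drive(struct gpiod_line *line);

/* Would replace gpiod_line_active_state() and its enum. */
bool gpiod_line_is_active_low(struct gpiod_line *line);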

* Line bulks

I've already submitted a patch that makes gpiod_line_bulk opaque across the
entire codebase.

* Line lookup by name

One thing that was omitted in the first version was the fact that GPIO
line names are not unique in the kernel. Functions that do name lookup
from a single chip should return bulk objects. Functions that can
look up several lines with different names should probably be removed
because it's difficult to design an elegant API that would take this
non-uniqueness into account. Returning arrays of bulks sounds like a
bad idea.

For global line lookup: the core C library should probably offer some
function to which a callback would be passed. This would then be
called every time a matching line was found and would take the
matching line and the owning chip as arguments.
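
For instance, something of this shape (the names are hypothetical,
just to illustrate the idea):

struct gpiod_chip;
struct gpiod_line;

/* Called once for every line whose name matches; returning a non-zero
 * value stops the search early. */
typedef int (*gpiod_line_found_cb)(struct gpiod_chip *chip,
                                   struct gpiod_line *line, void *data);

/* Hypothetical: scans all chips in the system and invokes the callback
 * for each matching line. Returns the number of matches or -1 on
 * error. */
int gpiod_line_find_each(const char *name, gpiod_line_found_cb func,
                         void *data);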

The gpiofind tool would need to be updated to display multiple lines
if more than one line matches.

Testing of this part will only be possible once we have the new
configfs-based gpio-mockup module on which I'm working currently. It
will provide a way to flexibly name separate dummy lines.

* Iterators

Kent suggests we ditch iterators entirely. I agree for line bulks -
seems like the overhead of allocating an iterator is not worth it
(except for bindings where we use the built-in language features of
C++ and Python).

Kent also pointed out that the chip iterator will fail if opening any
of the chips fails (which can happen for instance if the user only has
permissions to access certain chips, not all of them) and this needs
addressing.

I'd still prefer to keep some way of detecting chips in the system -
be it an iterator or a loop function taking a callback as argument.
Now that we have the is_gpiochip_cdev() function in the library, I
think we don't have to limit ourselves to only checking devices whose
names start with 'gpiochip' - we can check all files in /dev and
verify in /sys if they're GPIO chips or not. Then the iterator would
only iterate over chips we managed to access. Let me know what you
think.

I think that is_gpiochip_cdev() could be renamed to
gpiod_is_gpiochip_cdev() and exported so that users can verify on
their own if a given device file is a GPIO chip.
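
To illustrate how this could be used - assuming the exported helper
takes a path (the exact signature is open):

#include <dirent.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

bool gpiod_is_gpiochip_cdev(const char *path);

static void list_gpiochips(void)
{
        char path[PATH_MAX];
        struct dirent *dentry;
        DIR *dir;

        dir = opendir("/dev");
        if (!dir)
                return;

        while ((dentry = readdir(dir)) != NULL) {
                snprintf(path, sizeof(path), "/dev/%s", dentry->d_name);
                if (gpiod_is_gpiochip_cdev(path))
                        printf("%s is a GPIO chip\n", path);
        }

        closedir(dir);
}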

I think Kent suggested returning an array of paths to GPIO chips too.

I'm considering dropping the line iterator as we can simply call
gpiod_chip_get_all_lines() and loop over the returned bulk.

* Reading line events

Since we can now have a single file descriptor for multiple monitored
lines, reading events needs to be modified. The event structure needs
to reference the line object with which it's associated. Adding
additional fields for the seq numbers is a no-brainer of course.
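
Something along these lines is what I'm picturing (a sketch only -
field names and layout to be discussed):

#include <stdint.h>

struct gpiod_line;

struct gpiod_line_event {
        uint64_t timestamp_ns;   /* but see the timespec question below */
        int event_type;          /* rising or falling edge */
        struct gpiod_line *line; /* line this event was read from */
        uint32_t global_seqno;   /* seq number across the whole request */
        uint32_t line_seqno;     /* seq number for this line only */
        uint32_t padding[4];     /* reserved for future extensions */
};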

I'm wondering if we should still be using struct timespec for
timestamps. On one hand it's used everywhere in libc, but it's also
subject to ABI change due to the year 2036 problem. Maybe we should
switch to nanoseconds in a 64-bit integer just like what we get from
the kernel, but then user-space won't have standard tools to deal with
such timestamps.

Some other general design issues I'd like to discuss too:

* Opaque data types

Kent suggested we make all user-facing data types opaque. While I
agree for almost all structs - I think that line & watch events as
well as request config should remain visible to users (although with
padding this time - I missed this in v1.x).

For events the reason is simple: with opaque types we'd face an
overhead with malloc() and I think it's important that we can read
them as fast as possible.

For config: I believe an opaque request config structure will require
a lot of getter/setter boilerplate code for not much profit.
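
For illustration, this is the kind of plain struct I mean (the exact
fields are just examples):

#include <stdbool.h>
#include <stdint.h>

struct gpiod_request_config {
        const char *consumer; /* consumer label for the request */
        int direction;        /* input or output */
        int drive;            /* push-pull, open-drain, open-source */
        int bias;             /* pull-up, pull-down, disabled */
        bool active_low;
        uint32_t padding[8];  /* reserved - must be zeroed by the user */
};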

Let's discuss!

Best Regards,
Bartosz Golaszewski

[1] https://github.com/brgl/libgpiod/commits/topic/line-watch


* Re: [libgpiod] libgpiod v2.0 API discussion
From: Helmut Grohne @ 2020-11-04  7:47 UTC
  To: Bartosz Golaszewski
  Cc: Kent Gibson, Linus Walleij, linux-gpio, Andy Shevchenko,
	Arnd Bergmann, Jack Winch, Geert Uytterhoeven

On Tue, Nov 03, 2020 at 09:42:23PM +0100, Bartosz Golaszewski wrote:
> * Line lookup by name
> 
> One thing that was omitted in the first version was the fact that GPIO
> line names are not unique in the kernel. Functions that do name lookup
> from a single chip should return bulk objects. Functions that can
> lookup several lines with different names should probably be removed
> because it's difficult to design an elegant API that would take this
> non-uniqueness into account. Returning arrays of bulks sounds like a
> bad idea.

While there is no guarantee that line names are unique in general, they
tend to be in many practical cases. Notably, embedded system developers
(who seem like a big part of the intended audience) are in control of
line names and can ensure their uniqueness. It certainly makes sense to
offer an API that deals with non-unique line names. However, I think
that keeping a less flexible convenience helper for looking up a line
by a supposedly unique name would be good. In case the uniqueness is
violated, I'd expect an error.

There is ENOTUNIQ. It is rarely used, and while its description says it
is network-related, it is used by InfiniBand, NVMe, the VirtualBox
guest functionality and CIFS, beyond DECnet and 802.11.
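
In other words, something like this (the helper name is made up):

#include <errno.h>
#include <stdio.h>

struct gpiod_line;

/* Hypothetical: succeeds only if exactly one line in the system
 * carries the given name, and sets errno to ENOTUNIQ otherwise. */
struct gpiod_line *gpiod_line_get_unique(const char *name);

void example(void)
{
        struct gpiod_line *line = gpiod_line_get_unique("reset");

        if (!line && errno == ENOTUNIQ)
                fprintf(stderr, "line name 'reset' is ambiguous\n");
}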

> * Reading line events
> 
> Since we can now have a single file descriptor for multiple monitored
> lines, reading events needs to be modified. The event structure needs
> to reference the line object with which it's associated. Adding
> additional fields for the seq numbers is a no-brainer of course.
> 
> I'm wondering if we should still be using struct timespec for
> timestamps. On one hand it's used everywhere in libc, but it's also
> subject to ABI change due to the year 2036 problem. Maybe we should
> switch to nanoseconds in a 64-bit integer just like what we get from
> the kernel, but then user-space won't have standard tools to deal with
> such timestamps.

Aside: It's 2038, not 2036. The key is that you thought of this. Thanks.

The story for timespec is a little difficult for some time to come:
 * On 64-bit architectures and some 32-bit architectures (e.g. x32),
   using timespec doesn't pose a 2038 problem.
 * On the kernel side, using timespec has become hard. However, there
   is now a timespec64 which is not encumbered with 2038 problems on
   any architecture on the kernel ABI side.
 * On the userspace side, opting into timespec64 is still quite hard. I
   think the idea was to follow the process of widening off_t to 64
   bits. So userspace won't get a separate timespec64; instead it will
   be able to choose between the 2038-buggy timespec and a 64-bit
   timespec using a macro. glibc will do the translation (truncation)
   for many syscalls to retain backwards compatibility. Unless I am
   mistaken, the macro is supposed to be _TIME_BITS=64, but that
   doesn't seem to have progressed very far.

The real implication here is that unless all of your users enable this
extra macro, they'll be affected by the problem on most 32-bit
architectures when you use timespec, even if you use timespec64 on the
kernel side.

On the C++ side, ktime_t values can be stored in a suitable std::chrono
clock without the need for a conversion. The released libgpiod, however,
transforms the value to timespec and then reconstructs the original for
std::chrono usage. In doing so it introduces a 2038 problem entirely on
the userspace side, besides incurring integer arithmetic that can be
slow on certain ARM processors. Please consider offering a libgpiod C
API that allows using the kernel time representation without conversion.
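
Concretely, an accessor of roughly this shape would be enough (the
name is invented):

#include <stdint.h>

struct gpiod_line_event;

/* Returns the timestamp exactly as delivered by the kernel:
 * nanoseconds in an unsigned 64-bit integer, no conversion applied. */
uint64_t gpiod_line_event_get_timestamp_ns(const struct gpiod_line_event *ev);

The C++ bindings could then feed that value straight into
std::chrono::nanoseconds without ever round-tripping through timespec.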

> * Opaque data types
> 
> For events the reason is simple: with opaque types we'd face an
> overhead with malloc() and I think it's important that we can read
> them as fast as possible.

+1 to this part

Helmut


* Re: [libgpiod] libgpiod v2.0 API discussion
From: Kent Gibson @ 2020-11-05  3:09 UTC
  To: Bartosz Golaszewski
  Cc: Linus Walleij, linux-gpio, Andy Shevchenko, Arnd Bergmann,
	Jack Winch, Helmut Grohne, Geert Uytterhoeven

On Tue, Nov 03, 2020 at 09:42:23PM +0100, Bartosz Golaszewski wrote:
> Hello!
> 
> I'd like to start a discussion on the new API for libgpiod v2.0. Below
> are my ideas for the different parts, starting from the top of
> include/gpiod.h. I'm open to any other ideas.
> 
> Arnd: I'm Cc'ing you mostly for your knowledge on the year 2036
> problem: please take a look at the part about reading line events and
> whether we should use struct timespec in userspace or if it's
> discouraged.
> 
> * Context-less API
> 
> I think we can entirely remove it. As I mentioned in another thread: I
> have never received a single question about it - as opposed to all the
> other interfaces - so I suppose it's just needless cruft.
> 
> * Chip properties and handling
> 
> I think we're mostly good on that front. V2 has not altered that part.
> 
> * Line watch
> 
> Some preliminary work was done by me for V1 watch but it never got
> into mainline libgpiod. Since v2 of the line watch uAPI mostly follows
> what I did for v1 (except for a different lineinfo struct) I think we
> can mostly adapt the code from my development branch[1].
> 
> New tool: gpiowatch will be provided in tools/.
> 
> * Line properties
> 
> We're mostly fine here, but I'd still remove
> gpiod_line_is_open_source/drain() in favor of an enum defining drive
> flags and a function called gpiod_line_drive(). Also:
> gpiod_line_active_state() should probably become
> gpiod_line_is_active_low() and we can drop the corresponding enum.
> 
> * Line bulks
> 
> I've already submitted a patch that makes gpiod_line_bulk opaque across the
> entire codebase.
> 
> * Line lookup by name
> 
> One thing that was omitted in the first version was the fact that GPIO
> line names are not unique in the kernel. Functions that do name lookup
> from a single chip should return bulk objects. Functions that can
> look up several lines with different names should probably be removed
> because it's difficult to design an elegant API that would take this
> non-uniqueness into account. Returning arrays of bulks sounds like a
> bad idea.
> 
> For global line lookup: the core C library should probably offer some
> function to which a callback would be passed. This would then be
> called every time a matching line was found and would take the
> matching line and the owning chip as arguments.
> 
> The gpiofind tool would need to be updated to display multiple lines
> if more than one line matches.
> 
> Testing of this part will only be possible once we have the new
> configfs-based gpio-mockup module on which I'm working currently. It
> will provide a way to flexibly name separate dummy lines.
> 
> * Iterators
> 
> Kent suggests we ditch iterators entirely. I agree for line bulks -
> seems like the overhead of allocating an iterator is not worth it
> (except for bindings where we use the built-in language features of
> C++ and Python).
> 

No problem with adding iterators in the bindings to provide an API best
suited for that language, but I definitely question their existence in
the C API.

> Kent also pointed out that the chip iterator will fail if opening any
> of the chips fails (which can happen for instance if the user only has
> permissions to access certain chips, not all of them) and this needs
> addressing.
> 
> I'd still prefer to keep some way of detecting chips in the system -
> be it an iterator or a loop function taking a callback as argument.
> Now that we have the is_gpiochip_cdev() function in the library, I
> think we don't have to limit ourselves to only checking devices whose
> names start with 'gpiochip' - we can check all files in /dev and
> verify in /sys if they're GPIO chips or not. Then the iterator would
> only iterate over chips we managed to access. Let me know what you
> think.
> 
> I think that is_gpiochip_cdev() could be renamed to
> gpiod_is_gpiochip_cdev() and exported so that users can verify on
> their own if a given device file is a GPIO chip.
> 
> I think Kent suggested returning an array of paths to GPIO chips too.
> 
> I'm considering dropping the line iterator as we can simply call
> gpiod_chip_get_all_lines() and loop over the returned bulk.
> 
> * Reading line events
> 
> Since we can now have a single file descriptor for multiple monitored
> lines, reading events needs to be modified. The event structure needs
> to reference the line object with which it's associated. Adding
> additional fields for the seq numbers is a no-brainer of course.
> 

Why not just return the offset instead of a reference to the line?
Provide a helper to get the line with a given offset from a line
bulk/line request.
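
i.e. something along these lines (all names made up):

struct gpiod_line;
struct gpiod_line_request;
struct gpiod_line_event;

/* The event itself only carries the hardware offset... */
unsigned int gpiod_line_event_get_offset(const struct gpiod_line_event *ev);

/* ...and the request can map an offset back to a line object on
 * demand. */
struct gpiod_line *
gpiod_line_request_get_line(struct gpiod_line_request *req,
                            unsigned int offset);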

I'm also wondering if line bulk can be replaced with an analog of the
uAPI line_request.  As the line bulk now only applies to a set of lines
on one chip it has taken another step towards the line request, and
cases where line bulks are accepted or returned could be replaced with
either a line request or a set of offsets on a particular request.

Sorry to be throwing half-baked ideas out - I'd like to spend some
time looking at changing the libgpiod object model to more closely
parallel the uAPI, but haven't had a chance yet and probably won't
until next week, so half-baked is as good as it gets for now...

> I'm wondering if we should still be using struct timespec for
> timestamps. On one hand it's used everywhere in libc, but it's also
> subject to ABI change due to the year 2036 problem. Maybe we should
> switch to nanoseconds in a 64-bit integer just like what we get from
> the kernel, but then user-space won't have standard tools to deal with
> such timestamps.
> 

Return the 64bit timestamp unchanged from the uAPI and provide helpers
to convert to other types if such conversions are not already available
in the standard libraries.  Or perhaps alternate accessors on the event
that perform the appropriate conversion?

Note that the source clock for the event timestamp is not available in
the event itself, so context-free conversion is dangerous.
Hmmm, perhaps we should add a flag to the event as part of the realtime
clock patch??

> Some other general design issues I'd like to discuss too:
> 
> * Opaque data types
> 
> Kent suggested we make all user-facing data types opaque. While I
> agree for almost all structs - I think that line & watch events as
> well as request config should remain visible to users (although with
> padding this time - I missed this in v1.x).
> 
> For events the reason is simple: with opaque types we'd face an
> overhead with malloc() and I think it's important that we can read
> them as fast as possible.
> 

Opaque does not imply that libgpiod has to perform a malloc() - you can
provide helpers to alloc/init an event buffer that the event_wait() can
write the received events into, and accessors for the event buffer as
well as the event itself.
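
Sketched out, that could look like the following (all names
hypothetical):

#include <stddef.h>

struct gpiod_line_event;

/* Opaque buffer - allocated once by the user, reused across reads. */
struct gpiod_event_buffer;

struct gpiod_event_buffer *gpiod_event_buffer_new(size_t capacity);
void gpiod_event_buffer_free(struct gpiod_event_buffer *buf);

/* Accessor into the buffer - no per-event malloc() required. */
struct gpiod_line_event *
gpiod_event_buffer_get_event(struct gpiod_event_buffer *buf,
                             unsigned int index);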

Wrt "fast as possible", what are you optimising for - minimal latency or
maximal throughput?  For minimal latency you want to read just the next
event.  For maximal throughput you want to minimise the number of
ioctls/syscalls and so dump the kernel event fifo into a userspace
buffer.

The library should allow the user to select what they are optimising
for, so you may want to consider two event_read() flavours, one that gets
just the next event and another that fills a buffer.  The single form may
just call the buffer form with n=1, or it may be able to be slightly
optimised as it could avoid buffer range checks.
Both should accept the line request fd, rather than the line or line bulk
that they currently do.
And in both cases the user should provide the space for the event(s) to
be written to.
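
So roughly, reusing the event buffer sketched above (again, only a
sketch):

struct gpiod_line_event;
struct gpiod_event_buffer;

/* Minimal latency: read exactly one event into caller-provided
 * storage. */
int gpiod_read_edge_event(int request_fd, struct gpiod_line_event *ev);

/* Maximal throughput: drain as many events as the buffer holds from
 * the kernel fifo in a single read(). Returns the number of events. */
int gpiod_read_edge_events(int request_fd, struct gpiod_event_buffer *buf);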

Maybe other helper versions that wait on the fd, or malloc the space for
the events on the fly - but only if there is some user demand for them. 

Also consider that if the user wants the best possible performance, they
should write a kernel driver rather than using the GPIO uAPI.

> For config: I believe an opaque request config structure will require
> a lot of getter/setter boilerplate code for not much profit.
> 

The profit being a guarantee of ABI stability and backward compatibility,
while still being free to completely rework the implementation
internally?

I still prefer opaque, with some helpers to create common cases, similar
to the existing gpiod_line_request_XXX variants, but also with a full set
of mutators so they can be tweaked for special cases.

Wrt config per-line, I'm assuming the line bulk/request will provide a
function to allow the setting of config for a given line, as well as the
default for the request, and that that would be mapped to the uAPI
attributes as part of executing the line request.

Cheers,
Kent.

> Let's discuss!
> 
> Best Regards,
> Bartosz Golaszewski
> 
> [1] https://github.com/brgl/libgpiod/commits/topic/line-watch


* Re: [libgpiod] libgpiod v2.0 API discussion
From: Helmut Grohne @ 2020-11-05  7:32 UTC
  To: Kent Gibson
  Cc: Bartosz Golaszewski, Linus Walleij, linux-gpio, Andy Shevchenko,
	Arnd Bergmann, Jack Winch, Geert Uytterhoeven

On Thu, Nov 05, 2020 at 04:09:13AM +0100, Kent Gibson wrote:
> Wrt "fast as possible", what are you optimising for - minimal latency or
> maximal throughput?  For minimal latency you want to read just the next
> event.  For maximal throughput you want to minimise the number of
> ioctls/syscalls and so dump the kernel event fifo into a userspace
> buffer.

I believe that the target actually is both as you suggest later.

> The library should allow the user to select what they are optimising
> for, so you may want to consider two event_read() flavours, one that gets
> just the next event and another that fills a buffer.  The single form may
> just call the buffer form with n=1, or it may be able to be slightly
> optimised as it could avoid buffer range checks.
> Both should accept the line request fd, rather than the line or line bulk
> that they currently do.
> And in both cases the user should provide the space for the event(s) to
> be written to.

The C-API does allow this in the released version. It provides
gpiod_line_event_read with a user-supplied buffer for single-event reads
and it provides gpiod_line_event_read_multiple with a user-supplied
array for minimizing the syscall count. Bindings such as C++ also expose
these (even though the C++ one performs the allocation).
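
For reference, usage with the released API looks like this (error
handling omitted):

#include <gpiod.h>

void drain_events(struct gpiod_line *line)
{
        struct gpiod_line_event events[16];
        int num_read;

        /* Single event read into a caller-supplied struct. */
        gpiod_line_event_read(line, &events[0]);

        /* Up to 16 events with a single syscall. */
        num_read = gpiod_line_event_read_multiple(line, events, 16);
        (void)num_read;
}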

Of course keeping these properties would be good for the reasons you
detailed.

> > For config: I believe an opaque request config structure will require
> > a lot of getter/setter boilerplate code for not much profit.
> > 
> 
> The profit being a guarantee of ABI stability and backward compatibility,
> while still being free to completely rework the implementation
> internally?

I think you can gain this profit without opaqueness. A user can only
use those aspects of a config that existed at the time their code was
developed. For that reason, there can be multiple versions of a
configuration type and the consumer can use whatever type they happened
to see when the code was compiled. The current configuration type can be
typedef'd to a standard name and only needs to be API-compatible with
previous config types (i.e. only add new fields with default
initializers). In C++, one can leverage function overloads to implement
that. In C, you can use symbol versioning or do poor man's symbol
versioning by #define-ing the consuming function names to versioned ones.
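
A rough illustration of the C variant (function and type names
invented for the example):

/* Each config revision only appends fields. */
struct gpiod_line_config_v1 {
        int direction;
        int drive;
};

struct gpiod_line_config_v2 {
        int direction;
        int drive;
        int bias; /* new in v2 */
};

/* Consumers always use the unversioned name... */
typedef struct gpiod_line_config_v2 gpiod_line_config;

struct gpiod_line;

int gpiod_line_configure_v1(struct gpiod_line *line,
                            const struct gpiod_line_config_v1 *config);
int gpiod_line_configure_v2(struct gpiod_line *line,
                            const struct gpiod_line_config_v2 *config);

/* ...and the header routes calls to the entry point matching the
 * config type the consumer was compiled against. */
#define gpiod_line_configure gpiod_line_configure_v2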

The only downside of this seems to be that libgpiod would have to keep
the old types and overloads for all eternity.

When doing this, it is important that consumers do not pass around
pointers to a config to code compiled with a different libgpiod as these
types may be different and have a different storage layout. For the
config, this seems fairly unlikely.

Helmut


* Re: [libgpiod] libgpiod v2.0 API discussion
From: Kent Gibson @ 2020-11-05 13:50 UTC
  To: Helmut Grohne
  Cc: Bartosz Golaszewski, Linus Walleij, linux-gpio, Andy Shevchenko,
	Arnd Bergmann, Jack Winch, Geert Uytterhoeven

On Thu, Nov 05, 2020 at 08:32:48AM +0100, Helmut Grohne wrote:
> On Thu, Nov 05, 2020 at 04:09:13AM +0100, Kent Gibson wrote:
> > Wrt "fast as possible", what are you optimising for - minimal latency or
> > maximal throughput?  For minimal latency you want to read just the next
> > event.  For maximal throughput you want to minimise the number of
> > ioctls/syscalls and so dump the kernel event fifo into a userspace
> > buffer.
> 
> I believe that the target actually is both as you suggest later.
> 
> > The library should allow the user to select what they are optimising
> > for, so you may want to consider two event_read() flavours, one that gets
> > just the next event and another that fills a buffer.  The single form may
> > just call the buffer form with n=1, or it may be able to be slightly
> > optimised as it could avoid buffer range checks.
> > Both should accept the line request fd, rather than the line or line bulk
> > that they currently do.
> > And in both cases the user should provide the space for the event(s) to
> > be written to.
> 
> The C-API does allow this in the released version. It provides
> gpiod_line_event_read with a user-supplied buffer for single-event reads
> and it provides gpiod_line_event_read_multiple with a user-supplied
> array for minimizing the syscall count. Bindings such as C++ also expose
> these (even though the C++ one performs the allocation).
> 
> Of course keeping these properties would be good for the reasons you
> detailed.
> 

So it does - in fact it has a whole family of variants of event_wait()
and event_read(). What I may have worked out here by going off on a
tangent is which one or two of those form the core API.  The rest are
just helper wrappers around those - and can be considered cruft unless
there is a clear use case for them.

As I think I've mentioned elsewhere, my preferred approach for
rethinking the libgpiod API for v2 is to shrink the API to the core
functions, rethink that set wrt how it maps onto and exposes the
features available in uAPI v2, and then expand that revised set with
helpers as appropriate, and then rethink the bindings.

The full libgpiod API as it stands is a bit of a beast.

> > > For config: I believe an opaque request config structure will require
> > > a lot of getter/setter boilerplate code for not much profit.
> > > 
> > 
> > The profit being a guarantee of ABI stability and backward compatibility,
> > while still being free to completely rework the implementation
> > internally?
> 
> I think you can gain this profit without opaqueness. A user can only
> use those aspects of a config that existed at the time their code was
> developed. For that reason, there can be multiple versions of a
> configuration type and the consumer can use whatever type they happened
> to see when the code was compiled. The current configuration type can be
> typedef'd to a standard name and only needs to be API-compatible with
> previous config types (i.e. only add new fields with default
> initializers). In C++, one can leverage function overloads to implement
> that. In C, you can use symbol versioning or do poor man's symbol
> versioning by #define-ing the consuming function names to versioned ones.
> 
> The only downside of this seems to be that libgpiod would have to keep
> the old types and overloads for all eternity.
> 

You underestimate the power of the downside...

How do you go about testing all that?
And propagating those multiple versions through the bindings?
And testing those?

I think I'd prefer the boilerplate.

Cheers,
Kent.

> When doing this, it is important that consumers do not pass around
> pointers to a config to code compiled with a different libgpiod as these
> types may be different and have a different storage layout. For the
> config, this seems fairly unlikely.
> 
> Helmut


* Re: [libgpiod] libgpiod v2.0 API discussion
From: Bartosz Golaszewski @ 2020-11-05 14:29 UTC
  To: Kent Gibson
  Cc: Helmut Grohne, Bartosz Golaszewski, Linus Walleij, linux-gpio,
	Andy Shevchenko, Arnd Bergmann, Jack Winch, Geert Uytterhoeven

On Thu, Nov 5, 2020 at 2:50 PM Kent Gibson <warthog618@gmail.com> wrote:
>

[snip]

>
> As I think I've mentioned elsewhere, my preferred approach for
> rethinking the libgpiod API for v2 is to shrink the API to the core
> functions, rethink that set wrt how it maps onto and exposes the
> features available in uAPI v2, and then expand that revised set with
> helpers as appropriate, and then rethink the bindings.
>

What I understood from your previous email is that you'd prefer to go
in the direction of exposing more of the underlying kernel API to users
- or maybe even mapping the kernel interface more strictly in the
library. I don't think we should go this way. I want to keep the
<chip, line, set of lines> model which I think is convenient and
represents the underlying hierarchy well. I also want to shield the
library users from ioctls, file descriptors (unless it's to pass them
to poll()) and even HW offsets of GPIO lines as much as possible.

This is why I'd prefer to return pointers to line objects rather than
HW offsets in event structures. Same with line requests: I'm not sure
what the advantage of having a separate request structure would be in
the library (except for representing the file descriptor received from
the kernel upon request - about which the user doesn't need to know
unless using external event loops).

I agree that there's a lot of cruft to remove but I wouldn't be so
quick to remove helpers for simplified use-cases that we have now. For
instance: if we decide to go with an opaque config structure which
needs to be allocated dynamically, then users would most likely
appreciate having helper wrappers allowing them to request lines in a
single call rather than having to deal with resource management.

[snip]

> > > > For config: I believe an opaque request config structure will require
> > > > a lot of getter/setter boilerplate code for not much profit.
> > > >
> > >
> > > The profit being a guarantee of ABI stability and backward compatibility,
> > > while still being free to completely rework the implementation
> > > internally?
> >
> > I think you can gain this profit without opaqueness. A user can only
> > use those aspects of a config that existed at the time their code was
> > developed. For that reason, there can be multiple versions of a
> > configuration type and the consumer can use whatever type they happened
> > to see when the code was compiled. The current configuration type can be
> > typedef'd to a standard name and only needs to be API-compatible with
> > previous config types (i.e. only add new fields with default
> > initializers). In C++, one can leverage function overloads to implement
> > that. In C, you can use symbol versioning or do poor man's symbol
> > versioning by #define-ing the consuming function names to versioned ones.
> >
> > The only downside of this seems to be that libgpiod would have to keep
> > the old types and overloads for all eternity.
> >
>
> You underestimate the power of the downside...
>
> How do you go about testing all that?
> And propagating those multiple versions through the bindings?
> And testing those?
>
> I think I'd prefer the boilerplate.
>

The scope of the library is well defined and unlikely to be expanded a
lot. To me it seems that user-facing structures for events and request
config should be fine (as long as we leave some padding for future
extensions). I don't see why we would even need multiple types of
configuration and all the nasty casting Helmut mentions - frankly, I
don't even understand how it would work. In the end it's not much
different from the situation with data types used in ioctls - just
have a structure with some space for expansion.

Elsewhere Kent mentioned users providing buffers for opaque event
structures. I think this is the worst of both worlds. Users still need
to know the size of the event structure and its internal
implementation can no longer change arbitrarily because this exported
size would change. Users would never be able to use static buffers
when the size can change from one version of the library to another.
Not to mention the unnecessarily complicated code this approach would
require.

Best Regards,
Bartosz Golaszewski


* Re: [libgpiod] libgpiod v2.0 API discussion
From: Bartosz Golaszewski @ 2020-11-10 15:42 UTC
  To: Kent Gibson
  Cc: Bartosz Golaszewski, Linus Walleij, linux-gpio, Andy Shevchenko,
	Arnd Bergmann, Jack Winch, Helmut Grohne, Geert Uytterhoeven

On Thu, Nov 5, 2020 at 4:09 AM Kent Gibson <warthog618@gmail.com> wrote:
>

[snip]

>
> I'm also wondering if line bulk can be replaced with an analog of the
> uAPI line_request.  As the line bulk now only applies to a set of lines
> on one chip it has taken another step towards the line request, and
> cases where line bulks are accepted or returned could be replaced with
> either a line request or a set of offsets on a particular request.

I was initially reluctant to take this into account, but after thinking
about it over the weekend I figured it may be a good idea.

What I have in mind is a new object: struct gpiod_line_request. This
would be returned by any request function and be internally associated
with the file descriptor returned by the request ioctl(). It would
also reference one or more lines for which the request was completed.
Unlike a bulk it would not allow users to modify the lines it
references. All value/event operations would take this as an argument.
Maybe we could even drop the whole gpiod_line_bulk struct eventually.
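
In header terms the idea would look roughly like this (a sketch,
nothing final):

struct gpiod_chip;
struct gpiod_request_config;

/* Opaque: wraps the fd returned by the request ioctl() and the set of
 * requested lines. */
struct gpiod_line_request;

struct gpiod_line_request *
gpiod_chip_request_lines(struct gpiod_chip *chip,
                         const struct gpiod_request_config *config);

/* All value/event operations take the request object. */
int gpiod_line_request_get_values(struct gpiod_line_request *req,
                                  int *values);
int gpiod_line_request_set_values(struct gpiod_line_request *req,
                                  const int *values);

void gpiod_line_request_release(struct gpiod_line_request *req);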

I think it's a good approach going forward. Thanks for the suggestion, Kent!

Bartosz


Thread overview: 7 messages
2020-11-03 20:42 [libgpiod] libgpiod v2.0 API discussion Bartosz Golaszewski
2020-11-04  7:47 ` Helmut Grohne
2020-11-05  3:09 ` Kent Gibson
2020-11-05  7:32   ` Helmut Grohne
2020-11-05 13:50     ` Kent Gibson
2020-11-05 14:29       ` Bartosz Golaszewski
2020-11-10 15:42   ` Bartosz Golaszewski
