* Touch processing on host CPU
From: Nick Dyer @ 2014-10-17 10:42 UTC
  To: Dmitry Torokhov, Greg KH, Jonathan Cameron; +Cc: linux-input, linux-kernel

Hi-

I'm trying to find out which subsystem maintainer I should be talking to -
apologies if I'm addressing the wrong people.

There is a model for doing touch processing where the touch controller
becomes a much simpler device which sends out raw acquisitions (over SPI 
at up to 1Mbps + protocol overheads). All touch processing is then done in
user space by the host CPU. An example of this is NVIDIA DirectTouch - see:
http://blogs.nvidia.com/blog/2012/02/24/industry-adopts-nvidia-directtouch/

In the spirit of "upstream first", I'm trying to figure out how to get a
driver accepted. Obviously it's not an input device in the normal sense. Is
it acceptable just to send the raw touch data out via a char device? Is
there another subsystem which is a good match (eg IIO)? Does the protocol
(there is ancillary/control data as well) need to be documented?
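
For illustration, a minimal user-space consumer of such a raw stream
might look roughly like this - assuming a hypothetical /dev/raw_touch0
char device and a made-up per-frame header, just to show the shape of
the interface:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical per-frame header; the real protocol would define this. */
struct raw_touch_hdr {
        uint16_t rows;
        uint16_t cols;
        uint32_t seq;
} __attribute__((packed));

int main(void)
{
        int fd = open("/dev/raw_touch0", O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        for (;;) {
                struct raw_touch_hdr hdr;
                uint16_t *frame;
                size_t n;

                if (read(fd, &hdr, sizeof(hdr)) != sizeof(hdr))
                        break;

                n = (size_t)hdr.rows * hdr.cols;
                frame = malloc(n * sizeof(*frame));
                if (!frame)
                        break;

                if (read(fd, frame, n * sizeof(*frame)) !=
                    (ssize_t)(n * sizeof(*frame))) {
                        free(frame);
                        break;
                }

                /* hand the frame to the touch-processing library here */
                free(frame);
        }

        close(fd);
        return 0;
}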

cheers

-- 
Nick Dyer
Senior Software Engineer, ITDev
Fully Managed Technology Design Services
+44 (0)23 80988855  -  http://www.itdev.co.uk



* Re: Touch processing on host CPU
From: Jonathan Cameron @ 2014-10-17 16:33 UTC
  To: Nick Dyer, Dmitry Torokhov, Greg KH; +Cc: linux-input, linux-kernel



On October 17, 2014 11:42:10 AM GMT+01:00, Nick Dyer <nick.dyer@itdev.co.uk> wrote:
>Hi-
>
>I'm trying to find out which subsystem maintainer I should be talking to -
>apologies if I'm addressing the wrong people.
>
>There is a model for doing touch processing where the touch controller
>becomes a much simpler device which sends out raw acquisitions (over SPI
>at up to 1Mbps + protocol overheads). All touch processing is then done in
>user space by the host CPU. An example of this is NVIDIA DirectTouch - see:
>http://blogs.nvidia.com/blog/2012/02/24/industry-adopts-nvidia-directtouch/
>
>In the spirit of "upstream first", I'm trying to figure out how to get a
>driver accepted. Obviously it's not an input device in the normal sense. Is
>it acceptable just to send the raw touch data out via a char device? Is
>there another subsystem which is a good match (eg IIO)?

Possibly... 
>Does the protocol (there is ancillary/control data as well) need to be
>documented?

Do you know of a suitable ADC frontend? Preferably with docs. The
interesting bit is the data format and these ancillary parts.
>
>cheers

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


* Re: Touch processing on host CPU
From: Dmitry Torokhov @ 2014-10-17 17:17 UTC
  To: Nick Dyer; +Cc: Greg KH, Jonathan Cameron, linux-input, linux-kernel

Hi Nick,

On Fri, Oct 17, 2014 at 11:42:10AM +0100, Nick Dyer wrote:
> Hi-
> 
> I'm trying to find out which subsystem maintainer I should be talking to -
> apologies if I'm addressing the wrong people.
> 
> There is a model for doing touch processing where the touch controller
> becomes a much simpler device which sends out raw acquisitions (over SPI 
> at up to 1Mbps + protocol overheads). All touch processing is then done in
> user space by the host CPU. An example of this is NVIDIA DirectTouch - see:
> http://blogs.nvidia.com/blog/2012/02/24/industry-adopts-nvidia-directtouch/
> 
> In the spirit of "upstream first", I'm trying to figure out how to get a
> driver accepted. Obviously it's not an input device in the normal sense. Is
> it acceptable just to send the raw touch data out via a char device? Is
> there another subsystem which is a good match (eg IIO)? Does the protocol
> (there is ancillary/control data as well) need to be documented?

I'd really think *long* and *hard* about this. Even if you have the
touch processing open source you have two options: route it back into the
kernel through uinput, thus adding latency (which might be OK, need to
measure and decide), or go back about 10 years to when we had
device-specific drivers in XFree86 and re-create them again, and also do
the same for Wayland, Chrome, Android, etc.
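
For concreteness, the uinput route would look roughly like the sketch
below - the device name, axis ranges and the single hard-coded contact
are made up, and error handling is omitted:

#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/uinput.h>

static void emit(int fd, int type, int code, int val)
{
        struct input_event ie;

        memset(&ie, 0, sizeof(ie));
        ie.type = type;
        ie.code = code;
        ie.value = val;
        write(fd, &ie, sizeof(ie));
}

int main(void)
{
        struct uinput_user_dev udev;
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

        if (fd < 0)
                return 1;

        ioctl(fd, UI_SET_EVBIT, EV_ABS);
        ioctl(fd, UI_SET_ABSBIT, ABS_MT_SLOT);
        ioctl(fd, UI_SET_ABSBIT, ABS_MT_TRACKING_ID);
        ioctl(fd, UI_SET_ABSBIT, ABS_MT_POSITION_X);
        ioctl(fd, UI_SET_ABSBIT, ABS_MT_POSITION_Y);

        memset(&udev, 0, sizeof(udev));
        strcpy(udev.name, "userspace-touch");   /* placeholder name */
        udev.id.bustype = BUS_VIRTUAL;
        udev.absmax[ABS_MT_SLOT] = 9;
        udev.absmax[ABS_MT_TRACKING_ID] = 65535;
        udev.absmax[ABS_MT_POSITION_X] = 4095;
        udev.absmax[ABS_MT_POSITION_Y] = 4095;
        write(fd, &udev, sizeof(udev));
        ioctl(fd, UI_DEV_CREATE);

        /* report one contact at (1000, 2000) in slot 0, then release it */
        emit(fd, EV_ABS, ABS_MT_SLOT, 0);
        emit(fd, EV_ABS, ABS_MT_TRACKING_ID, 1);
        emit(fd, EV_ABS, ABS_MT_POSITION_X, 1000);
        emit(fd, EV_ABS, ABS_MT_POSITION_Y, 2000);
        emit(fd, EV_SYN, SYN_REPORT, 0);

        emit(fd, EV_ABS, ABS_MT_TRACKING_ID, -1);
        emit(fd, EV_SYN, SYN_REPORT, 0);

        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
}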

If you have touch processing in a binary blob, you'll also be going
back to the era of "Works with Ubuntu 12.04 on x86_32!" (and nothing else),
or "Android 5.1.2 on Tegra Blah (build 78912KT)" (and nothing else).

Thanks.

-- 
Dmitry


* Re: Touch processing on host CPU
From: Pavel Machek @ 2014-10-21 11:01 UTC
  To: Nick Dyer
  Cc: Dmitry Torokhov, Greg KH, Jonathan Cameron, linux-input, linux-kernel

Hi!


> I'm trying to find out which subsystem maintainer I should be talking to -
> apologies if I'm addressing the wrong people.
> 
> There is a model for doing touch processing where the touch controller
> becomes a much simpler device which sends out raw acquisitions (over SPI 
> at up to 1Mbps + protocol overheads). All touch processing is then done in
> user space by the host CPU. An example of this is NVIDIA DirectTouch - see:
> http://blogs.nvidia.com/blog/2012/02/24/industry-adopts-nvidia-directtouch/

Would it be possible to do processing in kernel space?

> In the spirit of "upstream first", I'm trying to figure out how to get a
> driver accepted. Obviously it's not an input device in the normal sense. Is
> it acceptable just to send the raw touch data out via a char device?

A char device would be the best option if not.

> Is there another subsystem which is a good match (eg IIO)? Does the protocol
> (there is ancillary/control data as well) need to be documented?

It really depends. If you have a driver for a serial port, you don't need
to describe what goes over the serial port. But documentation would be
nice.

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: Touch processing on host CPU
From: One Thousand Gnomes @ 2014-10-21 12:22 UTC
  To: Dmitry Torokhov
  Cc: Nick Dyer, Greg KH, Jonathan Cameron, linux-input, linux-kernel

> If you have touch processing in a binary blob, you'll also be going
> back to the era of "Works with Ubuntu 12.04 on x86_32!" (and nothing else),
> or "Android 5.1.2 on Tegra Blah (build 78912KT)" (and nothing else).

As well as not going upstream because there is no way anyone else can
test changes to the code, or support it. Plus of course there are those
awkward questions around derivative work boundaries that it is best the
base kernel keeps well clear of.

If the data format is documented to the point someone can go write their
own touch processor for the bitstream then that really deals with it
anyway. Given the number of these things starting to pop up it would
probably be good if someone did produce an open source processing engine
for touch sensor streams as it shouldn't take long before it's better than
all the non-free ones 8).

Given how latency sensitive touch is and the continual data stream I
would be inclined to think that the basic processing might be better in
kernel and then as an input device - providing it can be simple enough to
want to put kernel side.

Otherwise I'd say your bitstream is probably something like ADC data and
belongs in IIO (which should also help people to have one processing
agent for multiple designs of touch, SPI controllers etc)

Alan


* Re: Touch processing on host CPU
From: Nick Dyer @ 2014-10-21 16:47 UTC
  To: One Thousand Gnomes, Dmitry Torokhov, Jonathan Cameron
  Cc: Greg KH, linux-input, linux-kernel

On 21/10/14 13:22, One Thousand Gnomes wrote:
>> If you have touch processing in a binary blob, you'll also be going
>> back to the era of "Works with Ubuntu 12.04 on x86_32!" (and nothing else),
>> or "Android 5.1.2 on Tegra Blah (build 78912KT)" (and nothing else).
> 
> As well as not going upstream because there is no way anyone else can
> test changes to the code, or support it. Plus of course there are those
> awkward questions around derivative work boundaries that it is best the
> base kernel keeps well clear of.
> 
> If the data format is documented to the point someone can go write their
> own touch processor for the bitstream then that really deals with it
> anyway. Given the number of these things starting to pop up it would
> probably be good if someone did produce an open source processing engine
> for touch sensor streams as it shouldn't take long before it's better than
> all the non-free ones 8).

Thank you for this input, I will feed it back.

> Given how latency sensitive touch is and the continual data stream I
> would be inclined to think that the basic processing might be better in
> kernel and then as an input device - providing it can be simple enough to
> want to put kernel side.

I would think that a touch processing algorithm (including aspects such as
noise and false touch suppression, etc) would be too complex to live
in-kernel. Getting decent performance on a particular device requires a lot
of tuning/customisation.

> Otherwise I'd say your bitstream is probably something like ADC data and
> belongs in IIO (which should also help people to have one processing
> agent for multiple designs of touch, SPI controllers etc)

This sounds promising. The only sticking point I can see is that a touch
frontend has many more channels (possibly thousands), which would seem to
impose a lot of overhead when put into the IIO framework. I will certainly
take a closer look at it.
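
To illustrate what I mean by overhead - as far as I understand it, an
IIO driver declares a struct iio_chan_spec per channel, so a (made-up)
40x30 mutual-capacitance matrix would already need 1200 channel
entries, something like:

#include <linux/iio/iio.h>

#define TS_ROWS 40
#define TS_COLS 30

static struct iio_chan_spec ts_channels[TS_ROWS * TS_COLS];

static void ts_init_channels(void)
{
        int i;

        for (i = 0; i < TS_ROWS * TS_COLS; i++) {
                ts_channels[i].type = IIO_CAPACITANCE;
                ts_channels[i].indexed = 1;
                ts_channels[i].channel = i;
                ts_channels[i].scan_index = i;
                ts_channels[i].scan_type.sign = 'u';
                ts_channels[i].scan_type.realbits = 16;
                ts_channels[i].scan_type.storagebits = 16;
                ts_channels[i].scan_type.endianness = IIO_CPU;
        }
}

(Please correct me if there is a lighter-weight way to expose a dense
matrix of identical channels through the IIO buffer interface.)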


* Re: Touch processing on host CPU
From: One Thousand Gnomes @ 2014-10-22 13:20 UTC
  To: Nick Dyer
  Cc: Dmitry Torokhov, Jonathan Cameron, Greg KH, linux-input, linux-kernel

> This sounds promising. The only sticking point I can see is that a touch
> frontend has many more channels (possibly thousands), which would seem to
> impose a lot of overhead when put into the IIO framework. I will certainly
> take a closer look at it.

If that is the case then it may not be the right match - but it might
also be a good argument for fixing the IIO layer so that it isn't?

Alan


* Re: Touch processing on host CPU
From: Andrew de los Reyes @ 2014-10-22 21:15 UTC
  To: Dmitry Torokhov
  Cc: Nick Dyer, Greg KH, Jonathan Cameron, linux-input, linux-kernel

On Fri, Oct 17, 2014 at 10:17 AM, Dmitry Torokhov
<dmitry.torokhov@gmail.com> wrote:
> Hi Nick,
>
> On Fri, Oct 17, 2014 at 11:42:10AM +0100, Nick Dyer wrote:
>> Hi-
>>
>> I'm trying to find out which subsystem maintainer I should be talking to -
>> apologies if I'm addressing the wrong people.
>>
>> There is a model for doing touch processing where the touch controller
>> becomes a much simpler device which sends out raw acquisitions (over SPI
>> at up to 1Mbps + protocol overheads). All touch processing is then done in
>> user space by the host CPU. An example of this is NVIDIA DirectTouch - see:
>> http://blogs.nvidia.com/blog/2012/02/24/industry-adopts-nvidia-directtouch/
>>
>> In the spirit of "upstream first", I'm trying to figure out how to get a
>> driver accepted. Obviously it's not an input device in the normal sense. Is
>> it acceptable just to send the raw touch data out via a char device? Is
>> there another subsystem which is a good match (eg IIO)? Does the protocol
>> (there is ancillary/control data as well) need to be documented?
>
> I'd really think *long* and *hard* about this. Even if you have the
> touch processing open source you have two options: route it back into the
> kernel through uinput, thus adding latency (which might be OK, need to
> measure and decide), or go back about 10 years to when we had
> device-specific drivers in XFree86 and re-create them again, and also do
> the same for Wayland, Chrome, Android, etc.
>
> If you have touch processing in a binary blob, you'll also be going
> back to the era of "Works with Ubuntu 12.04 on x86_32!" (and nothing else),
> or "Android 5.1.2 on Tegra Blah (build 78912KT)" (and nothing else).

I think we have some interest on the Chrome OS team. We've often had
issues on touch devices with centroiding problems like split/merge,
and have thought it might be nice to have lower level access to be
able to actually solve these problems, rather than just complain to
the touch vendor. Historically, however, raw touch heatmaps have not
been available, making this idea unfeasible. Maybe this is starting to
change with the push from Nvidia!

I would agree with Dmitry that we would want a consistent unified
interface that could work with different touch vendors. I am assuming
that Nick's model is roughly 60-120 frames/sec, where each frame is
NxMx16bits. Pixels would generally be ~4mm on a side for touch sensors
today, but they may get significantly smaller if stylus becomes
popular. Nick, is that roughly what you have in mind? Also, touch
sensors generally have a baseline image that is subtracted from each
recorded raw image to form the delta image (ie, raw - baseline =
delta). Nick, do you envision sending raw or delta up to userspace? I
would assume delta, b/c I think the touch controller will need to
compute delta internally to see if there's a touch. It would be quite
wasteful to invoke the kernel/userspace on an image with no touches.
That said, there may be some situations (ie, factory validation) where
getting raw images is preferred.
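
Concretely, I'm picturing frames roughly like this (names and sizes are
made up, just to pin down the raw-vs-delta distinction):

#include <stdint.h>
#include <stddef.h>

struct heatmap {
        uint16_t rows;          /* N */
        uint16_t cols;          /* M */
        int16_t  val[];         /* rows * cols nodes, 16 bits each */
};

/* delta = raw - baseline, element by element (delta can go negative) */
static void compute_delta(const struct heatmap *raw,
                          const struct heatmap *baseline,
                          struct heatmap *delta)
{
        size_t i, n = (size_t)raw->rows * raw->cols;

        delta->rows = raw->rows;
        delta->cols = raw->cols;
        for (i = 0; i < n; i++)
                delta->val[i] = raw->val[i] - baseline->val[i];
}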

Also, what about self-cap scan? I know some controllers will do
self-cap when there's just one finger. I'm guessing the data for such
a frame would be (N+M)x16bits.
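(For a made-up 40x30 sensor that would be 40*30*2 = 2400 bytes for a
mutual-cap frame versus (40+30)*2 = 140 bytes for a self-cap frame, so
the framing format would probably need to distinguish the two.)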

In order to support X, Wayland, Android, etc, I would assume that
parsed frames would be injected back into the kernel in a format
similar (identical?) to MT-B. As far as the availability of an
open-source user-space driver to convert heat-map into MT-B, maybe I'm
overly optimistic, but I would guess we could start with something
simple like center-of-mass, and the community will help make it more
robust.
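
As a strawman, the center-of-mass starting point could be as simple as
this (single contact, no segmentation or tracking, threshold made up):

#include <stdint.h>
#include <stddef.h>

struct contact {
        double x;       /* in node units; scale to screen coords later */
        double y;
};

/* weighted centroid over one delta frame; returns 1 if a touch was found */
static int center_of_mass(const int16_t *delta, int rows, int cols,
                          int threshold, struct contact *out)
{
        long long sum = 0, sx = 0, sy = 0;
        int r, c;

        for (r = 0; r < rows; r++) {
                for (c = 0; c < cols; c++) {
                        int v = delta[r * cols + c];

                        if (v < threshold)      /* ignore the noise floor */
                                continue;
                        sum += v;
                        sx += (long long)v * c;
                        sy += (long long)v * r;
                }
        }

        if (sum == 0)
                return 0;

        out->x = (double)sx / sum;
        out->y = (double)sy / sum;
        return 1;
}

Splitting merged contacts and tracking IDs across frames would then
layer on top of that, which is exactly where community iteration could
help.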

Sorry I have more questions than suggestions. Hopefully Nick can shed
more light on what type of interface he would like.

-andrew



Thread overview: 8 messages
2014-10-17 10:42 Touch processing on host CPU Nick Dyer
2014-10-17 16:33 ` Jonathan Cameron
2014-10-17 17:17 ` Dmitry Torokhov
2014-10-21 12:22   ` One Thousand Gnomes
2014-10-21 16:47     ` Nick Dyer
2014-10-22 13:20       ` One Thousand Gnomes
2014-10-22 21:15   ` Andrew de los Reyes
2014-10-21 11:01 ` Pavel Machek
