From: Matthias Klumpp <matthias@tenstral.net>
To: Jonathan Cameron <jic23@kernel.org>
Cc: linux-iio@vger.kernel.org, Lars-Peter Clausen <lars@metafoo.de>
Subject: Re: Using IIO for high-speed DAQ
Date: Sat, 14 Jan 2017 18:25:52 +0100
Message-ID: <CAKNHny87kWbYW4jqkU2Jfsv2i3mk5bG1dewD548yQ8wV1uPTnA@mail.gmail.com>
In-Reply-To: <d1c324a3-17c9-ed66-3946-ccf9a0bbce5d@kernel.org>

2017-01-14 17:12 GMT+01:00 Jonathan Cameron <jic23@kernel.org>:
> On 13/01/17 14:13, Matthias Klumpp wrote:
>> Hello!
>> I would like to use IIO for high-speed data acquisition. In order to
>> do that I implemented a new IIO driver for the ADC chip I am using
>> (MAX1133, maximum sampling frequency 200ksps).
>>
>> The initial approach with triggered buffers was way too slow,
>> achieving only a maximum sampling frequency of 4ksps.
> Any idea where the bottleneck was specifically?  I'm guessing it
> might have been on the userspace side, but not certain.

The userspace side shouldn't be the problem; it is very fast (it was
even written to meet realtime deadlines).
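
For context, the kernel side followed the standard IIO
triggered-buffer pattern, roughly like this (a simplified sketch;
struct max1133 and max1133_read_sample() are placeholders for the
real driver internals, not the exact code):

#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>
#include <linux/iio/trigger_consumer.h>
#include <linux/spi/spi.h>

struct max1133 {
	struct spi_device *spi;
};

static int max1133_read_sample(struct spi_device *spi, u16 *val);

static irqreturn_t max1133_trigger_handler(int irq, void *p)
{
	struct iio_poll_func *pf = p;
	struct iio_dev *indio_dev = pf->indio_dev;
	struct max1133 *st = iio_priv(indio_dev);
	/* one 16-bit sample, padded so the s64 timestamp lands at byte 8 */
	u16 buf[8] __aligned(8) = { };

	if (!max1133_read_sample(st->spi, &buf[0]))
		iio_push_to_buffers_with_timestamp(indio_dev, buf,
						   iio_get_time_ns(indio_dev));

	iio_trigger_notify_done(indio_dev->trig);
	return IRQ_HANDLED;
}

Every sample here pays a trigger wakeup plus a full SPI message round
trip, which is consistent with the ~4ksps ceiling I saw.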

> [...]
>
> When you say use DMA, this part is an SPI device - so are you hand
> rolling DMA transfers from the spi controller?
>
> We've discussed in the past (long time ago now!) how to use spi
> controllers that support streaming modes but nothing has really come
> of it yet. What SPI controller are you using?
>
> Last time I was trying to do similar things (was a while ago now)
> I fairly quickly hit the limitation that the round-trip time on SPI
> transfers, even when using DMA, was a lot longer than the theoretical
> minimum. Any idea where you will be limited by that?

This is the problem: for each data acquisition I make one spi_sync
call at a time, so I am really just hand-rolling SPI transfers from
the controller.
I think the speed gain I saw came purely from getting rid of the
timer trigger, not so much from the DMA...
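
Concretely, each acquisition does roughly this (a simplified sketch;
the MAX1133_* control bits are placeholder names, not the real
datasheet fields):

#include <linux/spi/spi.h>

static int max1133_read_sample(struct spi_device *spi, u16 *val)
{
	/* real code should use DMA-safe buffers from the driver
	 * state, not the stack */
	u8 tx = MAX1133_START | MAX1133_UNIPOLAR;	/* placeholder bits */
	__be16 rx;
	struct spi_transfer xfers[] = {
		{ .tx_buf = &tx, .len = 1, },
		{ .rx_buf = &rx, .len = 2, },
	};
	int ret;

	/* One synchronous SPI message per sample: every call pays the
	 * full message setup, queueing and completion-wakeup cost. */
	ret = spi_sync_transfer(spi, xfers, ARRAY_SIZE(xfers));
	if (ret)
		return ret;

	*val = be16_to_cpu(rx);
	return 0;
}

So the per-message overhead, rather than the SPI clock rate, is what
sets the ceiling.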

>> [...]
> Lars would indeed be the person I'd ask about this.  I've cc'd
> him directly as often emails get buried in the list and missed
> by people for at least a little while.
>
> Keep us informed of how you get on. Will be useful info for others.

Thank you very much for your reply and the pointers! The project I am
working on started out quite small and has grown far beyond what I
expected, touching many areas I have little experience in, so any
feedback is tremendously valuable.

Kind regards,
    Matthias


Thread overview: 8+ messages
2017-01-13 14:13 Using IIO for high-speed DAQ Matthias Klumpp
2017-01-14 16:12 ` Jonathan Cameron
2017-01-14 17:03   ` Lars-Peter Clausen
2017-01-14 17:18     ` Matthias Klumpp
2017-01-14 17:49       ` Lars-Peter Clausen
2017-01-16 14:23         ` Matthias Klumpp
2017-01-16 21:40           ` Lars-Peter Clausen
2017-01-14 17:25   ` Matthias Klumpp [this message]
