From: Lars-Peter Clausen <lars@metafoo.de>
To: Jonathan Cameron <jic23@kernel.org>,
	Matthias Klumpp <matthias@tenstral.net>,
	linux-iio@vger.kernel.org
Subject: Re: Using IIO for high-speed DAQ
Date: Sat, 14 Jan 2017 18:03:14 +0100	[thread overview]
Message-ID: <07a0ebde-dffe-ba6a-5fac-eade517b5a21@metafoo.de> (raw)
In-Reply-To: <d1c324a3-17c9-ed66-3946-ccf9a0bbce5d@kernel.org>

On 01/14/2017 05:12 PM, Jonathan Cameron wrote:
> On 13/01/17 14:13, Matthias Klumpp wrote:
>> Hello!
>> I would like to use IIO for high-speed data acquisition. In order to
>> do that I implemented a new IIO driver for the ADC chip I am using
>> (MAX1133, maximum sampling frequency 200ksps).
>>
>> The initial approach with triggered buffers was way too slow,
>> achieving only a maximum sampling frequency of 4ksps.
> Any idea where the bottleneck was specifically? I'm guessing it
> might have been on the userspace side, but I'm not certain.

This is less a software restriction and more a hardware design issue.
Devices like the MAX1133 are not designed to be interfaced to a
general-purpose operating system and still deliver datasheet performance.
The device requires a separate SPI transfer for every sample. In an
interrupt-driven environment the context-switch overhead introduced by this
approach makes it impractical at higher sampling rates.
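
To make the cost concrete, this is roughly what the per-sample path looks
like in a triggered-buffer driver. It is only a minimal sketch; the struct
and function names are made up rather than taken from your driver, but the
pattern of one full SPI transaction per 16-bit sample is the point:

#include <linux/interrupt.h>
#include <linux/spi/spi.h>
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>
#include <linux/iio/trigger.h>
#include <linux/iio/trigger_consumer.h>

/* Hypothetical driver state, for illustration only. */
struct max1133_state {
	struct spi_device *spi;
	struct {
		__be16 sample;
		s64 timestamp __aligned(8);
	} scan;
};

static irqreturn_t max1133_trigger_handler(int irq, void *p)
{
	struct iio_poll_func *pf = p;
	struct iio_dev *indio_dev = pf->indio_dev;
	struct max1133_state *st = iio_priv(indio_dev);
	int ret;

	/* One full SPI transaction (setup, transfer, completion interrupt,
	 * context switch back to this thread) for a single 16-bit sample. */
	ret = spi_read(st->spi, &st->scan.sample, sizeof(st->scan.sample));
	if (ret)
		goto done;

	iio_push_to_buffers_with_timestamp(indio_dev, &st->scan,
					   pf->timestamp);
done:
	iio_trigger_notify_done(indio_dev->trig);
	return IRQ_HANDLED;
}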

You either need to dedicate one full CPU to the capture process, doing
nothing but configuring the SPI controller and spinning on the SPI transfer
completion event. The data can then be transferred through a mailbox-like
mechanism to a different CPU that runs the application.
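
A very rough sketch of what that dedicated-CPU loop could look like as a
bound kernel thread. The register offsets and bit names are pure
placeholders (they depend entirely on your SPI controller), and the kfifo
plays the role of the mailbox:

#include <linux/kthread.h>
#include <linux/kfifo.h>
#include <linux/io.h>
#include <linux/bitops.h>

/* Placeholder register layout, entirely controller specific. */
#define SPI_CTRL_REG	0x00
#define SPI_STAT_REG	0x04
#define SPI_DATA_REG	0x08
#define SPI_CTRL_START	BIT(0)
#define SPI_STAT_DONE	BIT(0)

static DEFINE_KFIFO(sample_fifo, u16, 4096);

static int daq_capture_thread(void *data)
{
	void __iomem *regs = data;
	u16 sample;

	while (!kthread_should_stop()) {
		/* Kick off one transfer and spin on completion: no sleeping,
		 * no interrupts, no context switches on this CPU. */
		writel(SPI_CTRL_START, regs + SPI_CTRL_REG);
		while (!(readl(regs + SPI_STAT_REG) & SPI_STAT_DONE))
			cpu_relax();

		sample = readl(regs + SPI_DATA_REG) & 0xffff;
		/* Mailbox-like handoff; the consumer on another CPU drains
		 * the fifo in bulk. */
		kfifo_put(&sample_fifo, sample);
	}
	return 0;
}

/* Set up along the lines of:
 *	task = kthread_create(daq_capture_thread, regs, "daq-capture");
 *	kthread_bind(task, capture_cpu);
 *	wake_up_process(task);
 */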

Or you need offloading support in hardware somewhere between the device and
the application processor. The offloading block needs to handle flow control
and group multiple samples into a larger block, which is then transferred in
bulk to the application processor. This can, for example, be implemented
with a small FPGA or CPLD between the converter and the application
processor. In rare cases the SPI controller built into the application
processor SoC is capable of doing hardware flow control.
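
Once something on the other side buffers the samples for you, the host side
collapses to one bulk transaction per block. Again a sketch with made-up
names; 'block' has to be a DMA-safe (e.g. kmalloc'd) buffer and the scan
layout is assumed to be just the one 16-bit channel, no timestamp:

#include <linux/spi/spi.h>
#include <linux/iio/iio.h>
#include <linux/iio/buffer.h>

#define BLOCK_SAMPLES	256

static int daq_read_block(struct iio_dev *indio_dev, struct spi_device *spi,
			  __be16 *block)
{
	struct spi_transfer xfer = {
		.rx_buf = block,
		.len = BLOCK_SAMPLES * sizeof(*block),
	};
	int ret, i;

	/* One transaction, one completion, one context switch for 256
	 * samples instead of one of each per sample. */
	ret = spi_sync_transfer(spi, &xfer, 1);
	if (ret)
		return ret;

	for (i = 0; i < BLOCK_SAMPLES; i++)
		iio_push_to_buffers(indio_dev, &block[i]);

	return 0;
}

/* Typically driven from a threaded IRQ wired to the FPGA's "block ready"
 * line, or from a hrtimer if the block rate is known. */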

The IIO framework itself does not impose a limit on the maximum sampling
rate; it is all a matter of what the hardware can handle. We have systems
that do continuous transfers in the hundreds of MSPS and one-shot captures
in the GSPS range.
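
On the userspace side nothing special is needed for that; with a
block-oriented driver the standard IIO buffer interface is enough and you
simply read() large chunks from the character device. A trivial example,
assuming the buffer has already been configured and enabled through sysfs
(scan_elements/*_en, buffer/length, buffer/enable); the device number and
16-bit sample size are just examples:

#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t block[4096];
	ssize_t n;
	int fd;

	fd = open("/dev/iio:device0", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Each read() moves a whole block of samples out of the kernel. */
	while ((n = read(fd, block, sizeof(block))) > 0) {
		/* process n / sizeof(block[0]) samples */
	}

	close(fd);
	return 0;
}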

- Lars
