linux-spi.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Mark Brown <broonie@kernel.org>
To: David Jander <david@protonic.nl>
Cc: Marc Kleine-Budde <mkl@pengutronix.de>,
	linux-spi@vger.kernel.org, Oleksij Rempel <ore@pengutronix.de>
Subject: Re: [RFC] A new SPI API for fast, low-latency regmap peripheral access
Date: Tue, 17 May 2022 12:57:18 +0100	[thread overview]
Message-ID: <YoONngxX/jdTjSOH@sirena.org.uk> (raw)
In-Reply-To: <20220517122439.744cf30c@erd992>

[-- Attachment #1: Type: text/plain, Size: 3109 bytes --]

On Tue, May 17, 2022 at 12:24:39PM +0200, David Jander wrote:

> (mainly in spi.c for now). Time interrupt line stays low:

>  1. Kernel 5.18-rc1 with only polling patches from spi-next: 135us

>  2. #if 0 around all stats and accounting calls: 100us

>  3. The _fast API of my original RFC: 55us

> This shows that the accounting code is a bit less than half of the dispensable
> overhead for my use-case. Indeed an easy target.

Good.

> on, so I wonder whether there is something to gain if one could just call
> spi_bus_lock() at the start of several such small sync transfers and use
> non-locking calls (skip the queue lock and io_mutex)? Not sure that would have
> a meaningful impact, but to get an idea, I replaced the bus_lock_spinlock and
> queue_lock in __spi_sync() and __spi_queued_transfer() with the bare code in
> __spi_queued_transfer(), since it won't submit work to the queue in this case
> anyway. The resulting interrupt-active time decreased by another 4us, which is
> approximately 5% of the dispensable overhead. For the record, that's 2us per
> spinlock lock/unlock pair.

I do worry about how this might perform under different loads where
there are things coming in from more than one thread.

> > One thing that might be useful would be if we could start the initial
> > status read message from within the hard interrupt handler of the client
> > driver with the goal that by the time it's threaded interrupt handler
> > runs we might have the data available.  That could go wrong on a lightly
> > loaded system where we might end up running the threaded handler while
> > the transfer is still running, OTOH if it's lightly loaded that might
> > not matter.  Or perhaps just use a completion from the SPI operation and
> > not bother with the threaded handler at all.

> You mean ("ctx" == context switch):

>  1. hard-IRQ, queue msg --ctx--> SPI worker, call msg->complete() which does
>  thread IRQ work (but can only do additional sync xfers from this context).

> vs.

>  2. hard-IRQ, queue msg --ctx--> SPI worker, call completion --ctx--> IRQ
>  thread wait for completion and does more xfers...

> vs (and this was my idea).

>  3. hard-IRQ, pump FIFO (if available) --ctx--> IRQ thread, poll FIFO, do more
>  sync xfers...

Roughly 1, but with a lot of overlap with option 3.  I'm unclear what
you mean by "queue message" here.

> Option 3 would require a separation of spi_sync_transfer into two halves. One
> half just activates CS (non-sleep GPIO api!) and fills the FIFO. The second
> half polls the FIFO for transfer completion. This path could only be chosen if
> the SPI controller has a FIFO that can hold the whole message. In other words a
> lot of spacial case handling for what it is worth probably... but still
> interesting.

Yes, that's the whole point.  This also flows nicely when you've got a
queue since you can restart the hardware from the interrupt context
without waiting to complete the transfer that just finished.

> Option 2 is probably not that bad if the SPI worker can run on another core?

Pretty much anything benefits with another core.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

  reply	other threads:[~2022-05-17 11:57 UTC|newest]

Thread overview: 25+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2022-05-12 14:34 [RFC] A new SPI API for fast, low-latency regmap peripheral access David Jander
2022-05-12 20:37 ` Mark Brown
2022-05-12 22:12   ` Mark Brown
2022-05-13 12:46   ` David Jander
2022-05-13 19:36     ` Mark Brown
2022-05-16 16:28       ` Marc Kleine-Budde
2022-05-16 17:46         ` Mark Brown
2022-05-17 10:24           ` David Jander
2022-05-17 11:57             ` Mark Brown [this message]
2022-05-17 13:09               ` David Jander
2022-05-17 13:43                 ` Mark Brown
2022-05-17 15:16                   ` David Jander
2022-05-17 18:17                     ` Mark Brown
2022-05-19  8:12                       ` David Jander
2022-05-19  8:24                         ` Marc Kleine-Budde
2022-05-19 12:14                         ` Andrew Lunn
2022-05-19 14:33                           ` David Jander
2022-05-19 15:21                             ` Andrew Lunn
2022-05-20 15:22                         ` Mark Brown
2022-05-23 14:48                           ` David Jander
2022-05-23 14:59                             ` Marc Kleine-Budde
2022-05-24 11:30                               ` David Jander
2022-05-24 19:46                                 ` Mark Brown
2022-05-25 14:39                                   ` David Jander
2022-05-13 12:10 ` Andrew Lunn

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=YoONngxX/jdTjSOH@sirena.org.uk \
    --to=broonie@kernel.org \
    --cc=david@protonic.nl \
    --cc=linux-spi@vger.kernel.org \
    --cc=mkl@pengutronix.de \
    --cc=ore@pengutronix.de \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).