On Fri, May 15, 2020 at 11:02:50PM +0300, Serge Semin wrote:
> On Fri, May 15, 2020 at 01:41:31PM +0100, Mark Brown wrote:

> > I guess we could, though it's really there because for historical
> > reasons we've got a bunch of different ways of specifying delays from
> > client drivers rather than for the executing a delay where you've
> > already got a good idea of the length of the delay.

> A beauty of spi_delay_exec() is that it provides a selective delay. I mean it
> checks the delay value and selects an appropriate delay method like ndelay,
> udelay and so on. That's the only reason I'd use it here. But It has got a few
> drawbacks:

Right, usually you'd have a good idea how long the delay is and therefore
just be able to go directly for an appropriate delay function.

> - timeout value has type u16. It's too small to keep nanoseconds.

That could be increased, though obviously if you have a bigger delay you
can specify it in usecs instead.

> - semantically the xfer argument isn't optional and we can't fetch it that
>   easy in the dmaengine completion callbacks.

Not sure I follow this.

> So if there were an alternative method like _spi_transfer_delay_ns() I'd use
> it. Otherwise we'd need to locally implement the selective delay. Unless you
> know another alternative, which does it. If you don't and there isn't one then
> in order to not over-complicate a simple delay-loop code I'd simply leave the
> ndelay() here.

Not that I'm aware of.