Hi Mark,

On Wed, Aug 14, 2019 at 05:02:34PM +0100, Mark Brown wrote:
> On Wed, Aug 14, 2019 at 12:53:24PM +0200, Ali Burak Parım wrote:
> > On Thu, 1 Aug 2019 at 13:48, Mark Brown wrote:
> > > On Thu, Aug 01, 2019 at 01:43:06PM +0200, Ali Burak Parım wrote:
> > > > On Thu, Jul 25, 2019 at 03:01:12PM +0200, Maxime Ripard wrote:
> > > > Hardware is a custom board I designed with 4 PDM output microphones
> > > > and 2 adau7002 devices as the codec for PDM-to-PCM. We want to do
> > > > signal processing with this board. Therefore, having separate streams
> > > > for each microphone is crucial to the application, though I am not
> > > > sure where we should implement this exactly.
> > >
> > > What is this processing - are the streams from these microphones
> > > logically related in any way (eg, is this a microphone array)?
> >
> > Yes, it is a microphone array application for speech enhancement. Thus,
> > signal levels and physical time delays are important.
> >
> > > There's probably going to be some overlap in the input signals at
> > > least. If you need to for example correlate different microphones
> > > then that's relevant.
> >
> > Yes, we correlate different microphone signals in some of our algorithms.
>
> OK, in that case I'd recommend providing them to userspace as a single
> four channel stream - keeping everything bundled together as long as
> possible to make it easier to keep the processing synced up.

Ok, that's what I had in mind as well :)

However, it looks like we can only capture as many channels as the max
exposed by the codec on the link? Any attempt at capturing four channels
here using arecord was either reduced to two channels (the number of
channels provided by the adau7002 driver) or simply refused by the ALSA
core.

Is there anything we need to configure or work on to enable this?

Maxime

-- 
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
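
[Editor's note: for readers following along, the setup discussed above could
be described as a single capture DAI link carrying two adau7002 instances.
The following devicetree fragment is only a hedged sketch of that idea, not
the actual board's description: the node names, labels, and the CPU DAI
phandle (&i2s0) are placeholders, and it assumes a card driver that accepts
multiple codec subnodes on one link.]

```dts
/* Hypothetical sketch: one capture DAI link with two adau7002
 * PDM-to-PCM converters. &i2s0 and all node names are placeholders. */
sound {
	compatible = "simple-audio-card";
	simple-audio-card,name = "pdm-mic-array";

	simple-audio-card,dai-link@0 {
		format = "i2s";

		cpu {
			sound-dai = <&i2s0>;
		};

		/* Two codec entries on one link. Each adau7002 exposes at
		 * most two channels, which matches the 2-channel limit
		 * observed with arecord in the mail above. */
		codec-0 {
			sound-dai = <&adau7002_a>;
		};
		codec-1 {
			sound-dai = <&adau7002_b>;
		};
	};
};

adau7002_a: adau7002-a {
	compatible = "adi,adau7002";
	#sound-dai-cells = <0>;
};

adau7002_b: adau7002-b {
	compatible = "adi,adau7002";
	#sound-dai-cells = <0>;
};
```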