linux-arm-kernel.lists.infradead.org archive mirror
* Propagating audio properties along the audio path
From: Marc Gonzalez @ 2019-09-17 15:33 UTC
  To: alsa-devel; +Cc: Takashi Iwai, Linux ARM, Jaroslav Kysela

Hello everyone,

Disclaimer: I've never worked in the sound/ layer, and it is possible that
some of my questions are silly or obvious.

Basically, I'm trying to implement some form of eARC(*) on an arm64 SoC.
(*) enhanced Audio Return Channel (from HDMI 2.1)

The setup looks like this:

A = Some kind of audio source, typically a TV or game console
B = The arm64 SoC, equipped with some nice speakers

   HDMI
A ------> B

If we look inside B, we actually have
B1 = an eARC receiver (input = HDMI, output = I2S)
B2 = an audio DSP (input = I2S, output = speakers)

    I2S        ?
B1 -----> B2 -----> speakers


If I read the standard right, B is supposed to advertise which audio formats
it supports, and A is supposed to pick "the best". For the sake of argument,
let's say A picks "PCM, 48 kHz, 8 channels, 16b".

At some point, B receives audio packets, parses the Channel Status, and
determines that A is sending "PCM, 48 kHz, 8 channels, 16b". The driver
then configures the I2S link, and forwards the audio stream over I2S to
the DSP.
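
As an aside, by "Channel Status" I mean the IEC 958 channel status
bytes. A rough sketch of the decoding I have in mind, using the masks
from include/sound/asoundef.h (consumer format assumed, only two
rates shown):

        #include <linux/types.h>
        #include <sound/asoundef.h>     /* IEC958_AES* channel status bits */

        static bool cs_is_pcm(const u8 *cs)
        {
                /* AES0: "non-audio" set means compressed data, not PCM */
                return !(cs[0] & IEC958_AES0_NONAUDIO);
        }

        static unsigned int cs_rate(const u8 *cs)
        {
                switch (cs[3] & IEC958_AES3_CON_FS) {
                case IEC958_AES3_CON_FS_44100: return 44100;
                case IEC958_AES3_CON_FS_48000: return 48000;
                default: return 0;      /* more rates in asoundef.h */
                }
        }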

QUESTION_1:
How is the DSP supposed to "learn" the properties of the audio stream?
(AFAIU, they're not embedded in the data, so there must be some side-channel?)
I assume the driver of B1 is supposed to propagate the info to the driver of B2?
(Via some call-backs? By calling a function in B2?)
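
To make QUESTION_1 concrete, the kind of side-channel I have in mind
would look something like the sketch below. To be clear, all of these
names are hypothetical; nothing like this exists as far as I know:

        #include <linux/device.h>

        /* All names hypothetical: a side-channel from B1 to B2 */
        struct stream_params {
                unsigned int rate;      /* e.g. 48000 */
                unsigned int channels;  /* e.g. 8 */
                unsigned int width;     /* e.g. 16 bits */
        };

        /* B2 (DSP driver) registers a callback with B1 (receiver driver);
         * B1 invokes it after parsing new Channel Status info. */
        typedef void (*params_notify_fn)(void *ctx,
                                         const struct stream_params *p);

        int earc_register_params_notify(struct device *rx_dev,
                                        params_notify_fn fn, void *ctx);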

QUESTION_2:
Does it ever make sense for B2 to ask B1 to change the audio properties?
(Not sure if B1 is even allowed to renegotiate.)

Regards.


* Re: Propagating audio properties along the audio path
From: Marc Gonzalez @ 2019-09-20  9:50 UTC
  To: alsa-devel; +Cc: Takashi Iwai, Linux ARM, Jaroslav Kysela

On 17/09/2019 17:33, Marc Gonzalez wrote:

> Disclaimer: I've never worked in the sound/ layer, and it is possible that
> some of my questions are silly or obvious.
> 
> Basically, I'm trying to implement some form of eARC(*) on an arm64 SoC.
> (*) enhanced Audio Return Channel (from HDMI 2.1)
> 
> The setup looks like this:
> 
> A = Some kind of audio source, typically a TV or game console
> B = The arm64 SoC, equipped with some nice speakers
> 
>    HDMI
> A ------> B
> 
> If we look inside B, we actually have
> B1 = an eARC receiver (input = HDMI, output = I2S)
> B2 = an audio DSP (input = I2S, output = speakers)
> 
>     I2S        ?
> B1 -----> B2 -----> speakers
> 
> 
> If I read the standard right, B is supposed to advertise which audio formats
> it supports, and A is supposed to pick "the best". For the sake of argument,
> let's say A picks "PCM, 48 kHz, 8 channels, 16b".
> 
> At some point, B receives audio packets, parses the Channel Status, and
> determines that A is sending "PCM, 48 kHz, 8 channels, 16b". The driver
> then configures the I2S link, and forwards the audio stream over I2S to
> the DSP.
> 
> QUESTION_1:
> How is the DSP supposed to "learn" the properties of the audio stream?
> (AFAIU, they're not embedded in the data, so there must be some side-channel?)
> I assume the driver of B1 is supposed to propagate the info to the driver of B2?
> (Via some call-backs? By calling a function in B2?)
> 
> QUESTION_2:
> Does it ever make sense for B2 to ask B1 to change the audio properties?
> (Not sure if B1 is even allowed to renegotiate.)

I think it boils down to the "Dynamic PCM" abstraction?

	https://www.kernel.org/doc/html/latest/sound/soc/dpcm.html
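
If I understand the doc correctly, the topology would be declared as
a front end link (visible to user-space) plus a back end link (the
physical I2S). A minimal sketch, with made-up names and the
cpu/codec/platform binding omitted:

        #include <sound/soc.h>

        static struct snd_soc_dai_link links[] = {
                {       /* front end: exposed to user-space as a PCM */
                        .name           = "eARC Capture",
                        .stream_name    = "eARC Capture",
                        .dynamic        = 1,    /* DPCM front end */
                        .dpcm_capture   = 1,
                },
                {       /* back end: the physical I2S link from B1 */
                        .name           = "I2S-RX",
                        .no_pcm         = 1,    /* DPCM back end */
                        .dpcm_capture   = 1,
                },
        };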


The downstream driver (7500 lines) is tough to digest for a noob.

	https://source.codeaurora.org/quic/la/kernel/msm-4.4/tree/sound/soc/msm/msm8998.c?h=LE.UM.1.3.r3.25

I'll keep chipping at whatever docs I can find.


One more concern popped up: if the audio stream changes mid-capture
(for example, a different TV program uses different audio settings),
then I would detect this in the eARC receiver, but it's not clear
(to me) how to propagate the info to the DSP...

I'm not even sure when the HW params actually get applied...
Is it for SNDRV_PCM_IOCTL_PREPARE? SNDRV_PCM_IOCTL_START?

I couldn't find much documentation for the IOCTLs in the kernel:

$ git grep SNDRV_PCM_IOCTL  Documentation/
Documentation/sound/designs/tracepoints.rst:value to these parameters, then execute ioctl(2) with SNDRV_PCM_IOCTL_HW_REFINE
Documentation/sound/designs/tracepoints.rst:or SNDRV_PCM_IOCTL_HW_PARAMS. The former is used just for refining available
Documentation/sound/designs/tracepoints.rst:        SNDRV_PCM_IOCTL_HW_REFINE only. Applications can select which
Documentation/sound/designs/tracepoints.rst:        SNDRV_PCM_IOCTL_HW_PARAMS, this mask is ignored and all of parameters
Documentation/sound/designs/tracepoints.rst:        SNDRV_PCM_IOCTL_HW_REFINE to retrieve this flag, then decide candidates
Documentation/sound/designs/tracepoints.rst:        of parameters and execute ioctl(2) with SNDRV_PCM_IOCTL_HW_PARAMS to
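
For my own notes: if I read tracepoints.rst and the alsa-lib sources
correctly, the user-space calls map onto the ioctls roughly as below
(error handling omitted), so I would expect the driver's hw_params
callback to run at the HW_PARAMS step:

        #include <alsa/asoundlib.h>

        static void capture_setup(void)
        {
                snd_pcm_t *pcm;
                snd_pcm_hw_params_t *hw;

                snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0);
                snd_pcm_hw_params_malloc(&hw);
                snd_pcm_hw_params_any(pcm, hw); /* SNDRV_PCM_IOCTL_HW_REFINE */
                /* ...constrain rate/channels/format on hw here... */
                snd_pcm_hw_params(pcm, hw);     /* SNDRV_PCM_IOCTL_HW_PARAMS */
                snd_pcm_prepare(pcm);           /* SNDRV_PCM_IOCTL_PREPARE */
                snd_pcm_start(pcm);             /* SNDRV_PCM_IOCTL_START */
        }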


Regards.


* Re: Propagating audio properties along the audio path
From: Marc Gonzalez @ 2019-09-23 10:47 UTC
  To: alsa-devel; +Cc: Takashi Iwai, Linux ARM, Jaroslav Kysela

On 20/09/2019 11:50, Marc Gonzalez wrote:

> One more concern popped up: if the audio stream changes mid-capture
> (for example, a different TV program uses different audio settings),
> then I would detect this in the eARC receiver, but it's not clear
> (to me) how to propagate the info to the DSP...
> 
> I'm not even sure when the HW params actually get applied...
> Is it for SNDRV_PCM_IOCTL_PREPARE? SNDRV_PCM_IOCTL_START?

I enabled debug logs in the sound layer:
echo "file sound/* +fpm" > /sys/kernel/debug/dynamic_debug/control

and sprinkled dump_stack() in several driver callbacks.
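
For reference, the instrumentation was nothing fancy; in a DAI
hw_params callback it looked something like this (driver name
hypothetical):

        #include <linux/printk.h>
        #include <sound/pcm_params.h>
        #include <sound/soc.h>

        static int my_dai_hw_params(struct snd_pcm_substream *substream,
                                    struct snd_pcm_hw_params *params,
                                    struct snd_soc_dai *dai)
        {
                dump_stack();   /* log the path from the ioctl to here */
                dev_dbg(dai->dev, "rate=%u channels=%u\n",
                        params_rate(params), params_channels(params));
                return 0;
        }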

When I run 'tinycap /tmp/earc.wav -t 10 -r 44100 -b 32'
I see the open/SyS_openat call and the capture ioctl call
which together generate calls to
1) dpcm_fe_dai_open
2) dpcm_fe_dai_hw_params
3) dpcm_fe_dai_prepare
4) dpcm_fe_dai_trigger

But everything looks "synchronous", as in "reactions to user-space
commands". I don't see how "asynchronous" events, such as the stream
params changing while a capture is active, are dealt with.

Regards.


* Re: [alsa-devel] Propagating audio properties along the audio path
From: Charles Keepax @ 2019-09-24 13:52 UTC
  To: Marc Gonzalez; +Cc: alsa-devel, Takashi Iwai, Linux ARM

On Mon, Sep 23, 2019 at 12:47:33PM +0200, Marc Gonzalez wrote:
> On 20/09/2019 11:50, Marc Gonzalez wrote:
> 
> > One more concern popped up: if the audio stream changes mid-capture
> > (for example, a different TV program uses different audio settings),
> > then I would detect this in the eARC receiver, but it's not clear
> > (to me) how to propagate the info to the DSP...
> > 
> > I'm not even sure when the HW params actually get applied...
> > Is it for SNDRV_PCM_IOCTL_PREPARE? SNDRV_PCM_IOCTL_START?
> 
> I enabled debug logs in the sound layer:
> echo "file sound/* +fpm" > /sys/kernel/debug/dynamic_debug/control
> 
> and sprinkled dump_stack() in several driver callbacks.
> 
> When I run 'tinycap /tmp/earc.wav -t 10 -r 44100 -b 32'
> I see the open/SyS_openat call and the capture ioctl call
> which together generate calls to
> 1) dpcm_fe_dai_open
> 2) dpcm_fe_dai_hw_params
> 3) dpcm_fe_dai_prepare
> 4) dpcm_fe_dai_trigger
> 
> But everything looks "synchronous", as in "reactions to user-space
> commands". I don't see how "asynchronous" events, such as the stream
> params changing while a capture is active, are dealt with.
> 

In general the ALSA framework doesn't really allow stream
params to change whilst the stream is active. Doing so is
also normally very hard for the types of hardware usually
involved: when changing the clocks on a running I2S bus, for
example, it is very difficult to get both ends to pick up the
change at exactly the correct sample. Some newer buses like
SoundWire have more support for things like this, where the
two ends of the link can synchronise changes, but even there
that is normally used for adding/removing streams from the
bus, not for reconfiguring a running stream.

In your case above I would imagine the system would probably be
set up so that the DSP handles the conversion between the params
requested by the receiver and those requested by user-space.
One of the intentions of DPCM was to allow the backend
(DSP-receiver here) to have different params to the frontend
(DSP-userspace here). Although, as you note, you would still
probably need to add something to propagate those changes to
the DSP. What form does the physical link between the receiver
and the DSP take?
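
For reference, the usual hook for forcing the BE params is the back
end link's be_hw_params_fixup callback in the machine driver;
something along these lines, with the values obviously hard-coded
purely for illustration:

        #include <sound/pcm_params.h>
        #include <sound/soc.h>

        static int i2s_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
                                          struct snd_pcm_hw_params *params)
        {
                struct snd_interval *rate = hw_param_interval(params,
                                                SNDRV_PCM_HW_PARAM_RATE);
                struct snd_interval *channels = hw_param_interval(params,
                                                SNDRV_PCM_HW_PARAM_CHANNELS);

                /* Pin the BE to what the receiver actually delivers,
                 * independent of the params the FE negotiated. */
                rate->min = rate->max = 48000;
                channels->min = channels->max = 8;
                return 0;
        }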

Thanks,
Charles


* Re: [alsa-devel] Propagating audio properties along the audio path
From: Marc Gonzalez @ 2019-09-24 14:26 UTC
  To: Charles Keepax
  Cc: alsa-devel, Mark Brown, Takashi Iwai, Linux ARM, Liam Girdwood

On 24/09/2019 15:52, Charles Keepax wrote:

> In general the ALSA framework doesn't really allow stream
> params to change whilst the stream is active. Doing so is
> also normally very hard for the types of hardware usually
> involved: when changing the clocks on a running I2S bus, for
> example, it is very difficult to get both ends to pick up the
> change at exactly the correct sample. Some newer buses like
> SoundWire have more support for things like this, where the
> two ends of the link can synchronise changes, but even there
> that is normally used for adding/removing streams from the
> bus, not for reconfiguring a running stream.

This jibes with what "filt3r" wrote on #alsa-soc:

"at one point we were just closing the stream (somehow) if we detected
a change in e.g. sample-rate, so the user-space application would fail
on snd_pcm_readi()"

	snd_pcm_stop(p_spdif->capture_stream, SNDRV_PCM_STATE_DISCONNECTED);
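
In context, I imagine that call running from the receiver driver's
"params changed" interrupt path, with the stream lock held; a minimal
sketch (struct earc_priv is hypothetical):

        #include <sound/pcm.h>

        struct earc_priv {
                struct snd_pcm_substream *capture_stream;
                /* ... */
        };

        static void earc_params_changed(struct earc_priv *p_spdif)
        {
                struct snd_pcm_substream *substream = p_spdif->capture_stream;
                unsigned long flags;

                snd_pcm_stream_lock_irqsave(substream, flags);
                if (snd_pcm_running(substream))
                        snd_pcm_stop(substream, SNDRV_PCM_STATE_DISCONNECTED);
                snd_pcm_stream_unlock_irqrestore(substream, flags);
        }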

> In your case above I would imagine the system would probably be
> set up so that the DSP handles the conversion between the params
> requested by the receiver and those requested by user-space.
> One of the intentions of DPCM was to allow the backend
> (DSP-receiver here) to have different params to the frontend
> (DSP-userspace here). Although, as you note, you would still
> probably need to add something to propagate those changes to
> the DSP. What form does the physical link between the receiver
> and the DSP take?

The setup looks like this:

A = Some kind of audio source, typically a TV or game console
B = The arm64 SoC, equipped with some nice speakers

   HDMI
A ------> B

If we look inside B, we actually have
B1 = an eARC receiver (input = HDMI, output = I2S)
B2 = an audio DSP (input = I2S, output = speakers)

    I2S        ?
B1 -----> B2 -----> speakers

To answer your question, B1 and B2 are connected via I2S.

Regards.


* Re: [alsa-devel] Propagating audio properties along the audio path
From: Charles Keepax @ 2019-09-24 14:42 UTC
  To: Marc Gonzalez
  Cc: alsa-devel, Mark Brown, Takashi Iwai, Linux ARM, Liam Girdwood

On Tue, Sep 24, 2019 at 04:26:20PM +0200, Marc Gonzalez wrote:
> On 24/09/2019 15:52, Charles Keepax wrote:
> 
> > In general the ALSA framework doesn't really allow stream
> > params to change whilst the stream is active. Doing so is
> > also normally very hard for the types of hardware usually
> > involved: when changing the clocks on a running I2S bus, for
> > example, it is very difficult to get both ends to pick up the
> > change at exactly the correct sample. Some newer buses like
> > SoundWire have more support for things like this, where the
> > two ends of the link can synchronise changes, but even there
> > that is normally used for adding/removing streams from the
> > bus, not for reconfiguring a running stream.
> 
> This jibes with what "filt3r" wrote on #alsa-soc:
> 
> "at one point we were just closing the stream (somehow) if we detected
> a change in e.g. sample-rate, so the user-space application would fail
> on snd_pcm_readi()"
> 
> 	snd_pcm_stop(p_spdif->capture_stream, SNDRV_PCM_STATE_DISCONNECTED);
> 

Ah ok, yeah, that seems like a pretty good option to me, forcing
user-space to re-open with the new params.

> > In your case above I would imagine the system would probably be
> > set up so that the DSP handles the conversion between the params
> > requested by the receiver and those requested by user-space.
> > One of the intentions of DPCM was to allow the backend
> > (DSP-receiver here) to have different params to the frontend
> > (DSP-userspace here). Although, as you note, you would still
> > probably need to add something to propagate those changes to
> > the DSP. What form does the physical link between the receiver
> > and the DSP take?
> 
> The setup looks like this:
> 
> A = Some kind of audio source, typically a TV or game console
> B = The arm64 SoC, equipped with some nice speakers
> 
>    HDMI
> A ------> B
> 
> If we look inside B, we actually have
> B1 = an eARC receiver (input = HDMI, output = I2S)
> B2 = an audio DSP (input = I2S, output = speakers)
> 
>     I2S        ?
> B1 -----> B2 -----> speakers
> 
> To answer your question, B1 and B2 are connected via I2S.
> 

Ah yeah, reconfiguring the I2S whilst it is running would be a
terrifying prospect.

Thanks,
Charles

