* Re: Question about your DSP topic branch
       [not found] <4D2652C8.7030701@codeaurora.org>
@ 2011-01-25  7:01 ` Patrick Lai
  2011-01-25 11:51   ` Mark Brown
  2011-01-25 16:46   ` Liam Girdwood
  2011-03-30 20:39 ` Question about your DSP topic Patrick Lai
  1 sibling, 2 replies; 22+ messages in thread
From: Patrick Lai @ 2011-01-25  7:01 UTC (permalink / raw)
  To: alsa-devel

Hi Liam,

I have two more questions about your DSP topic branch:

7. I see that in sdp4430.c the SDP4430 MODEM front-end DAI link has
no_host_mode set to SND_SOC_DAI_LINK_NO_HOST. What is the purpose of
no_host_mode? Is it for the use case where two physical devices
exchange audio data without host-processor intervention? If so, will
the framework reject the buffer if a user-space application tries to
write to the PCM buffer?

8. I see there is a DMIC codec (dmic.c) under sound/soc/codecs which
is pretty much just a dummy codec driver. I suppose the configuration
of the DMIC is done in another driver. Would it be better if we had
something like the fixed voltage regulator, so there is no need to
duplicate the effort?

I look forward to your reply.

Thanks

On 1/6/2011 3:39 PM, Patrick Lai wrote:
> Hi Liam,
>
> I sync to your kernel DSP topic branch two days back in attempt to
> understand the up-coming ASOC DSP design. I have few questions to get
> clarification from you.
>
> 1. In the sdp4430.c, there are both FE dai link and BE dai link have
> omap-aess-audio as platform driver. How is omap-aess-audio platform
> driver used in both front-end and backend?
>
> 2. Front-end dai with stream name "Multimedia" has platform driver as
> omap-pcm-audio which is the DMA based. This front-end dai is mapped to
> backend (i.e PDM-DL1) with platform driver as omap-aess-audio.
> Omap-aess-audio looks to me is DSP platform driver. If a stream is DMA
> based, it seems strange to have DSP-based backend.
>
> 3. To best of my knowledge, I see in omap-abe.c which is the front end
> implementation. Playback_trigger() seems to go ahead enable all
> back-ends linked to the given front-end. However, front end may not be
> routed to all back-ends. Where in the code to make sure BE is only
> activated when a stream is being routed to that particular back-end?
>
> 4. omap-abe.c manages activation of BE DAIS and omap-abe-dsp.c manages
> routing of front-end to back-end DAPM widgets and routing map. Am I
> correct? This leads to previous question. How are two drivers working
> together to make sure BEs are only activated if Front-end has been
> routed to them?
>
> 5. Is there mechanism for front-end to switch between DMA and DSP
> platform driver? It looks to me that mapping of frond-end and platform
> driver is predetermined based on use case. For example, HDMI goes
> through DMA as legacy dailink.
>
> 6. struct abe_data has array member called dapm. It looks to me that
> this array simply tracks dapm components status but I don't see it's
> really being used in meaningful way in omap-abe-adsp.c.
>
> Thanks
>


-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic branch
  2011-01-25  7:01 ` Question about your DSP topic branch Patrick Lai
@ 2011-01-25 11:51   ` Mark Brown
  2011-01-26  6:22     ` Patrick Lai
  2011-01-25 16:46   ` Liam Girdwood
  1 sibling, 1 reply; 22+ messages in thread
From: Mark Brown @ 2011-01-25 11:51 UTC (permalink / raw)
  To: Patrick Lai; +Cc: alsa-devel

On Mon, Jan 24, 2011 at 11:01:10PM -0800, Patrick Lai wrote:

> 8. I see there is dmic codec(dmic.c) under sound/soc/codec which is 
> pretty much just a dummy codec driver. I supposed the configuration of 
> DMIC is done in other driver. Would it be better if we could have 

It's for the direct DMIC interfaces that CPUs like the OMAP4 have.

> something like fixed voltage regulator? So, there is no need to 
> duplicate the effort.

Yes, that'd be kind of nice, but given how tiny these no-op drivers
are and the fact that they all need to specify their capabilities,
it's not clear that there's much advantage in combining them into a
single driver - the boilerplate is so small and simple.


* Re: Question about your DSP topic branch
  2011-01-25  7:01 ` Question about your DSP topic branch Patrick Lai
  2011-01-25 11:51   ` Mark Brown
@ 2011-01-25 16:46   ` Liam Girdwood
  2011-01-27 23:41     ` Patrick Lai
  1 sibling, 1 reply; 22+ messages in thread
From: Liam Girdwood @ 2011-01-25 16:46 UTC (permalink / raw)
  To: Patrick Lai; +Cc: alsa-devel, Mark Brown, alsa-devel

Hi Patrick,

CCing in Mark and alsa-devel@alsa-project.org (preferred list)

On Mon, 2011-01-24 at 23:01 -0800, Patrick Lai wrote:
> Hi Liam,
> 
> I have two more questions about your DSP topic branch
> 
> 7. I see in sdp4430.c, SDP4430 MODEM front-end dailink no_host_mode is 
> set to SND_SOC_DAI_LINK_NO_HOST. What is the purpose of no_host_mode?

No-host mode means no audio is transferred to the host CPU in this
case, i.e. the MODEM routes audio directly between the DSP and the
mic/speaker.

This flag also tells the ASoC core that no DMA is required, hence the
code in the DMA branch will not start any DMA. This part still needs
to be cleaned up for upstream.
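
For reference, the MODEM FE link in sdp4430.c looks roughly like the
sketch below - only no_host_mode and the platform name are the point
here, the other field values are illustrative:

static struct snd_soc_dai_link sdp4430_modem_dai = {
        .name = "SDP4430 MODEM",                /* illustrative */
        .stream_name = "MODEM",                 /* illustrative */
        .cpu_dai_name = "MODEM",                /* illustrative */
        .codec_dai_name = "MODEM",              /* illustrative */
        .platform_name = "omap-aess-audio",     /* AESS, no host DMA */
        /* no audio buffers are exchanged with the host CPU */
        .no_host_mode = SND_SOC_DAI_LINK_NO_HOST,
};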

> Is it for use case that two physical devices can exchange audio data 
> without host-processor intervention? If so, if user-space application 
> tries to write to PCM buffer, will framework reject the buffer?
> 

Yes, that's correct. The PCM core will also not complain when it
receives no data.

> 8. I see there is dmic codec(dmic.c) under sound/soc/codec which is 
> pretty much just a dummy codec driver. I supposed the configuration of 
> DMIC is done in other driver. Would it be better if we could have 
> something like fixed voltage regulator? So, there is no need to 
> duplicate the effort.
> 

Yeah, this is a generic DMIC driver. It's designed to have very wide
coverage and should handle most DMICs out there, so it should be able
to fit into your architecture too.
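
Hooking it up is just a matter of registering the codec device and
pointing a DAI link at it, roughly as below. This is only a sketch:
the codec and DAI names are taken from dmic.c in my tree, while the
CPU DAI and platform names are made up, so check everything against
your own tree.

static struct snd_soc_dai_link board_dmic_dai = {
        .name = "DMIC",
        .stream_name = "DMIC Capture",
        .cpu_dai_name = "my-dmic-if",           /* illustrative */
        .platform_name = "my-pcm-audio",        /* illustrative */
        .codec_name = "dmic-codec",
        .codec_dai_name = "dmic-hifi",
};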

> Look forward to seeing your reply soon
> 
> Thanks
> 
> On 1/6/2011 3:39 PM, Patrick Lai wrote:
> > Hi Liam,
> >
> > I sync to your kernel DSP topic branch two days back in attempt to
> > understand the up-coming ASOC DSP design. I have few questions to get
> > clarification from you.
> >
> > 1. In the sdp4430.c, there are both FE dai link and BE dai link have
> > omap-aess-audio as platform driver. How is omap-aess-audio platform
> > driver used in both front-end and backend?
> >

The MODEM and Low Power (LP) front ends (FEs) use the AESS platform
driver since they do not require DMA, whilst the other FEs use the
normal DMA platform driver since they do require DMA.
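
In sdp4430.c the split looks roughly like this (a sketch - only the
platform names matter here, the other field values are illustrative):

static struct snd_soc_dai_link sdp4430_fe_dais[] = {
        {
                /* normal FE: PCM data is DMAed into the DSP FIFO */
                .name = "Multimedia",
                .stream_name = "Multimedia",
                .platform_name = "omap-pcm-audio",
        },
        {
                /* MODEM/LP FE: no host DMA, handled by the AESS */
                .name = "Low Power",
                .stream_name = "Multimedia LP",
                .platform_name = "omap-aess-audio",
        },
};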


> > 2. Front-end dai with stream name "Multimedia" has platform driver as
> > omap-pcm-audio which is the DMA based. This front-end dai is mapped to
> > backend (i.e PDM-DL1) with platform driver as omap-aess-audio.
> > Omap-aess-audio looks to me is DSP platform driver. If a stream is DMA
> > based, it seems strange to have DSP-based backend.
> >

The DMA is used to send the PCM data from the ALSA PCM device to the DSP
FIFO. 

> > 3. To best of my knowledge, I see in omap-abe.c which is the front end
> > implementation. Playback_trigger() seems to go ahead enable all
> > back-ends linked to the given front-end. However, front end may not be
> > routed to all back-ends. Where in the code to make sure BE is only
> > activated when a stream is being routed to that particular back-end?
> >

This is all done in soc-dsp.c now. We use the DAPM graph to work out all
valid routes from FE to BE and vice versa.
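
The graph is built from ordinary DAPM routes registered by the
platform driver, along these lines (the widget names are illustrative;
the route format is the usual {sink, control, source}):

static const struct snd_soc_dapm_route abe_fe_be_map[] = {
        /* FE playback stream feeds a mixer input... */
        { "DL1 Mixer", "Multimedia Switch", "MM_DL" },
        /* ...and the mixer output feeds the PDM-DL1 BE */
        { "PDM-DL1", NULL, "DL1 Mixer" },
};

soc-dsp.c then walks this graph at runtime to find which BEs are
reachable from an active FE.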

> > 4. omap-abe.c manages activation of BE DAIS and omap-abe-dsp.c manages
> > routing of front-end to back-end DAPM widgets and routing map. Am I
> > correct? 

Yes, although the routing management is now all in soc-dsp.c

> This leads to previous question. How are two drivers working
> > together to make sure BEs are only activated if Front-end has been
> > routed to them?
> >

soc-dsp.c now marshals all the PCM ops and makes sure that only valid
paths have their DAIs activated.


> > 5. Is there mechanism for front-end to switch between DMA and DSP
> > platform driver? It looks to me that mapping of frond-end and platform
> > driver is predetermined based on use case. For example, HDMI goes
> > through DMA as legacy dailink.

There is no way to dynamically switch platform drivers atm, but this can
be solved by having a mutually exclusive FE for each use case.

> >
> > 6. struct abe_data has array member called dapm. It looks to me that
> > this array simply tracks dapm components status but I don't see it's
> > really being used in meaningful way in omap-abe-adsp.c.
> >

It is used by the OMAP4 ABE to work out the OPP power level and to
work out the routing between FE and BE.

Liam
-- 
Freelance Developer, SlimLogic Ltd
ASoC and Voltage Regulator Maintainer.
http://www.slimlogic.co.uk


* Re: Question about your DSP topic branch
  2011-01-25 11:51   ` Mark Brown
@ 2011-01-26  6:22     ` Patrick Lai
  2011-01-26 11:20       ` Mark Brown
  0 siblings, 1 reply; 22+ messages in thread
From: Patrick Lai @ 2011-01-26  6:22 UTC (permalink / raw)
  To: Mark Brown; +Cc: alsa-devel

On 1/25/2011 3:51 AM, Mark Brown wrote:
> On Mon, Jan 24, 2011 at 11:01:10PM -0800, Patrick Lai wrote:
>
>> 8. I see there is dmic codec(dmic.c) under sound/soc/codec which is
>> pretty much just a dummy codec driver. I supposed the configuration of
>> DMIC is done in other driver. Would it be better if we could have
>
> It's for direct DMIC interfaces in CPUs like OMAP4 has.
>
>> something like fixed voltage regulator? So, there is no need to
>> duplicate the effort.
>
> Yes, that'd be kind of nice but given how tiny these noop drivers are
> and the fact that they do all need to specify their capabilites it's not
> clear that there's much advantage from combining them into a single
> driver - the boiler plate is so small and simple.

Do we consider the case where a codec driver is not required, such as
a virtual sink, or a sink that is configured outside of the ALSA
driver? If we create a dummy codec driver for each use case, wouldn't
sound/soc/codecs end up littered with a bunch of no-op drivers?
Furthermore, couldn't the capabilities be passed through the platform
device?

Thanks
Patrick

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic branch
  2011-01-26  6:22     ` Patrick Lai
@ 2011-01-26 11:20       ` Mark Brown
  2011-01-27 21:51         ` Patrick Lai
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Brown @ 2011-01-26 11:20 UTC (permalink / raw)
  To: Patrick Lai; +Cc: alsa-devel

On Tue, Jan 25, 2011 at 10:22:41PM -0800, Patrick Lai wrote:
> On 1/25/2011 3:51 AM, Mark Brown wrote:

> >Yes, that'd be kind of nice but given how tiny these noop drivers are
> >and the fact that they do all need to specify their capabilites it's not
> >clear that there's much advantage from combining them into a single
> >driver - the boiler plate is so small and simple.

> Do we consider the case that codec driver is not required such as
> virtual sink or sink is configured outside of ALSA driver? If we
> create dummy codec driver for each use case, wouldn't
> sound/soc/codecs end up littered with bunch of noop drivers?

Yup, but it's not really a big cost - they're all so trivial.

> Furthermore, couldn't capabilities
> being passed through platform device?

Right, but of course you probably end up defining a common set of
platform data for each device so people don't have to cut'n'paste the
same thing into all the different board files.
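
i.e. something along these lines - this is purely hypothetical, no
such struct exists in the tree today:

/* hypothetical platform data for a generic no-op CODEC driver */
struct dummy_codec_pdata {
        u64 formats;                    /* SNDRV_PCM_FMTBIT_* */
        unsigned int rates;             /* SNDRV_PCM_RATE_* */
        unsigned int channels_min;
        unsigned int channels_max;
};

You'd still end up filling one of these in per board, which is about
as much typing as the capability tables in the no-op drivers are now.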


* Re: Question about your DSP topic branch
  2011-01-26 11:20       ` Mark Brown
@ 2011-01-27 21:51         ` Patrick Lai
  2011-01-31 13:30           ` Mark Brown
  0 siblings, 1 reply; 22+ messages in thread
From: Patrick Lai @ 2011-01-27 21:51 UTC (permalink / raw)
  To: Mark Brown; +Cc: alsa-devel

On 1/26/2011 3:20 AM, Mark Brown wrote:
> On Tue, Jan 25, 2011 at 10:22:41PM -0800, Patrick Lai wrote:
>> On 1/25/2011 3:51 AM, Mark Brown wrote:
>
>>> Yes, that'd be kind of nice but given how tiny these noop drivers are
>>> and the fact that they do all need to specify their capabilites it's not
>>> clear that there's much advantage from combining them into a single
>>> driver - the boiler plate is so small and simple.
>
>> Do we consider the case that codec driver is not required such as
>> virtual sink or sink is configured outside of ALSA driver? If we
>> create dummy codec driver for each use case, wouldn't
>> sound/soc/codecs end up littered with bunch of noop drivers?
>
> Yup, but it's not really a big cost - they're all so trivial.

Currently, I already have a few dummy codec drivers in mind for the
project I am working on. I am not worried about the size of these
files. Besides having tons of dummy codec drivers in the source tree,
I am also weighing the value of and the effort involved in upstreaming
these trivial drivers.

>> Furthermore, couldn't capabilities
>> being passed through platform device?
>
> Right, but of course you probably end up defining a common set of
> platform data for each device so people don't have to cut'n'paste the
> same thing into all the different board files.

Yes, that would be convenient for others. However, I see the effort
of creating multiple platform devices and pasting the capabilities
into the board file as far less than that of upstreaming no-op drivers
and changing the Kconfig and Makefile under sound/soc/codecs.

Thanks

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic branch
  2011-01-25 16:46   ` Liam Girdwood
@ 2011-01-27 23:41     ` Patrick Lai
  2011-03-15  7:08       ` [alsa-devel] " Patrick Lai
  0 siblings, 1 reply; 22+ messages in thread
From: Patrick Lai @ 2011-01-27 23:41 UTC (permalink / raw)
  To: Liam Girdwood; +Cc: alsa-devel, alsa-devel, Mark Brown, linux-arm-msm

On 1/25/2011 8:46 AM, Liam Girdwood wrote:
> Hi Patrick,
>
> CCing in Mark and alsa-devel@alsa-project.org (preferred list)
>
> On Mon, 2011-01-24 at 23:01 -0800, Patrick Lai wrote:
>> Hi Liam,
>>
>> I have two more questions about your DSP topic branch
>>
>> 7. I see in sdp4430.c, SDP4430 MODEM front-end dailink no_host_mode is
>> set to SND_SOC_DAI_LINK_NO_HOST. What is the purpose of no_host_mode?
>
> No host mode means no audio is transferred to the host CPU in this case.
> i.e. the MODEM routes audio directly between the DSP and mic/speaker.
>
> This flags also tells the ASoC core that no DMA will be required, hence
> the code in the DMA branch will not start any DMA. This part had to be
> cleaned up for upstream.
>
>> Is it for use case that two physical devices can exchange audio data
>> without host-processor intervention? If so, if user-space application
>> tries to write to PCM buffer, will framework reject the buffer?
>>
>
> Yes, that's correct. The PCM core will also not complain here when it
> receive no data either.

I experimented with the NO_HOST option and found that the platform
driver's PCM copy function gets called. Is that part of the cleanup
you were talking about?

>> 8. I see there is dmic codec(dmic.c) under sound/soc/codec which is
>> pretty much just a dummy codec driver. I supposed the configuration of
>> DMIC is done in other driver. Would it be better if we could have
>> something like fixed voltage regulator? So, there is no need to
>> duplicate the effort.
>>
>
> Yeah, this is a generic DMIC driver. It's designed to have a very wide
> coverage and should cover most DMICs out there. So it should also be
> able to fit into your architecture too.
>
>> Look forward to seeing your reply soon
>>
>> Thanks
>>
>> On 1/6/2011 3:39 PM, Patrick Lai wrote:
>>> Hi Liam,
>>>
>>> I sync to your kernel DSP topic branch two days back in attempt to
>>> understand the up-coming ASOC DSP design. I have few questions to get
>>> clarification from you.
>>>
>>> 1. In the sdp4430.c, there are both FE dai link and BE dai link have
>>> omap-aess-audio as platform driver. How is omap-aess-audio platform
>>> driver used in both front-end and backend?
>>>
>
> The MODEM and Low Power (LP) Front Ends (FE) use the AESS platform
> driver since they do not require DMA, whilts the other FE's use the
> normal DMA platform driver since they do require DMA.
>
The PCM functions inside omap-abe-dsp.c seem to be no-ops if the DAI
ID is not MODEM or LP. I guess reusing the omap-aess-audio platform
driver for the back-end DAI links serves the architectural requirement
that each DAI link must have a platform driver. Is that the right
impression?


>
>>> 2. Front-end dai with stream name "Multimedia" has platform driver as
>>> omap-pcm-audio which is the DMA based. This front-end dai is mapped to
>>> backend (i.e PDM-DL1) with platform driver as omap-aess-audio.
>>> Omap-aess-audio looks to me is DSP platform driver. If a stream is DMA
>>> based, it seems strange to have DSP-based backend.
>>>
>
> The DMA is used to send the PCM data from the ALSA PCM device to the DSP
> FIFO.
>
>>> 3. To best of my knowledge, I see in omap-abe.c which is the front end
>>> implementation. Playback_trigger() seems to go ahead enable all
>>> back-ends linked to the given front-end. However, front end may not be
>>> routed to all back-ends. Where in the code to make sure BE is only
>>> activated when a stream is being routed to that particular back-end?
>>>
>
> This is all done in soc-dsp.c now. We use the DAPM graph to work out all
> valid routes from FE to BE and vice versa.
>

I experimented with dynamic routing. If I issue the mixer command to
route AIF_IN to AIF_OUT through the mixer widget before starting
playback, it works. Otherwise, I get an I/O error from aplay. I think
it's acceptable to require the application to set up the path before
playback starts. For the device-switch case, is it mandatory to route
the stream to the mixer of BE DAI-LINK2 before de-routing the stream
from the mixer of BE DAI-LINK1?

>>> 4. omap-abe.c manages activation of BE DAIS and omap-abe-dsp.c manages
>>> routing of front-end to back-end DAPM widgets and routing map. Am I
>>> correct?
>
> Yes, although the routing management is now all in soc-dsp.c
>
>> This leads to previous question. How are two drivers working
>>> together to make sure BEs are only activated if Front-end has been
>>> routed to them?
>>>
>
> soc-dsp.c now marshals all the PCM ops and makes sure that only valid
> paths have their DAIs activated.
>
>
>>> 5. Is there mechanism for front-end to switch between DMA and DSP
>>> platform driver? It looks to me that mapping of frond-end and platform
>>> driver is predetermined based on use case. For example, HDMI goes
>>> through DMA as legacy dailink.
>
> There is no way to dynamically switch platform drivers atm, but this can
> be solved by having a mutually exclusive FE for each use case.
>
>>>
>>> 6. struct abe_data has array member called dapm. It looks to me that
>>> this array simply tracks dapm components status but I don't see it's
>>> really being used in meaningful way in omap-abe-adsp.c.
>>>
>
> It used by the OMAP4 ABE to work out the OPP power level and to work out
> the routing between FE and BE.
>
> Liam


-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic branch
  2011-01-27 21:51         ` Patrick Lai
@ 2011-01-31 13:30           ` Mark Brown
  2011-03-17  7:29             ` Patrick Lai
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Brown @ 2011-01-31 13:30 UTC (permalink / raw)
  To: Patrick Lai; +Cc: alsa-devel

On Thu, Jan 27, 2011 at 01:51:58PM -0800, Patrick Lai wrote:
> On 1/26/2011 3:20 AM, Mark Brown wrote:

> >Yup, but it's not really a big cost - they're all so trivial.

> Currently, I already have few dummy codec drivers in mind for the
> project I am working on. I am not worried about the size of these
> files. Beside having tons of dummy codec drivers in the source tree,
> I am also looking at the value and effort to upstream these trivial
> drivers.

Like I say, it's utterly trivial.


* Re: [alsa-devel] Question about your DSP topic branch
  2011-01-27 23:41     ` Patrick Lai
@ 2011-03-15  7:08       ` Patrick Lai
  2011-03-15 11:25         ` Mark Brown
  0 siblings, 1 reply; 22+ messages in thread
From: Patrick Lai @ 2011-03-15  7:08 UTC (permalink / raw)
  To: Liam Girdwood; +Cc: linux-arm-msm, alsa-devel, Mark Brown, alsa-devel

Hi Liam,

In the DSP routing scheme, what is the expectation of the back-end
CPU/codec drivers in terms of determining which hardware parameters,
especially the channel mode, to use? The DSP in question is capable of
resampling and downmixing, so the hardware configuration of the
front-end DAI does not need to match that of the back-end.

Here are the scenarios where I need clarification:

1. If two front-ends are routed to the same back-end, which
front-end's hardware parameters should the back-end DAI be based on?
For example, one front-end is mono and the other is stereo.

2. Depending on the device mode/use case, I would like to configure
the BE to a different channel mode irrespective of the front-end
configuration (e.g. configuring the back-end for handset mode). Where
is the hook to do so under the ASoC DSP framework?

Thanks
Patrick

On 1/27/2011 3:41 PM, Patrick Lai wrote:
> On 1/25/2011 8:46 AM, Liam Girdwood wrote:
>> Hi Patrick,
>>
>> CCing in Mark and alsa-devel@alsa-project.org (preferred list)
>>
>> On Mon, 2011-01-24 at 23:01 -0800, Patrick Lai wrote:
>>> Hi Liam,
>>>
>>> I have two more questions about your DSP topic branch
>>>
>>> 7. I see in sdp4430.c, SDP4430 MODEM front-end dailink no_host_mode is
>>> set to SND_SOC_DAI_LINK_NO_HOST. What is the purpose of no_host_mode?
>>
>> No host mode means no audio is transferred to the host CPU in this case.
>> i.e. the MODEM routes audio directly between the DSP and mic/speaker.
>>
>> This flags also tells the ASoC core that no DMA will be required, hence
>> the code in the DMA branch will not start any DMA. This part had to be
>> cleaned up for upstream.
>>
>>> Is it for use case that two physical devices can exchange audio data
>>> without host-processor intervention? If so, if user-space application
>>> tries to write to PCM buffer, will framework reject the buffer?
>>>
>>
>> Yes, that's correct. The PCM core will also not complain here when it
>> receive no data either.
>
> I experimented NO_HOST option and found platform driver pcm copy
> function is called. Is it part of the clean up you were talking about?
>
>>> 8. I see there is dmic codec(dmic.c) under sound/soc/codec which is
>>> pretty much just a dummy codec driver. I supposed the configuration of
>>> DMIC is done in other driver. Would it be better if we could have
>>> something like fixed voltage regulator? So, there is no need to
>>> duplicate the effort.
>>>
>>
>> Yeah, this is a generic DMIC driver. It's designed to have a very wide
>> coverage and should cover most DMICs out there. So it should also be
>> able to fit into your architecture too.
>>
>>> Look forward to seeing your reply soon
>>>
>>> Thanks
>>>
>>> On 1/6/2011 3:39 PM, Patrick Lai wrote:
>>>> Hi Liam,
>>>>
>>>> I sync to your kernel DSP topic branch two days back in attempt to
>>>> understand the up-coming ASOC DSP design. I have few questions to get
>>>> clarification from you.
>>>>
>>>> 1. In the sdp4430.c, there are both FE dai link and BE dai link have
>>>> omap-aess-audio as platform driver. How is omap-aess-audio platform
>>>> driver used in both front-end and backend?
>>>>
>>
>> The MODEM and Low Power (LP) Front Ends (FE) use the AESS platform
>> driver since they do not require DMA, whilts the other FE's use the
>> normal DMA platform driver since they do require DMA.
>>
> PCM functions inside omap-abe-dsp.c seem to do no-op if dai ID is not
> MODEM or LP. I guess reusing omap-aess-audio platform driver for the
> back-end DAI link serves purpose of architecture requirement that each
> DAI-LINK must have platform driver. Do I get the right impression?
>
>
>>
>>>> 2. Front-end dai with stream name "Multimedia" has platform driver as
>>>> omap-pcm-audio which is the DMA based. This front-end dai is mapped to
>>>> backend (i.e PDM-DL1) with platform driver as omap-aess-audio.
>>>> Omap-aess-audio looks to me is DSP platform driver. If a stream is DMA
>>>> based, it seems strange to have DSP-based backend.
>>>>
>>
>> The DMA is used to send the PCM data from the ALSA PCM device to the DSP
>> FIFO.
>>
>>>> 3. To best of my knowledge, I see in omap-abe.c which is the front end
>>>> implementation. Playback_trigger() seems to go ahead enable all
>>>> back-ends linked to the given front-end. However, front end may not be
>>>> routed to all back-ends. Where in the code to make sure BE is only
>>>> activated when a stream is being routed to that particular back-end?
>>>>
>>
>> This is all done in soc-dsp.c now. We use the DAPM graph to work out all
>> valid routes from FE to BE and vice versa.
>>
>
> I experimented dynamic routing. If issuing mixer command to route AIF_IN
> to AIF_OUT through mixer widget before starting playback, it works.
> Otherwise, I get I/O error from aplay. I think it's acceptable to
> require application to setup path before start of playback. For  device
> switch case, is it mandatory to route stream to the mixer of BE
> DAI-LINK2 before derouting stream out of mixer of BE DAI-LINK1?
>
>>>> 4. omap-abe.c manages activation of BE DAIS and omap-abe-dsp.c manages
>>>> routing of front-end to back-end DAPM widgets and routing map. Am I
>>>> correct?
>>
>> Yes, although the routing management is now all in soc-dsp.c
>>
>>> This leads to previous question. How are two drivers working
>>>> together to make sure BEs are only activated if Front-end has been
>>>> routed to them?
>>>>
>>
>> soc-dsp.c now marshals all the PCM ops and makes sure that only valid
>> paths have their DAIs activated.
>>
>>
>>>> 5. Is there mechanism for front-end to switch between DMA and DSP
>>>> platform driver? It looks to me that mapping of frond-end and platform
>>>> driver is predetermined based on use case. For example, HDMI goes
>>>> through DMA as legacy dailink.
>>
>> There is no way to dynamically switch platform drivers atm, but this can
>> be solved by having a mutually exclusive FE for each use case.
>>
>>>>
>>>> 6. struct abe_data has array member called dapm. It looks to me that
>>>> this array simply tracks dapm components status but I don't see it's
>>>> really being used in meaningful way in omap-abe-adsp.c.
>>>>
>>
>> It used by the OMAP4 ABE to work out the OPP power level and to work out
>> the routing between FE and BE.
>>
>> Liam
>
>


-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: [alsa-devel] Question about your DSP topic branch
  2011-03-15  7:08       ` [alsa-devel] " Patrick Lai
@ 2011-03-15 11:25         ` Mark Brown
  2011-03-17  7:21           ` Patrick Lai
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Brown @ 2011-03-15 11:25 UTC (permalink / raw)
  To: Patrick Lai
  Cc: Liam Girdwood, linux-arm-msm, alsa-devel, Mark Brown, alsa-devel

On Tue, Mar 15, 2011 at 12:08:39AM -0700, Patrick Lai wrote:

> 1. If there are two front-ends routed to same backend, which front-end 
> hardware parameters should backend DAI be based on? For example, one 
> front-end is MONO and another front is stereo.

> 2. Depending on device mode/use case, I would like to configure BE to 
> different channel mode irrespective of front-end configuration(i.e 
> configuring back-end to handset mode). Where is the hook to do so under 
> ASoC DSP framework?

If you can configure the two completely separately then you can just
have a DAPM path between the two devices and don't need to link the
DAI configurations at all. WM8994 is an example of this: there is a
bunch of DSP and mixing in the digital section between the DAIs, but
because the DSP includes sample-rate conversion the DAIs can be
configured separately and we don't have to worry about propagating the
configuration through.

Thinking off the top of my head, if you do need to link the
configurations sometimes, the first thing that springs to mind is to
set constraints so broad that they're no-ops. We're going to want to
support propagating constraints through as well as specific settings
anyway, since some hardware has restrictions like "route DAI 1 to DAI
2 with DAI 2 at any sample rate that's an integer division of the DAI
1 rate", though that sort of thing may well be a second pass.

One thing I should mention, which I've been working on and which may
(depending on the design of your hardware) be a better match for you,
is providing a mechanism for setting up DAI links and starting streams
on them in kernel space based on DAPM routing, with DAPM propagating
over these links automatically.  This is mostly intended for
representing things like basebands, where you'll have a DAI link
that's going to be running in one of a small number of configurations
(often only one) connected to something else along the lines of:

   CODEC  <  8kHz mono  >  Baseband

The idea is that, to the user, there are just a couple of pin switches
(or whatever else can be represented) on the baseband to say that it
is passing audio, and then DAPM will be extended to figure out that it
should start up the audio interfaces and configure their params
without needing userspace to do it, just as it can if the link between
the two were analogue.

This will complement the work Liam's doing - Liam's work ties
together the configuration within a device, while this will help with
inter-device links.


* Re: [alsa-devel] Question about your DSP topic branch
  2011-03-15 11:25         ` Mark Brown
@ 2011-03-17  7:21           ` Patrick Lai
  2011-03-17 23:58             ` Mark Brown
  0 siblings, 1 reply; 22+ messages in thread
From: Patrick Lai @ 2011-03-17  7:21 UTC (permalink / raw)
  To: Mark Brown
  Cc: Liam Girdwood, linux-arm-msm, alsa-devel, Mark Brown, alsa-devel

On 3/15/2011 4:25 AM, Mark Brown wrote:
> On Tue, Mar 15, 2011 at 12:08:39AM -0700, Patrick Lai wrote:
>
>> 1. If there are two front-ends routed to same backend, which front-end
>> hardware parameters should backend DAI be based on? For example, one
>> front-end is MONO and another front is stereo.
>
>> 2. Depending on device mode/use case, I would like to configure BE to
>> different channel mode irrespective of front-end configuration(i.e
>> configuring back-end to handset mode). Where is the hook to do so under
>> ASoC DSP framework?
>
> If you can configure the two completely separately then you can just
> have a DAPM path between the two devices and don't need to link the DAI
> configurations at all - WM8994 is an example of doing this, there is a
> bunch of DSP and mixing in the digital section between the DAIs but
> because the DSP includes sample rate conversion the DAIs can be
> configured separately and we don't have to worry about propagating DSP
> through.

Essentially, my front-end DAI link represents a PCM stream while the
back-end DAI link represents a physical hardware path. Irrespective of
the PCM configuration of the front-end DAI link, I want the CPU DAI
and codec DAI of the back-end DAI link configured to a fixed PCM
configuration; the DSP can handle the resampling/downmixing. How to
achieve this use case with the soc-dsp implementation is not obvious
to me. In the existing soc-dsp implementation, the back-end does not
get activated unless a front-end is routed to it, and then the PCM
parameters of the front-end get propagated to the CPU driver and codec
driver of the back-end DAI link. I don't see how I would be able to
pass a PCM configuration specifically for the back-end.
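
What I am picturing is a per-BE fixup hook along the lines of the
sketch below - the callback name and where it would be registered are
entirely hypothetical, I don't see anything like it in the branch:

/* hypothetical: lets the machine driver rewrite the params a BE sees,
 * independent of whatever the FE stream was opened with */
static int my_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
                                 struct snd_pcm_hw_params *params)
{
        struct snd_interval *rate = hw_param_interval(params,
                                        SNDRV_PCM_HW_PARAM_RATE);
        struct snd_interval *channels = hw_param_interval(params,
                                        SNDRV_PCM_HW_PARAM_CHANNELS);

        /* force the physical path to 48kHz stereo; the DSP resamples
         * and downmixes whatever the FE delivers */
        rate->min = rate->max = 48000;
        channels->min = channels->max = 2;
        return 0;
}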

>
> One thing I should mention which I've been working on and which may
> (depending on the design of your hardware) be a better match for you is
> providing a mechanism for setting up DAI links and starting streams on
> them in kernel space based on DAPM routing, with DAPM propagating over
> these links automatically.  This is mostly intended for representing
> things like basebands where you'll have a DAI link that's going to be
> running in one of a small number of configurations (often only one)
> connected to something else along the lines of:
>
>     CODEC<   8kHz mono>   Baseband
>
> so the idea is that to the user you can have a couple of pin switches
> (or whatever else can be represented) in the baseband to say that the
> baseband is passing audio and then DAPM will be extended to figure out
> that it should start up the audio intefaces and configure their params
> without needing userspace to do it just like it can if the link between
> the two were analogue.
>

Yes, I do have a use case in mind that requires the functionality you
described: activating a back-end DAI link without a front-end DAI
being activated. DAPM seems to be the most appropriate approach.

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic branch
  2011-01-31 13:30           ` Mark Brown
@ 2011-03-17  7:29             ` Patrick Lai
  2011-03-17 11:57               ` Mark Brown
  0 siblings, 1 reply; 22+ messages in thread
From: Patrick Lai @ 2011-03-17  7:29 UTC (permalink / raw)
  To: Mark Brown; +Cc: alsa-devel

On 1/31/2011 5:30 AM, Mark Brown wrote:
> On Thu, Jan 27, 2011 at 01:51:58PM -0800, Patrick Lai wrote:
>> On 1/26/2011 3:20 AM, Mark Brown wrote:
>
>>> Yup, but it's not really a big cost - they're all so trivial.
>
>> Currently, I already have few dummy codec drivers in mind for the
>> project I am working on. I am not worried about the size of these
>> files. Beside having tons of dummy codec drivers in the source tree,
>> I am also looking at the value and effort to upstream these trivial
>> drivers.
>
> Like I say, it's utterly trivial.

I see a flag "no_codec" defined in struct snd_soc_dai_link. Is that
flag going to stay, or is it just a temporary workaround?

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic branch
  2011-03-17  7:29             ` Patrick Lai
@ 2011-03-17 11:57               ` Mark Brown
  0 siblings, 0 replies; 22+ messages in thread
From: Mark Brown @ 2011-03-17 11:57 UTC (permalink / raw)
  To: Patrick Lai; +Cc: alsa-devel

On Thu, Mar 17, 2011 at 12:29:18AM -0700, Patrick Lai wrote:

> I see a flag "no_codec" defined in struct snd_soc_dai_link. Is that
> flag going to stay or just a temporary workaround?

That's actually not referenced at all at the minute, so it's not even
a workaround and should be removed :). I'd not expect to use that for
this, as a lot of the links that would use it are something <-> CODEC
ones; I suspect it's an early-merged bit of the DSP config propagation
stuff, but I'd need to check.


* Re: [alsa-devel] Question about your DSP topic branch
  2011-03-17  7:21           ` Patrick Lai
@ 2011-03-17 23:58             ` Mark Brown
  0 siblings, 0 replies; 22+ messages in thread
From: Mark Brown @ 2011-03-17 23:58 UTC (permalink / raw)
  To: Patrick Lai; +Cc: Liam Girdwood, linux-arm-msm, alsa-devel, alsa-devel

On Thu, Mar 17, 2011 at 12:21:57AM -0700, Patrick Lai wrote:

> Yes, I do have use case in mind which requires functionality as you  
> described to activate back-end DAI link without front-end DAI being
> activated. DAPM seems to be most appropriate approach.

OK, cool - I'll try to remember to CC you on the patches when I get them
done.  I need to get it done for .40 so should be soonish.


* Re: Question about your DSP topic
       [not found] <4D2652C8.7030701@codeaurora.org>
  2011-01-25  7:01 ` Question about your DSP topic branch Patrick Lai
@ 2011-03-30 20:39 ` Patrick Lai
  2011-03-31  6:40   ` Question about your DSP topic branch Patrick Lai
  2011-03-31 18:26   ` Question about your DSP topic Liam Girdwood
  1 sibling, 2 replies; 22+ messages in thread
From: Patrick Lai @ 2011-03-30 20:39 UTC (permalink / raw)
  To: lrg, broonie; +Cc: alsa-devel

Hi Liam/Mark,

I ran some tests on top of the soc-dsp framework pulled from Liam's
topic/dsp-upstream branch
(http://git.kernel.org/?p=linux/kernel/git/lrg/asoc-2.6.git;a=summary).
I found a scenario where the soc-dsp framework erroneously starts PCM
playback/capture. Here is the scenario:

In the platform driver, I have this route table defined:

FE1 Playback -> Mixer 1 -> BE1
BE2 -> FE1 Capture
FE = Front-end, BE = Back-end

While PCM playback is going from FE1 playback to BE1, I switch off
the FE1 playback route to Mixer 1. This causes soc_dsp_runtime_update()
to be called. The framework correctly closes BE1 as it is no longer
needed. Eventually, the framework finds that BE2 is connected to FE1
capture. Without checking whether FE1 capture has been activated by a
user-space application, the framework simply goes ahead and activates
BE2. Since FE1 capture was never activated, its runtime structure is
not allocated, which results in a NULL pointer dereference exception.

For now, in the be_connect() function in soc-dsp.c, I have added a
check to make sure fe->dsp[stream].runtime is not NULL (shown below).
I don't know if it's the appropriate fix. Can you please take a look?
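
The check is simply this, at the top of be_connect() - this is
against my local copy of the branch, so the surrounding context may
differ in your current tree:

        /* the FE stream was never opened, so there is nothing to
         * connect this BE to and fe->dsp[stream].runtime is NULL */
        if (!fe->dsp[stream].runtime)
                return 0;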

Thanks
Patrick

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic branch
  2011-03-30 20:39 ` Question about your DSP topic Patrick Lai
@ 2011-03-31  6:40   ` Patrick Lai
  2011-03-31 21:35     ` Liam Girdwood
  2011-03-31 18:26   ` Question about your DSP topic Liam Girdwood
  1 sibling, 1 reply; 22+ messages in thread
From: Patrick Lai @ 2011-03-31  6:40 UTC (permalink / raw)
  To: lrg, broonie; +Cc: alsa-devel

Hi Liam/Mark,

In the back-end platform driver, a routing table and an array of
widgets are defined to describe the interconnection of front-ends and
back-ends. Instead of defining the FE and BE as AIF pins using the
SND_SOC_DAPM_AIF_IN/OUT macros, I would like to use the
SND_SOC_DAPM_AIF_IN_E/OUT_E macros. Essentially, I want to register an
event handler to jump-start routing in the DSP for a newly activated
FE in the multiple-streams-to-single-hardware-path scenario. At that
point the BE is already activated, so I need another mechanism to
jump-start the routing. I could put the logic in my front-end CPU DAI
driver; however, I feel that path setup should be handled by the
DAPM/DSP framework. Hence, I am looking for a good place to trigger
the event handler from soc-dapm.c/soc-dsp.c. Any suggestions?
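
Concretely, this is the kind of thing I would like to be able to
write in the platform driver (a sketch - the widget names and register
fields are illustrative):

static int fe_aif_event(struct snd_soc_dapm_widget *w,
                        struct snd_kcontrol *kcontrol, int event)
{
        if (SND_SOC_DAPM_EVENT_ON(event)) {
                /* send the DSP routing command for this FE stream */
        }
        return 0;
}

static const struct snd_soc_dapm_widget fe_widgets[] = {
        SND_SOC_DAPM_AIF_IN_E("FE1 Playback", "FE1 Playback", 0,
                              SND_SOC_NOPM, 0, 0, fe_aif_event,
                              SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
};

The open question is where soc-dapm.c/soc-dsp.c should invoke
fe_aif_event() when an FE is started against an already-active BE.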

Thanks
Patrick

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic
  2011-03-30 20:39 ` Question about your DSP topic Patrick Lai
  2011-03-31  6:40   ` Question about your DSP topic branch Patrick Lai
@ 2011-03-31 18:26   ` Liam Girdwood
  2011-03-31 20:59     ` Patrick Lai
  1 sibling, 1 reply; 22+ messages in thread
From: Liam Girdwood @ 2011-03-31 18:26 UTC (permalink / raw)
  To: Patrick Lai; +Cc: alsa-devel, broonie

On Wed, 2011-03-30 at 13:39 -0700, Patrick Lai wrote:
> Hi Liam/Mark,
> 
> I ran some tests on top of soc-dsp framework pulled from Liam's
> topic/dsp-upstream(http://git.kernel.org/?p=linux/kernel/git/lrg/asoc-
> 2.6.git;a=summary). I found there is a scenario that soc-dsp framework
> erroneously start pcm playabck/capture. Here is the scenario:
> 
> In the platform driver, I have this route table defined
> 
> FE1 Playback -> Mixer 1 -> BE1
> BE2 -> FE1 Capture
> FE = Front-end, BE = Back-end
> 
> While PCM playback is going from FE1 playback to BE1, I switch off FE1
> playback to Mixer 1. This caused soc_dsp_runtime_update called.
> Framework correctly close BE1 as it is no longer needed. Eventually,
> framework finds BE2 is connected to FE1 capture. Framework, without
> checking if FE1 capture is activated by user-space application, simply
> goes ahead activate BE2. Since FE1 capture is never activated,
> runtime structure is not allocated. This inherently results NULL
> pointer dereference exception.
> 
> For now, in soc-dsp.c be_connect function(), I have a check to make sure
> fe->dsp[stream].runtime is not NULL. I don't know if it's appropriate
> fix. Can you please take a look?

I've not seen that one yet, and we have a similar route on OMAP4. Is
it 100% repeatable or just some of the time? It would also be useful
to post your oops message, and I'll take a closer look.

I'm due to push a squashed update to my dsp branch later tonight in
prep for upstream. Could you also check soc-dsp.c against that copy?
It may be that something was lost in the backport.

Liam


* Re: Question about your DSP topic
  2011-03-31 18:26   ` Question about your DSP topic Liam Girdwood
@ 2011-03-31 20:59     ` Patrick Lai
  2011-03-31 21:37       ` Liam Girdwood
  0 siblings, 1 reply; 22+ messages in thread
From: Patrick Lai @ 2011-03-31 20:59 UTC (permalink / raw)
  To: Liam Girdwood; +Cc: alsa-devel, broonie


> I've not seen that one yet and we have a similar route on OMAP4. Is it
> repeatable 100% or just some of the time. It would also be useful to
> post your oops message and I'll take a closer look.
>
> I'm due to push an squashed update to my dsp branch later tonight in
> prep for upstream. Could you also check soc-dsp.c against this copy,  it
> may be that something that was lost in the backport ?
>
It's 100% reproducible. I synced to your branch last week, so
soc-dsp.c should be in sync. Since it's not blocking me, I will wait
for your update and see if the new changes in soc-dsp.c make any
difference. Otherwise, I will send you the log offline.

> Liam


-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


* Re: Question about your DSP topic branch
  2011-03-31  6:40   ` Question about your DSP topic branch Patrick Lai
@ 2011-03-31 21:35     ` Liam Girdwood
  2011-03-31 21:42       ` Mark Brown
  0 siblings, 1 reply; 22+ messages in thread
From: Liam Girdwood @ 2011-03-31 21:35 UTC (permalink / raw)
  To: Patrick Lai; +Cc: alsa-devel, broonie

On Wed, 2011-03-30 at 23:40 -0700, Patrick Lai wrote:
> Hi Liam/Mark,
> 
> In the back-end platform driver, routing table and array of widgets are 
> defined to describe interconnection of front-end and back-end. Instead 
> of defining FE and BE as AIF pin using SND_SOC_DAPM_AIF_IN/OUT macros,
> I would like to use SND_SOC_DAPM_AIF_IN_E/OUT_E macros. Essentially, I 
> want to register event handler to jump start routing in the DSP for a 
> newly activated FE under multiple streams to single HW path scenario. At 
> this time, BE is already activated so I need other mechanism to jump 
> start routing. I could put the logic in my front-end CPU DAI driver. 
> However, I feel that path setup should be handled by DAPM/DSP framework. 
> Hence, I am looking for a good place to trigger calling of event handler 
> in soc-dapm.c/soc-dsp.c. Any suggestion?


So IIUC, your BE is active (with another FE) and you then enable
another FE that is connected to the already active BE?

It should be possible to add support for the _E() macros in soc-dsp.c,
but I'm not sure why it's required atm, since we don't need it for the
OMAP4 ABE.

Liam   

> 
> Thanks
> Patrick
> 


* Re: Question about your DSP topic
  2011-03-31 20:59     ` Patrick Lai
@ 2011-03-31 21:37       ` Liam Girdwood
  0 siblings, 0 replies; 22+ messages in thread
From: Liam Girdwood @ 2011-03-31 21:37 UTC (permalink / raw)
  To: Patrick Lai; +Cc: alsa-devel, broonie

On Thu, 2011-03-31 at 13:59 -0700, Patrick Lai wrote:
> > I've not seen that one yet and we have a similar route on OMAP4. Is it
> > repeatable 100% or just some of the time. It would also be useful to
> > post your oops message and I'll take a closer look.
> >
> > I'm due to push an squashed update to my dsp branch later tonight in
> > prep for upstream. Could you also check soc-dsp.c against this copy,  it
> > may be that something that was lost in the backport ?
> >
> It's 100% reproducible. I sync to your branch last week so soc-dsp.c
> should be in sync. Since it's not blocking me, I will wait for
> your update and see if new change in soc-dsp.c makes any difference
> Otherwise, I will send you log off line.
> 

Just pushed. Thanks. Could you also include the sequence of commands
(e.g. aplay, amixer, etc.) used to cause the error?

Thanks

Liam

> > Liam


* Re: Question about your DSP topic branch
  2011-03-31 21:35     ` Liam Girdwood
@ 2011-03-31 21:42       ` Mark Brown
  2011-03-31 22:07         ` Patrick Lai
  0 siblings, 1 reply; 22+ messages in thread
From: Mark Brown @ 2011-03-31 21:42 UTC (permalink / raw)
  To: Liam Girdwood; +Cc: Patrick Lai, alsa-devel

On Thu, Mar 31, 2011 at 10:35:22PM +0100, Liam Girdwood wrote:

> It should be possible to add support for the _E() macros in soc-dsp.c,
> but I'm not sure why it's required atm since we dont need this for OMAP4
> ABE.

Not sure why Patrick might need it, but some of the DSPs I've seen
require a multiple-write sequence to kick them into doing things - for
example, set the state up and then do another write to tell the DSP to
apply the new configuration.


* Re: Question about your DSP topic branch
  2011-03-31 21:42       ` Mark Brown
@ 2011-03-31 22:07         ` Patrick Lai
  0 siblings, 0 replies; 22+ messages in thread
From: Patrick Lai @ 2011-03-31 22:07 UTC (permalink / raw)
  To: Mark Brown; +Cc: alsa-devel, Liam Girdwood

On 3/31/2011 2:42 PM, Mark Brown wrote:
> On Thu, Mar 31, 2011 at 10:35:22PM +0100, Liam Girdwood wrote:
>
>> It should be possible to add support for the _E() macros in soc-dsp.c,
>> but I'm not sure why it's required atm since we dont need this for OMAP4
>> ABE.
>
> Not sure why Patrick might need it but some of the DSPs I've seen
> require a multiple write sequence to kick them into doing things - for
> example, set the state up then do another write to tell it to apply the
> new configuration.

The reason is that the PCM stream, which is presented as a front-end,
is instantiated at runtime in the DSP. The routing command to the DSP
cannot be issued until the PCM stream is available. So, in the
multiple-streams-to-the-same-back-end case, activation of the back-end
can trigger routing of the first PCM stream. However, for the second
stream the back-end is already active, so routing of the second PCM
stream has to be triggered by activation of the front-end.

As I stated before, I can have the front-end CPU DAI driver do the
triggering as a workaround. However, it seems like the DAPM framework
would be the best place to provide the hook; it's better to keep the
routing logic localized in the platform driver, in my opinion.

Thanks
Patrick

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.

