From: Takashi Iwai <tiwai@suse.de>
To: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Cc: alsa-devel@alsa-project.org,
	Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>,
	Sameer Pujar <spujar@nvidia.com>,
	vkoul@kernel.org, broonie@kernel.org,
	Gyeongtaek Lee <gt82.lee@samsung.com>,
	Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
Subject: Re: [RFC PATCH v2 0/5] ASoC: soc-pcm: fix trigger race conditions with shared BE
Date: Tue, 12 Oct 2021 17:07:25 +0200	[thread overview]
Message-ID: <s5hmtne1dua.wl-tiwai@suse.de> (raw)
In-Reply-To: <9d336461-b2fe-8cd4-0096-356620622f8d@linux.intel.com>

On Tue, 12 Oct 2021 15:45:41 +0200,
Pierre-Louis Bossart wrote:
> 
> 
> >>> And indeed there's a deadlock!
> >>>
> >>> snd_pcm_period_elapsed() takes the FE pcm stream lock, and will call
> >>> snd_pcm_trigger.
> >>
> >> Indeed, this would deadlock.
> >>
> >>> So if we also take the pcm stream lock in the BE
> >>> trigger, there's a conceptual deadlock: we painted ourselves in a corner
> >>> by using the same lock twice.
> >>>
> >>> Takashi, are you really sure we should protect these for_each_dpcm_be()
> >>> loops with the same pcm lock?
> >>
> >> The call within the FE lock is done only in dpcm_dai_trigger_fe_be(),
> >> and this should call dpcm_be_dai_trigger() as is.  In other places,
> >> the calls are without FE lock, hence they can take the lock,
> >> e.g. create a variant dpcm_dai_trigger_fe_be_lock().
> > 
> > Or rather it was the code path of snd_soc_dpcm_check_state()?
> > The function could be called from dpcm_be_dai_trigger().
> > Currently dpcm_lock seems to be added at places casually covering some
> > of the for_each_dpcm_be() or whatever...  The whole lock scheme has to
> > be revisited.
> > 
> >>> it seems like asking for trouble to
> >>> revisit the ALSA core just to walk through a list of BEs? Would you
> >>> object to changing dpcm_lock as I suggested, but not interfering with
> >>> stream handling?
> >>
> >> That would work, too, it's just a pity to degrade the fine-grained
> >> locks that have already been taken into global locks...
> > 
> > For the performance problem, at least, you can make it rwlock and
> > rwsem types (depending on nonatomic) so that the concurrent accesses
> > would work.  The only exclusive lock is the case for adding and
> > removing the entries, which should be done with write lock / sem down,
> > while other link traversals can be executed in the read lock / sem.
> 
> Thanks Takashi for your feedback.
> 
> Let's first get the locking right. We can optimize performance later.
> 
> I will continue with the idea of using existing fine-grained locks a bit
> more; I am with you that this dpcm_lock was not added in a consistent
> way, and reusing the concept is really building on sand.
> 
> We can remove the lock in snd_soc_dpcm_check_state; I already made the
> change in other versions. But what I'll need to check further is whether
> dpcm_be_dai_trigger() is indeed called with the FE lock already taken.
> Adding a lockdep_assert_held() or something would help, I guess.
> 
> The part I am still not clear on is that snd_pcm_period_elapsed uses
> the irqsave/irqrestore version, but in the initial code you suggested
> the vanilla irq version is fine. I am not sure I get the nuance here,
> and btw in the case of SOF snd_pcm_period_elapsed is called from a
> workqueue, not an interrupt handler, as a work-around to avoid an IPC
> deadlock, so we probably don't need the irqsave/irqrestore version anyway.

In a nutshell:
* in code paths that already hold the stream lock
  (i.e. the trigger, pointer and ack PCM callbacks), you need no extra
  lock for the stream itself.  But if you want additional locks
  (e.g. for BE), use either *_spin_lock() or *_spin_lock_irqsave(),
  but not *_spin_lock_irq().

* In other code paths, use *_spin_lock_irq().

When in doubt, you can always use *_irqsave(), of course.


Takashi


Thread overview: 33+ messages
2021-10-04 22:54 [RFC PATCH v2 0/5] ASoC: soc-pcm: fix trigger race conditions with shared BE Pierre-Louis Bossart
2021-10-04 22:54 ` [RFC PATCH v2 1/5] ASoC: soc-pcm: remove snd_soc_dpcm_fe_can_update() Pierre-Louis Bossart
2021-10-04 22:54   ` Pierre-Louis Bossart
2021-10-04 22:54 ` [RFC PATCH v2 2/5] ASoC: soc-pcm: don't export local functions, use static Pierre-Louis Bossart
2021-10-04 22:54   ` Pierre-Louis Bossart
2021-10-04 22:54 ` [RFC PATCH v2 3/5] ASoC: soc-pcm: replace dpcm_lock with dpcm_mutex Pierre-Louis Bossart
2021-10-04 22:54   ` Pierre-Louis Bossart
2021-10-04 22:54 ` [RFC PATCH v2 4/5] ASoC: soc-pcm: protect for_each_dpcm_be() loops " Pierre-Louis Bossart
2021-10-04 22:54   ` Pierre-Louis Bossart
2021-10-04 22:54 ` [RFC PATCH v2 5/5] ASoC: soc-pcm: test refcount before triggering Pierre-Louis Bossart
2021-10-04 22:54   ` Pierre-Louis Bossart
2021-10-05  6:36 ` [RFC PATCH v2 0/5] ASoC: soc-pcm: fix trigger race conditions with shared BE Sameer Pujar
2021-10-05 13:17   ` Pierre-Louis Bossart
2021-10-06 14:22     ` Sameer Pujar
2021-10-06 19:47       ` Pierre-Louis Bossart
2021-10-07 11:06         ` Takashi Iwai
2021-10-07 13:31           ` Pierre-Louis Bossart
2021-10-07 14:59             ` Takashi Iwai
2021-10-07 15:24               ` Pierre-Louis Bossart
2021-10-07 15:44                 ` Takashi Iwai
2021-10-07 18:13                   ` Pierre-Louis Bossart
2021-10-07 21:11                     ` Takashi Iwai
2021-10-07 21:27                       ` Pierre-Louis Bossart
2021-10-08  6:13                         ` Takashi Iwai
2021-10-08 14:41                           ` Pierre-Louis Bossart
2021-10-08 14:51                             ` Takashi Iwai
2021-10-08 15:41                               ` Pierre-Louis Bossart
2021-10-08 19:09                                 ` Pierre-Louis Bossart
2021-10-11 20:06                                   ` Pierre-Louis Bossart
2021-10-12  6:34                                     ` Takashi Iwai
2021-10-12 10:42                                       ` Takashi Iwai
2021-10-12 13:45                                         ` Pierre-Louis Bossart
2021-10-12 15:07                                           ` Takashi Iwai [this message]
