On Thu, Jul 28, 2016 at 10:12:05PM +0200, Lars-Peter Clausen wrote:

> output each would get their own widget. Phase 2 in DAPM consists of the
> actual power-up/power-down sequencing. This is done in an order that
> avoids glitching and ensures all dependencies are powered up before, and
> powered down after, the dependent.

One fun thing here is that the power up/down sequences are somewhat
application specific. However, I think that's solvable in a general
environment (there was originally some provision for this in DAPM, but it
died as we never found a need for it and the systems are only getting
simpler in this regard), and for the bulk of things we're talking about
there are probably fairly obvious orderings.

> One issue with this approach is that you need a power-sequence scheduler
> which sits above all devices. E.g. there is no master device that tells
> another device it now needs to turn its resources on, but all devices
> are equal in the graph and might be both resource provider and resource
> consumer. If there is only one scheduler the graph would contain all
> devices and be quite big, and you'd run into lock contention issues if
> operations are performed on different subgraphs at the same time. If you
> have multiple schedulers, how do you decide which device is managed by
> which scheduler, as dependencies might be added and removed at runtime?

Lock contention and algorithm runtime are both indeed issues, though DAPM
already has a bunch of machinery that does a very good job of mitigating
graph size, which can most likely be lifted out of it, and there's
doubtless inspiration to be drawn from runtime PM for the locking. The
nature of ASoC is such that we've never really had to worry about the
locking; a single enormous lock really does make sense for that specific
problem domain. It's certainly a very effective technique for mitigating
dependency hell.
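The sequencing Lars describes, every dependency powered up before its
dependent and powered down after it, is essentially a topological sort of
the dependency graph, with power-down being the reverse of the power-up
order. A minimal sketch of that idea (Kahn's algorithm in plain Python
with hypothetical widget names; this is not DAPM's actual C
implementation, which also handles glitch-avoidance ordering within each
level):

```python
from collections import defaultdict, deque

def power_sequence(deps):
    """Return a power-up order where every supply is enabled before
    anything that depends on it.  `deps` maps a widget to the list of
    widgets it depends on.  Power-down is simply the reverse order."""
    indegree = defaultdict(int)
    dependents = defaultdict(list)
    nodes = set(deps)
    for widget, requirements in deps.items():
        for req in requirements:
            nodes.add(req)
            dependents[req].append(widget)
            indegree[widget] += 1
    # Start from widgets with no dependencies (e.g. supplies).
    ready = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for widget in dependents[node]:
            indegree[widget] -= 1
            if indegree[widget] == 0:
                ready.append(widget)
    if len(order) != len(nodes):
        raise ValueError("dependency cycle in power graph")
    return order

# Toy graph: output <- mixer <- DAC <- supply (names are made up).
graph = {"DAC": ["supply"], "mixer": ["DAC"], "output": ["mixer"]}
up = power_sequence(graph)    # ['supply', 'DAC', 'mixer', 'output']
down = list(reversed(up))     # ['output', 'mixer', 'DAC', 'supply']
```

The cycle check matters for the multi-scheduler question: once devices
can be both providers and consumers, nothing structurally prevents a
dependency loop, so whoever owns the graph has to detect it.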