* Adding PM QoS parameters
@ 2009-04-02 20:25 Premi, Sanjeev
  2009-04-06 21:12 ` mark gross
  0 siblings, 1 reply; 14+ messages in thread
From: Premi, Sanjeev @ 2009-04-02 20:25 UTC (permalink / raw)
  To: linux-pm

I have just started looking at the PM QoS implementation; I came across this
text in "pm_qos_interface.txt"

[quote]
The infrastructure exposes multiple misc device nodes one per implemented
parameter.  The set of parameters implement is defined by pm_qos_power_init()
and pm_qos_params.h.  This is done because having the available parameters
being runtime configurable or changeable from a driver was seen as too easy to
abuse.
[/quote]
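
For context, user space talks to each of these parameters by holding the
corresponding misc device node open and writing an s32 target value to it;
the request is dropped when the fd is closed. A minimal example against the
existing cpu_dma_latency node would look roughly like this (error handling
omitted):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    int main(void)
    {
        int32_t latency_us = 10;  /* requested cpu_dma_latency, in usecs */
        int fd = open("/dev/cpu_dma_latency", O_RDWR);

        if (fd < 0)
            return 1;
        write(fd, &latency_us, sizeof(latency_us));
        pause();    /* the request stays active while the fd is open */
        close(fd);  /* closing the fd releases the request */
        return 0;
    }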

Though I understand the intent, I feel it may also be limiting in cases
where there is a genuine need specific to an arch/platform.

Can we allow the number of these params to grow up to a reasonable limit (say 8)?
If an arch/platform does not specify more params, everything remains the same,
but we get an opportunity to add arch/platform-specific requirements.

I am not sure whether this has already been discussed, but I would like to
hear more thoughts.

Best regards,
Sanjeev


* Re: Adding PM QoS parameters
  2009-04-02 20:25 Adding PM QoS parameters Premi, Sanjeev
@ 2009-04-06 21:12 ` mark gross
  2009-04-07  9:00   ` Premi, Sanjeev
  0 siblings, 1 reply; 14+ messages in thread
From: mark gross @ 2009-04-06 21:12 UTC (permalink / raw)
  To: Premi, Sanjeev; +Cc: linux-pm

On Fri, Apr 03, 2009 at 01:55:06AM +0530, Premi, Sanjeev wrote:
> I have just started looking at the PM QoS implementation; I came across this
> text in "pm_qos_interface.txt"
> 
> [quote]
> The infrastructure exposes multiple misc device nodes one per implemented
> parameter.  The set of parameters implement is defined by pm_qos_power_init()
> and pm_qos_params.h.  This is done because having the available parameters
> being runtime configurable or changeable from a driver was seen as too easy to
> abuse.
> [/quote]
> 
> Though I have understood the intent; i feel it may also be limiting the use
> where there is a genuine need - specific to an arch/ platform.
> 
> Can we allow number of these params to grow upto a reasonable limit (say 8)?
> If an arch/platform does not specifies more params, everything remains same.
> But we get an opportunity to add arch/platform specific requirements.

If you do this then user mode software using the interface will not be
portable across architectures.  Is that what you want?


What parameters are you looking to add?  I have gotten very little
feedback on what parameters are missing or wanted.

> 
> Not sure if this has already been discussed earlier, but would like to hear
> more thoughts.

This has not been discussed, but changing the data structures to use
handles instead of strings has been brought up once, without data showing
that the strcmps were a measurable issue.

I'm very open to improvements, applications and further discussions.

--mgross


> 
> Best regards,
> Sanjeev


* Re: Adding PM QoS parameters
  2009-04-06 21:12 ` mark gross
@ 2009-04-07  9:00   ` Premi, Sanjeev
  2009-04-09 18:57     ` mark gross
  0 siblings, 1 reply; 14+ messages in thread
From: Premi, Sanjeev @ 2009-04-07  9:00 UTC (permalink / raw)
  To: mgross; +Cc: linux-pm

> -----Original Message-----
> From: mark gross [mailto:mgross@linux.intel.com] 
> Sent: Tuesday, April 07, 2009 2:43 AM
> To: Premi, Sanjeev
> Cc: linux-pm@lists.linux-foundation.org
> Subject: Re: [linux-pm] Adding PM QoS parameters
> 
> On Fri, Apr 03, 2009 at 01:55:06AM +0530, Premi, Sanjeev wrote:
> > I have just started looking at the PM QoS implementation; I 
> came across this
> > text in "pm_qos_interface.txt"
> > 
> > [quote]
> > The infrastructure exposes multiple misc device nodes one 
> per implemented
> > parameter.  The set of parameters implement is defined by 
> pm_qos_power_init()
> > and pm_qos_params.h.  This is done because having the 
> available parameters
> > being runtime configurable or changeable from a driver was 
> seen as too easy to
> > abuse.
> > [/quote]
> > 
> > Though I have understood the intent; i feel it may also be 
> limiting the use
> > where there is a genuine need - specific to an arch/ platform.
> > 
> > Can we allow number of these params to grow upto a 
> reasonable limit (say 8)?
> > If an arch/platform does not specifies more params, 
> everything remains same.
> > But we get an opportunity to add arch/platform specific 
> requirements.
> 
> If you do this then user mode software using the interface will not be
> portable across architectures.  Is that what you want?

[sp] Not really. I was more looking at extending the current set
     to include params specific to the architectures so that the
     apps running on those architectures are able to make use of
     these params without inventing custom tricks.

> 
> What parameters are you looking to add?  I have gotten very little
> feedback on what parameters are missing or wanted.

[sp] As an example, the OMAP processors support multiple
     scalable voltage domains. Depending upon operating
     conditions, the target voltage can be a param.

> 
> > 
> > Not sure if this has already been discussed earlier, but 
> would like to hear
> > more thoughts.
> 
> This has not been discussed, but changing the data structures to use
> handles instead of strings has brought up once without data 
> showing the
> strcmps where a measurable issue.
> 
> I'm very open to improvements, applications and further discussions.

[sp] Here is a rough outline

     Initialize the QoS array (with room to extend) as:

    static struct pm_qos_object *pm_qos_array[] = {
        &null_pm_qos,
        &cpu_dma_pm_qos,
        &network_lat_pm_qos,
        &network_throughput_pm_qos,
        &null_pm_qos,
        &null_pm_qos,
        &null_pm_qos,
        &null_pm_qos,
        &null_pm_qos,
    };

     Assuming there is an API to add objects to this array, one
     could define an additional param as:

    static BLOCKING_NOTIFIER_HEAD(voltage_notifier);
    static struct pm_qos_object voltage_pm_qos = {
        .requirements = {LIST_HEAD_INIT(voltage_pm_qos.requirements.list)},
        .notifiers = &voltage_notifier,
        .name = "voltage_level",
        .default_value = 1200,              /* in mVolts */
        .target_value = ATOMIC_INIT(1200),  /* in mVolts */
        .comparitor = min_compare
    };

     Now, this can be added to pm_qos_array replacing a
     trailing null_pm_qos.

     Requirements for voltage levels across domains
     can be added to voltage_pm_qos.requirements.list
     in platform specific init.
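
     A hypothetical registration helper (nothing like this exists in
     pm_qos today; the name and the missing locking are only
     illustrative) could then be called from the platform init code,
     roughly:

    /* sketch only: fill the first free slot in pm_qos_array */
    int pm_qos_register_param(struct pm_qos_object *obj)
    {
        int i;

        /* locking against concurrent registration omitted for brevity;
         * slot 0 is the reserved null class
         */
        for (i = 1; i < ARRAY_SIZE(pm_qos_array); i++) {
            if (pm_qos_array[i] == &null_pm_qos) {
                pm_qos_array[i] = obj;
                return register_pm_qos_misc(obj); /* expose misc node */
            }
        }
        return -ENOSPC;  /* no free slot left */
    }

    /* from OMAP platform init */
    pm_qos_register_param(&voltage_pm_qos);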

     Applications for this platform can choose to use
     this param for better power savings, or choose
     to ignore it for portability.



     As I mentioned, I have just started on the QoS infra, so do
     correct me if any of this can be achieved via the current
     params.
> 
> --mgross
> 
> 
> > 
> > Best regards,
> > Sanjeev
> 
> 


* Re: Adding PM QoS parameters
  2009-04-07  9:00   ` Premi, Sanjeev
@ 2009-04-09 18:57     ` mark gross
  2009-04-14 12:24       ` Patrick Bellasi
  0 siblings, 1 reply; 14+ messages in thread
From: mark gross @ 2009-04-09 18:57 UTC (permalink / raw)
  To: Premi, Sanjeev; +Cc: linux-pm

On Tue, Apr 07, 2009 at 02:30:40PM +0530, Premi, Sanjeev wrote:
> > -----Original Message-----
> > From: mark gross [mailto:mgross@linux.intel.com] 
> > Sent: Tuesday, April 07, 2009 2:43 AM
> > To: Premi, Sanjeev
> > Cc: linux-pm@lists.linux-foundation.org
> > Subject: Re: [linux-pm] Adding PM QoS parameters
> > 
> > On Fri, Apr 03, 2009 at 01:55:06AM +0530, Premi, Sanjeev wrote:
> > > I have just started looking at the PM QoS implementation; I 
> > came across this
> > > text in "pm_qos_interface.txt"
> > > 
> > > [quote]
> > > The infrastructure exposes multiple misc device nodes one 
> > per implemented
> > > parameter.  The set of parameters implement is defined by 
> > pm_qos_power_init()
> > > and pm_qos_params.h.  This is done because having the 
> > available parameters
> > > being runtime configurable or changeable from a driver was 
> > seen as too easy to
> > > abuse.
> > > [/quote]
> > > 
> > > Though I have understood the intent; i feel it may also be 
> > limiting the use
> > > where there is a genuine need - specific to an arch/ platform.
> > > 
> > > Can we allow number of these params to grow upto a 
> > reasonable limit (say 8)?
> > > If an arch/platform does not specifies more params, 
> > everything remains same.
> > > But we get an opportunity to add arch/platform specific 
> > requirements.
> > 
> > If you do this then user mode software using the interface will not be
> > portable across architectures.  Is that what you want?
> 
> [sp] Not really. I was more looking at extending the current set
>      to include params specific to the architectures so that the
>      apps running on these apps are able to make use of these
>      params without inventing custom tricks.
> 
> > 
> > What parameters are you looking to add?  I have gotten very little
> > feedback on what parameters are missing or wanted.
> 
> [sp] As an example, the OMAP processors support multiple
>      scalable voltage domains. Depending upon operating
>      conditions, target voltage can be a param.
>

I can see that more parameters would make sense.  Can you come up with
a set of abstractions that have a chance of being portable?
 
> > 
> > > 
> > > Not sure if this has already been discussed earlier, but 
> > would like to hear
> > > more thoughts.
> > 
> > This has not been discussed, but changing the data structures to use
> > handles instead of strings has brought up once without data 
> > showing the
> > strcmps where a measurable issue.
> > 
> > I'm very open to improvements, applications and further discussions.
> 
> [sp] Here is rough outline
> 
>      Initialize the QoS array (with room to extend) as:
> 
>     static struct pm_qos_object *pm_qos_array[] = {
>         &null_pm_qos,
>         &cpu_dma_pm_qos,
>         &network_lat_pm_qos,
>         &network_throughput_pm_qos
>         &null_pm_qos,
>         &null_pm_qos,
>         &null_pm_qos,
>         &null_pm_qos,
>         &null_pm_qos,
>     };
> 
>      Assuming there is an API to add objects to this array, one
>      could define additional param as:
>

Nah, if this is where we need to go, then I would change the array to a
list and make it possible to add new parameters at runtime.

Note: this is how I originally implemented it, but I changed it to a
compile time array to force LKML review of new parameters.  The worry
was that driver writers would just add whatever qos param they wanted
and we would lose the consistent, stable ABI that the user mode clients
of the interface could expect.
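
Just to make the shape of that concrete (purely a sketch; none of these
names exist, and it assumes a list_head gets added to pm_qos_object):

    static LIST_HEAD(pm_qos_params_list);
    static DEFINE_MUTEX(pm_qos_params_lock);

    int pm_qos_add_param(struct pm_qos_object *obj)
    {
        mutex_lock(&pm_qos_params_lock);
        list_add_tail(&obj->list, &pm_qos_params_list);
        mutex_unlock(&pm_qos_params_lock);
        return register_pm_qos_misc(obj);  /* expose the misc device node */
    }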
 
>     static BLOCKING_NOTIFIER_HEAD(voltage_notifier);
>     static struct pm_qos_object voltage_pm_qos = {
>         .requirements = {LIST_HEAD_INIT(voltage_pm_qos.requirements.list)},
>         .notifiers = &voltage_notifier,
>         .name = "voltage_level",
>         .default_value = 1200,              /* in mVolts */
>         .target_value = ATOMIC_INIT(1200),  /* in mVolts */,
>         .comparitor = min_compare
>     };

Can we identify something that maps to voltage level that would have a
chance at being portable to non-OMAP platforms?  Higher level abstractions
are more attractive.

I'm having conflicting feelings on voltage as a QoS quantity, but I
totally see the utility of using the PM-QOS infrastructure to provide a
constraint framework on power domains. We may need to investigate this
more.

I would like to get more input from the peanut gallery on this before
acting on it too much.

 
>      Now, this can be added to pm_qos_array replacing a
>      trailing null_pm_qos.
> 
>      Requirements for voltage levels across domains
>      can be added to voltage_pm_qos.requirements.list
>      in platform specific init.
> 
>      Applications for this platform can choose to use
>      this param - for better power savings or choose
>      to ignore for portability.
> 
> 
> 
>      As I mentioned, I just started on the QoS infra. So, do
>      correct me if any of this can be achieved via current
>      params.

Right now adding new parameters is easy (other than dealing with LKML
questioning your choices for names and meanings).  To me you bring up 2
issues:

1) adding a voltage pm-qos parameter for omap power domains

2) is it the right thing to keep pm-qos-params a compile time array and
control the growth of the ABI via these mailing lists, or to make it a list
and enable drivers to create new parameters as they wish?

Both are good things for us to discuss on the list.


--mgross


* Re: Adding PM QoS parameters
  2009-04-09 18:57     ` mark gross
@ 2009-04-14 12:24       ` Patrick Bellasi
  2009-04-15 18:35         ` mark gross
  0 siblings, 1 reply; 14+ messages in thread
From: Patrick Bellasi @ 2009-04-14 12:24 UTC (permalink / raw)
  To: linux-pm

mark gross <mgross <at> linux.intel.com> writes:
 
> I can see that more parameters would make sense.  Can you come up with
> a set of abstractions that have a chance of being portable?

Hi,
   we are a group at STMicroelectronics that has been working on "Constrained
Power Management" for some months.

> > [sp] Here is rough outline
> > 
> >      Initialize the QoS array (with room to extend) as:
> > 
> >     static struct pm_qos_object *pm_qos_array[] = {
> >         &null_pm_qos,
> >         &cpu_dma_pm_qos,
> >         &network_lat_pm_qos,
> >         &network_throughput_pm_qos
> >         &null_pm_qos,
> >         &null_pm_qos,
> >         &null_pm_qos,
> >         &null_pm_qos,
> >         &null_pm_qos,
> >     };
> > 
> >      Assuming there is an API to add objects to this array, one
> >      could define additional param as:
> >
> 
> nah, If this is where we need to go, then I would change the array to a
> list and make it possible to add new parameters at runtime.
> 
> Note: this is how I originally implemented it but changed it to a
> compile time array to force LKML review of new parameters.  The worry
> was that driver writers would just add whatever qos param they wanted
> and we would loose on a consistent or stable ABI the user mode clients
> of the interface could expect.

I think that a good trade-off between "LKML control on new parameters" and
"platform extensibility" is hard to identify if we don't refine the concept of
QoS parameters first.
The QoS params defined by pm_qos should be general enough to be really useful
to applications, but not so abstract that they cannot support platform-specific
capabilities.
Since the core pm_qos implementation is anyway general enough to handle both
abstract and platform-specific params, maybe we should distinguish between
"abstract qos parameters" (AQP) and "platform-specific qos parameters" (PQP).

AQPs are intended to be used by applications to assert abstract
requirements on system behavior, while PQPs can be added by platform code in
order to enable the "constrained power management" paradigm for
architecture/board specific devices.

Under this hypothesis, the better solution would be a dynamic data structure
initialized by the core itself to contain just the set of AQPs that have been
reviewed and approved on LKML.
Platform code would then have the chance to add its own specific parameters too.

Moreover, AQPs would be exported to user land, in order to be asserted by
application software, while PQPs may be hidden within the core and accessible
only to platform drivers.
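
(As a minimal sketch of the distinction, assuming a scope flag were added to
the parameter object; all names below are only illustrative:)

    /* sketch: tag each parameter as abstract or platform-specific */
    enum pm_qos_scope {
        PM_QOS_ABSTRACT,            /* AQP: LKML reviewed, exported to user land */
        PM_QOS_PLATFORM_SPECIFIC,   /* PQP: added by platform code, kernel only */
    };

    struct pm_qos_object {
        /* ... existing fields: requirements, notifiers, name, ... */
        enum pm_qos_scope scope;
    };

    /* only AQPs would get a misc device node */
    if (obj->scope == PM_QOS_ABSTRACT)
        register_pm_qos_misc(obj);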


> Can we identify something that maps to voltage level that would have a
> chance at being portable to non-omap?  Higher level abstractions are
> more attractive.

I agree: user-land accessible params should be platform-independent and define
a portable API for applications.
This also requires sufficiently abstract parameters: e.g. network bandwidth
can easily be asserted by an application, while cpu-dma-latency is perhaps too
difficult to reason about at the application level.

> I'm having conflicting feelings on voltage as a QoS quantity, but I
> totally see the utility of using the PM-QOS infrastructure to provide a
> constraint framework on power domains. We may need to investigate this
> more.

In the view I depicted above, voltages could be defined as PQPs and be used
only by platform device drivers while being hidden from userspace.
In this case they could still exploit the pm_qos core capabilities.

> Right now adding new parameters is easy (other than dealing with LKML
> questioning your choices for names and meanings)  To me you bring up 2
> issues:
> 
> 1) adding a voltage pm-qos parameter for omap power domains

In my opinion this is reasonable only if we keep PQPs (Platform-specific QoS
Parameters) hidden from userspace by distinguishing them from AQPs (Abstract
QoS Parameters), which are sufficiently general and community approved.

> 2) is it the right thing to keep pm-qos-params a compile time array and
> control the growth of the ABI via these mailing lists or make it a list
> and enable driver creation of new parameters as they wish.

In my opinion a mixed approach, using a dynamic data structure, could better
satisfy both requirements.

> Both are good things for us to discuss on the list.

We are tuned in to this thread and happy to contribute to the discussion.

Best regards,
Patrick

-- 
#include <best/regards.h>

DERKLING

<-------------------------------------------------------------------------->
  Patrick Bellasi <bellasi at elet dot polimi dot it>
  PhD student at Politecnico di Milano
 
  Privacy:
   - GnuPG     0x72ABC1EE (keyserver.linux.it)
      pub      1024D/72ABC1EE 2003-12-04
      Key fingerprint = 3958 7B5F 36EC D1F8 C752
                               9589 C3B7 FD49 72AB C1EE
<-------------------------------------------------------------------------->


* Re: Adding PM QoS parameters
  2009-04-14 12:24       ` Patrick Bellasi
@ 2009-04-15 18:35         ` mark gross
  2009-04-21  8:08           ` Derkling
  0 siblings, 1 reply; 14+ messages in thread
From: mark gross @ 2009-04-15 18:35 UTC (permalink / raw)
  To: Patrick Bellasi; +Cc: linux-pm

On Tue, Apr 14, 2009 at 12:24:20PM +0000, Patrick Bellasi wrote:
> mark gross <mgross <at> linux.intel.com> writes:
>  
> > I can see that more parameters would make sense.  Can you come up with
> > a set of abstractions that have a chance of being portable?
> 
> Hi,
>    we are a group that since some months is working on "Constrained Power
> Management" in STMicroelectronics. 

I would like to hear what your ideas are around applying constraints to
power management!  The notion of constraint based PM has been rattling
around, in my head and elsewhere, for a while now (a couple of years).
PMQoS is just an early application of it.  I think a lot more could be
done in this area.

Recently I've been thinking about re-factoring PMQoS in possibly crazy
ways with the goal of somehow supporting a more generic constraint
notion without making the PMQoS ABI into a free for all.
 
> > > [sp] Here is rough outline
> > > 
> > >      Initialize the QoS array (with room to extend) as:
> > > 
> > >     static struct pm_qos_object *pm_qos_array[] = {
> > >         &null_pm_qos,
> > >         &cpu_dma_pm_qos,
> > >         &network_lat_pm_qos,
> > >         &network_throughput_pm_qos
> > >         &null_pm_qos,
> > >         &null_pm_qos,
> > >         &null_pm_qos,
> > >         &null_pm_qos,
> > >         &null_pm_qos,
> > >     };
> > > 
> > >      Assuming there is an API to add objects to this array, one
> > >      could define additional param as:
> > >
> > 
> > nah, If this is where we need to go, then I would change the array to a
> > list and make it possible to add new parameters at runtime.
> > 
> > Note: this is how I originally implemented it but changed it to a
> > compile time array to force LKML review of new parameters.  The worry
> > was that driver writers would just add whatever qos param they wanted
> > and we would loose on a consistent or stable ABI the user mode clients
> > of the interface could expect.
> 
> I think that a good trade-off between "LKML control on new parameters" and
> "platform extensibility" is hard to identify if we don't refine the concept of
> QoS parameters first.
> The QoS params defined by pm_qos should avoid to be not-sufficiently general,
> to be really useful to applications, but also avoid to be too much abstract to
> support platform-specific capabilities.
> Since anyway the core pm_qos implementation is sufficiently general to handle
> both abstract and platform-specific params, maybe we should better distinguish
> among "abstract qos parameters" (AQP) and "platform-specific qos parameters"
> PQP).
> 
> AQP should be intended to be used by applications to assert abstract
> requirements on system behaviors, while PQP can be added by platform code in
> order to enable the "constrained power management paradigm" for
> architecture/board specific devices.

Maybe.
 
> In this hypothesis the better solution would be to use a dynamic data structure
> that will be initialized by the core itself to contain just the set of AQP that
> has been reviewed and approved by LKML.
> Platform code will then have the chance to add its own specific parameters too.
> 
> Moreover we could imagine that AQP will be exported to user-land, in order to
> be asserted by application software, while PQP may be hidden within the core
> and accessible only by platform drivers.
> 

I don't know if we can keep any PQP interfaces kernel only.  Policy
managers really like to run in user mode, even if it's just to set the
constraints.

> 
> > Can we identify something that maps to voltage level that would have a
> > chance at being portable to non-omap?  Higher level abstractions are
> > more attractive.
> 
> I agree: user-land accessible params should be platform-independent and define
> a portable API for applications.
> This requires also to have sufficiently abstract parameters: e.g. network
> bandwidth can be easily asserted by an application while cpu-dma-latency is
> perhaps too difficult to identify at application level.

DMA latency is a somewhat sucky name for constraining CPU Idle /
C-states, but I can't think of a better name.

> 
> > I'm having conflicting feelings on voltage as a QoS quantity, but I
> > totally see the utility of using the PM-QOS infrastructure to provide a
> > constraint framework on power domains. We may need to investigate this
> > more.
> 
> In the view I've depicted before: voltages can be eventually defined as PQP
> and be used only by platform device drivers while being hidden to userspace.
> In this case they could still exploit pm_qos core capabilities.

True, but voltage isn't actually a QoS parameter.  Where it is set
certainly affects QoS, but its units are all wrong for QoS, and some
other constraint mechanism (perhaps platform specific) may be needed.
 
> > Right now adding new parameters is easy (other than dealing with LKML
> > questioning your choices for names and meanings)  To me you bring up 2
> > issues:
> > 
> > 1) adding a voltage pm-qos parameter for omap power domains
> 
> In my opinion this is reasonable only if we keep PQP (Platform-specific QoS
> Parameters) hidden from userspace by distinguishing them from AQP (Abstract
> QoS Parameters) which instead are sufficiently general and community approved.

As I type this reply I'm thinking an OK way could be to re-factor PMQoS
into a constraint framework that exposes platform specific constraint
ABIs (in some TBD sane manner), and set PMQoS on top of this, keeping the
same ABIs and KABIs stable.

I could use some input on the way folks anticipate a constraint
infrastructure being used.  How hot could the code paths be?  How complex
could the dependencies and interdependencies become?

Am I thinking about taking a walk on a slippery slope?
 
> > 2) is it the right thing to keep pm-qos-params a compile time array and
> > control the growth of the ABI via these mailing lists or make it a list
> > and enable driver creation of new parameters as they wish.
> 
> In my opinion a mixed approach, using a dynamic data structure, could be more
> interesting to target both requirements.
> 
> > Both are good things for us to discuss on the list.
> 
> We are tuned on this thread and happy to contribute to the discussion.

very cool.

--mgross

> 
> Best regards,
> Patrick
> 
> -- 
> #include <best/regards.h>
> 
> DERKLING
> 
> <-------------------------------------------------------------------------->
>   Patrick Bellasi <bellasi at elet dot polimi dot it>
>   PhD student at Politecnico di Milano
>  
>   Privacy:
>    - GnuPG     0x72ABC1EE (keyserver.linux.it)
>       pub      1024D/72ABC1EE 2003-12-04
>       Key fingerprint = 3958 7B5F 36EC D1F8 C752
>                                9589 C3B7 FD49 72AB C1EE
> <-------------------------------------------------------------------------->
> 
> 


* Re: Adding PM QoS parameters
  2009-04-15 18:35         ` mark gross
@ 2009-04-21  8:08           ` Derkling
  2009-04-21 23:43             ` mark gross
  0 siblings, 1 reply; 14+ messages in thread
From: Derkling @ 2009-04-21  8:08 UTC (permalink / raw)
  To: linux-pm; +Cc: Matteo Carnevali, David Siorpaes, Stefano Bosisio

On Wed, Apr 15, 2009 at 20:35, mark gross <mgross@linux.intel.com> wrote:
> > Hi,
> >    we are a group that since some months is working on "Constrained Power
> > Management" in STMicroelectronics.
>
> I would like to hear what your ideas are around applying constraints to
> power management!  The notion of constraint based PM has been rattling
> around, in my head and elsewhere, for a while now (a couple of years).
> PMQoS is just an early application of it.  I think a lot more could be
> done in this area.

We have been working on the concept of constrained power management since the
times of DPM.
We ported that framework to the Nomadik platform and then started reasoning
about a different implementation, with the aim of overcoming its main drawbacks.

In the meantime pm_qos was released and, since then, this topic, i.e.
constrained power management (CPM), has officially become the main subject of
my PhD study; I agree with you that a lot can be done in this area.

In the last few months we have focused on these tasks:
1. define a formal model to state the CPM problem (based both on linear
programming and constraint programming)
2. evaluate pm_qos with respect to this formal model and highlight its
potential weaknesses
3. identify an extension/refactoring of pm_qos in order to overcome its
limitations and perhaps advance the proposal for a more general implementation
4. code a first release of the proposal and submit it to the community for
review ... and now we are at THIS fourth step.

> Recently I've been thinking about re-factoring PMQoS in possibly crazy
> ways with the goal of somehow supporting a more generic constraint
> notion without making the PMQoS ABI into a free for all.

Our re-factoring is for sure quite crazy. The proposal inherits many ideas and
concepts from both DPM and PM_QoS, hopefully only the strengths of these two
frameworks:
- from DPM the concept of Operating Points (OP) (but don't worry: it has been
	highly reworked)
- from PM_QoS the concept of an "aggregation function" for constraint management

The idea is still to consider the framework to be simply a "constraint manager"
that supports a "distributed power management model".
In such a model each driver usually runs its own local optimization policy
(e.g. cpufreq), but at the same time exchanges some information with the rest of
the system in order to either require or provide QoS levels
(e.g. admitted latency).

In this big picture PM_QoS works quite well, but we have perhaps spotted two
limitations to be investigated further:
1. it is a "best-effort" framework
2. it does not properly handle "additive constraints"

* Best-effort: pm_qos cannot guarantee that a required QoS level can actually
be provided and, even worse, pm_qos doesn't provide mechanisms to notify a
constraint requester that the required qos level cannot be satisfied by other
components.
This can turn into a "misconfiguration" of a device: its local policy
configures itself expecting the required level even when it is actually
impossible to get it.
No one notifies the device about the effective QoS levels that can be reached;
all that other devices can do is their best in order to grant the required
level.
Could that lead to only sub-optimal power optimization for the device
requesting the constraint? We suspect so...

* Additive constraints: pm_qos perhaps does not properly aggregate constraint
requests on shared resources.
If multiple applications assert a bandwidth (or throughput) request, pm_qos
aggregates them by taking the MAX (or MIN) of the collected requests; this
does not seem quite correct.
We think that in this case (e.g. bandwidth) the driver of the service provider
should be aware of the "accumulated" requests to properly configure itself and
satisfy all concurrent requirements.
We call "additive constraints" those that refer to shared resources
(e.g. bandwidth). They differ from "restrictive constraints" (e.g. latency)
in that the aggregation function cannot be a simple "bound value" (i.e. max
or min) but should be a "sum value" in order to get the real system-wide QoS
requirement.
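
(As a minimal sketch of what we mean: next to the existing bound-value
aggregators, an additive parameter would sum its requirement list. The names
mirror the pm_qos style, but the sum aggregator is only illustrative; the
current update_target() seeds its walk with default_value, so a real
implementation would need a neutral element instead:)

    /* existing style of aggregator: keep only a bound value */
    static s32 max_compare(s32 v1, s32 v2)
    {
        return max(v1, v2);
    }

    /* sketch: an "additive" aggregator for shared resources (e.g. bandwidth),
     * accumulating the total system-wide requirement
     */
    static s32 sum_aggregate(s32 v1, s32 v2)
    {
        return v1 + v2;
    }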

These considerations made us think about a rework of pm_qos in order to build
a more comprehensive framework able to:
- properly handle both restrictive and additive constraints
- provide an agreement mechanism that notifies constraint requesters about the
real run-time capabilities of the system.

These are basically the ideas behind our rework. As soon as it is ready we
will deliver the code for review. In the meantime we are also producing a
paper describing in detail the theoretical foundation of the model we propose
and its implementation.

> > I think that a good trade-off between "LKML control on new parameters" and
> > "platform extensibility" is hard to identify if we don't refine the concept of
> > QoS parameters first.
> > The QoS params defined by pm_qos should avoid to be not-sufficiently general,
> > to be really useful to applications, but also avoid to be too much abstract to
> > support platform-specific capabilities.
> > Since anyway the core pm_qos implementation is sufficiently general to handle
> > both abstract and platform-specific params, maybe we should better distinguish
> > among "abstract qos parameters" (AQP) and "platform-specific qos parameters"
> > PQP).
> >
> > AQP should be intended to be used by applications to assert abstract
> > requirements on system behaviors, while PQP can be added by platform code in
> > order to enable the "constrained power management paradigm" for
> > architecture/board specific devices.
>
> Maybe.

In our model platform code can define both platform-specific and architecture
independent constraints. The former, since they are architecture specific, can
be exposed read-only to user-space (e.g. just to be used for debugging
purposes). The latter can instead be exposed read-write, and applications can
assert requirements on them. These constraints should be sufficiently abstract
to be platform independent and generally usable to express high-level
requirements for an application (e.g. the network bandwidth a VoIP application
requires).

A device driver is in charge of "mapping" abstract parameters onto its own
specific params and possibly onto other platform-specific parameters when it
is a platform driver (e.g. a network driver can map a bandwidth request onto a
platform-specific constraint on dma-latency or amba-bus throughput).
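
(A rough sketch of such a mapping using the current in-kernel API
(pm_qos_add_requirement/pm_qos_update_requirement); the driver name, the
threshold and the latency values are made up:)

    #include <linux/pm_qos_params.h>

    /* at probe time the driver registers a named requirement once:
     *   pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, "mydrv",
     *                          PM_QOS_DEFAULT_VALUE);
     */
    static void mydrv_request_bandwidth(s32 kbps)
    {
        /* higher bandwidth -> tighter wake-up latency tolerated (usecs) */
        s32 latency_us = (kbps > 10000) ? 50 : 2000;

        pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY, "mydrv",
                                  latency_us);
    }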


> > In this hypothesis the better solution would be to use a dynamic data structure
> > that will be initialized by the core itself to contain just the set of AQP that
> > has been reviewed and approved by LKML.
> > Platform code will then have the chance to add its own specific parameters too.
> >
> > Moreover we could imagine that AQP will be exported to user-land, in order to
> > be asserted by application software, while PQP may be hidden within the core
> > and accessible only by platform drivers.
> >
>
> I don't know if we can keep any PQP interfaces kernel only.  Policy
> managers really like to run in user mode, even if its just to set the
> constraints.

This is not what actually goes on with either cpufreq or cpuidle.
Really few clients of pm_qos exist now, but among them a kernel-space running
policy seems to be the more widely adopted solution... maybe because of
efficiency? User-space can still be in charge of choosing the policy/governor,
but then it's up to that piece of code to manage constraint requests.
In the case where a user-space policy is really needed, a simple kernel-space
wrapper defining a "forwarding policy" would be sufficient to expose the
required PQP to user-space.


> > I agree: user-land accessible params should be platform-independent and define
> > a portable API for applications.
> > This requires also to have sufficiently abstract parameters: e.g. network
> > bandwidth can be easily asserted by an application while cpu-dma-latency is
> > perhaps too difficult to identify at application level.
>
> DMA latency is a somewhat sucky name for constraining CPU Idle /
> C-states, but I can't think of a better name.

I understand, but: is it so common to have user-space code that needs to assert
such "real-time" requirements? It seems to us that user-land should be given
access only to sufficiently abstract constraints that roughly define system
requirements, while more architecture-specific constraints should come from
drivers. This should improve solution portability, shouldn't it?


> As I type this reply I'm thinking an ok way could be to re-factor PMQoS
> into a constraint framework that exposes platform specific constraint
> ABI's (in some TBD sane manner---somehow), and set PMQoS on top of this
> keeping same ABI and KABI's stable.

Are you thinking of something like an abstract API that each platform's code
implements in its own way?

> I could use some input on the way folks anticipate a constraint
> infrastructure to be used.  How hot could the code paths be?  How complex
> could the dependencies and inter dependencies become?

These are interesting questions! We should look deeper into those aspects;
anyway, we think that dependencies, e.g. among constraints, can also be an
interesting concept for building modular solutions.
For instance, the mapping that a driver needs to provide between a
(platform-independent) abstract constraint and a platform-specific one should
allow writing more portable drivers that:
1. tune on platform-independent requirements
2. translate abstract requirements into platform-specific ones

> Am I thinking about taking a walk on a slippery slope?
>
> > > 2) is it the right thing to keep pm-qos-params a compile time array and
> > > control the growth of the ABI via these mailing lists or make it a list
> > > and enable driver creation of new parameters as they wish.
> >
> > In my opinion a mixed approach, using a dynamic data structure, could be more
> > interesting to target both requirements.
> >
> > > Both are good things for us to discuss on the list.
> >
> > We are tuned on this thread and happy to contribute to the discussion.
>
> very cool.

It will be interesting even just to have some shared ideas on the table before
the upcoming LPC.

Best regards,
Patrick


--
#include <best/regards.h>

DERKLING
LRU 338214 (http://counter.li.org)

<-------------------------------------------------------------------->
  Patrick Bellasi <bellasi at elet dot polimi dot it>
  PhD student at Politecnico di Milano

  Privacy:
   - GnuPG     0x72ABC1EE (keyserver.linux.it)
      pub      1024D/72ABC1EE 2003-12-04
      Key fingerprint = 3958 7B5F 36EC D1F8 C752
                               9589 C3B7 FD49 72AB C1EE
<-------------------------------------------------------------------->


* Re: Adding PM QoS parameters
  2009-04-21  8:08           ` Derkling
@ 2009-04-21 23:43             ` mark gross
  2009-04-27 12:50               ` Matteo Carnevali
  0 siblings, 1 reply; 14+ messages in thread
From: mark gross @ 2009-04-21 23:43 UTC (permalink / raw)
  To: Derkling; +Cc: Matteo Carnevali, David Siorpaes, linux-pm, Stefano Bosisio

On Tue, Apr 21, 2009 at 10:08:18AM +0200, Derkling wrote:
> On Wed, Apr 15, 2009 at 20:35, mark gross <mgross@linux.intel.com> wrote:
> > > Hi,
> > >    we are a group that since some months is working on "Constrained Power
> > > Management" in STMicroelectronics.
> >
> > I would like to hear what your ideas are around applying constraints to
> > power management!  The notion of constraint based PM has been rattling
> > around, in my head and elsewhere, for a while now (a couple of years).
> > PMQoS is just an early application of it.  I think a lot more could be
> > done in this area.
> 
> We are working on the concept of constrained power management since the times
> of DPM.
> We ported that framework to Nomadik platform and then we started reasoning on a
> different implementation with the aims of overcoming its main drawbacks.
> 
> In the meantime pm_qos had been released and since then, this topic, i.e.
> constrained power management (CPM), has officially become the main subject of my
> PhD study; I agree with you that a lot can be done in this area.

cool.
 
> In the last few months have focused on these tasks:
> 1. define a formal model to state the CPM problem (based both on linear
> programming and constraint programming)
> 2. evaluate pm_qos with respect to this formal model and highlight its
> potential wekenesses
> 3. identify an extension/refactoring of pm_qos in order to overcame its
> limitations and perhaps advance the proposal for a more general implementation
> 4. code a first release of the proposal and submit it to the community for a
> review ... and now we are at THIS fourth step.
> 

also cool.

> > Recently I've been thinking about re-factoring PMQoS in possibly crazy
> > ways with the goal of somehow supporting a more generic constraint
> > notion without making the PMQoS ABI into a free for all.
> 
> Our re-factoring is for sure quite crazy. The proposal inherits many ideas and
> concepts from both DPM and PM_QoS, perhaps only the strenghts of these two
> frameworks:
> - from DPM the concept of Operating Points (OP) (but don't worry: it has been
> 	highly reworked)
> - from PM_QoS the concept of "aggregation function" for constraint management
> 
> The idea is still to consider the framework to be simply a "constraint manager"
> that supports a "distributed power management model".
> In such a model each driver usually runs its own local optimization policy
> (e.g. cpufreq), but at the same time exchanges some information with the rest of
> the system in order to either require or provide QoS levels
> (e.g. admitted latency).
> 
> In this big picture PM_QoS works quite well, but perhaps we have spotted two
> limitations to be better investigated:
> 1. it is a "best-effort" framework

this is a hard nut to crack, if you get anywhere on it do let me know!

> 2. it does not handle properly "additive constraints"

this is easier to address.

> 
> * Best-effort: pm_qos cannot grant that a required QoS level can be effectively
> provided, and even worst pm_qos don't provide mechanisms to notify a constraint
> requester that the required qos level cannot be satisfied by other components.
> This can turn into a "misconfiguration" of a device: its local policy configure
> itself expecting the required level even when actually it's impossible to get
> that.
> No one notifies the device about the effective QoS levels that can be
> reached; all that other devices can do it is only the best they can in order to
> grant the required level.
> Can that leads to an only sub-optimal power optimization for the device
> requesting the constraint? We have this suspect...

There is room to grow here.

> 
> * additive constraints: pm_qos perhaps does not properly aggregate constraints
> requests on shared resources.
> If multiple applications assert a bandwidth (or throughput) request: pm_qos
> aggregate them considering the MAX (or MIN) of the collected requests:
> this seems to be not properly correct.
> We think that in this case (e.g. bandwidth) the driver of the service provider
> should be aware of the "accumulated" requests to properly configure itself and
> satisfy all concurrent requirements.
> We define "additive constraints" those that refer to shared resources
> (e.g. bandwidth). They differ from "restrictive constraints" (e.g. latency)
> since the aggregation function cannot be a simple "bound-value" (i.e. max
> or min) but should be an "sum-value" in order to get the real system-wide QoS
> requirement.

Do a thought experiment around network bandwidth, where multiple
applications ask for max bandwidth from the NIC (just like they all
expect 100% of the CPU resources).  You quickly get to a point where
(just as for CPU resources) aggregating cumulative QoS requests as
a sum becomes silly.

I know there are applications that need hard real time and "hard QoS"
from the platform, and the idea of building up an OS around the QoS
notion is "interesting" (really), but I just didn't see how I could
apply that to an initial implementation like pmqos.

> 
> These considerations made us thinking about a rework on pm_qos in order to build
> a more comprehensive framework able to:
> - properly handle both restrictive and additive constraints
> - provide an agreement mechanism that allows to notify constraint requester
> about real system run-time capabilities.
> 

It looks like some useful nuggets may come from your work.

> These are basically the ideas behind our rework. As soon as it will be ready we
> will deliver the code for review. In the meantime we are also producing a paper
> describing in details theoretical foundation of the model we advance and its
> implementation.

very cool.


> 
> > > I think that a good trade-off between "LKML control on new parameters" and
> > > "platform extensibility" is hard to identify if we don't refine the concept of
> > > QoS parameters first.
> > > The QoS params defined by pm_qos should avoid to be not-sufficiently general,
> > > to be really useful to applications, but also avoid to be too much abstract to
> > > support platform-specific capabilities.
> > > Since anyway the core pm_qos implementation is sufficiently general to handle
> > > both abstract and platform-specific params, maybe we should better distinguish
> > > among "abstract qos parameters" (AQP) and "platform-specific qos parameters"
> > > PQP).
> > >
> > > AQP should be intended to be used by applications to assert abstract
> > > requirements on system behaviors, while PQP can be added by platform code in
> > > order to enable the "constrained power management paradigm" for
> > > architecture/board specific devices.
> >
> > Maybe.
> 
> In our model platform code can define both platform-specific and architecture
> independent constraints. The formers, since are architecture specific, can be
> exposed read-only to user-space (e.g. just to be used for debugging purpose).
> The latter instead can be exposed read-write and applications can assert
> requirements on them. These constraints should be sufficiently abstract to be
> platform independent and generally usable to express high-level requirements for
> an application (e.g. the network bandwidth a VoIP application requires).
> 
> A device drivers is in charge to "maps" abstract parameters on its own specific
> params and eventually on other platform-specific parameters, whenever they are
> platform driver (e.g. a network driver can map a bandwidth request on a
> platform-specific constraint on dma-latency or amba-bus throughput).
> 
> 
> > > In this hypothesis the better solution would be to use a dynamic data structure
> > > that will be initialized by the core itself to contain just the set of AQP that
> > > has been reviewed and approved by LKML.
> > > Platform code will then have the chance to add its own specific parameters too.
> > >
> > > Moreover we could imagine that AQP will be exported to user-land, in order to
> > > be asserted by application software, while PQP may be hidden within the core
> > > and accessible only by platform drivers.
> > >
> >
> > I don't know if we can keep any PQP interfaces kernel only.  Policy
> > managers really like to run in user mode, even if its just to set the
> > constraints.
> 
> This is not what is actually going on with both cpufreq and cpuidle.
> Really few clients of pm_qos exist now, but among them the kernel-space running
> policy seems to be the more widely adopted solution... may be because of
> efficiency? User-space can still be in charge of choosing the policy/governor,
> but then it's up to this piece of code to manage constraints requests.
> In the case of it needs having a user-space policy, a simple kernel-space
> wrapper defining a "forwarding policy" will be sufficient to expose the required
> PQP to user-space.


I need to think more about this when your code is available.
 
> 
> > > I agree: user-land accessible params should be platform-independent and define
> > > a portable API for applications.
> > > This requires also to have sufficiently abstract parameters: e.g. network
> > > bandwidth can be easily asserted by an application while cpu-dma-latency is
> > > perhaps too difficult to identify at application level.
> >
> > DMA latency is a somewhat sucky name for constraining CPU Idle /
> > C-states, but I can't think of a better name.
> 
> I understand, but: is it so common to have user-space code that needs to assert
> such "real-time" requirements? It seems to us that user-land should be given
> access only to sufficiently abstract constraints that roughly define system
> requirements. While more architecture-specific constraints should come from
> drivers. This should improve solution portability, isn't it?
>

cpu dma latency was chosen thinking it was an abstract / canonical
notion across CPU architectures.  I guess idle-cpu-wake-up-latency may be
slightly better.

Hardware will only get better at being idle; the longer a system can
stay in a low power idle state, the more likely user mode applications
(policy managers mostly) will need APIs to modify constraints.

> 
> > As I type this reply I'm thinking an ok way could be to re-factor PMQoS
> > into a constraint framework that exposes platform specific constraint
> > ABI's (in some TBD sane manner---somehow), and set PMQoS on top of this
> > keeping same ABI and KABI's stable.
> 
> Are you thinking to something like an abstract API that each platform code
> should implements in its own way?

no.

But I would like to make it easier for different architectures to add
stuff like voltage constraints, which don't really fit in a QoS-only world,
without making a mess of the ABI.  Basically I'd like to keep those
sorts of things off to the side and protect the ABI stability.

> 
> > I could use some input on the way folks anticipate a constraint
> > infrastructure to be used.  How hot could the code paths be?  How complex
> > could the dependencies and inter dependencies become?
> 
> These are interesting questions! We should deepen those aspects, anyway we think
> that dependencies, e.g. among constraints, can also be an interesting concept in
> order to build modular solutions.
> In instance the mapping that a driver needs to provide between an
> (platform-independent) abstract constraint and a platform-specific one should
> allow to write more portable drivers that:
> 1. tune on platform-independent requirements
> 2. translate abstract requirements into platform-specific ones

True.  When thinking about parameters, I try to think top down: what
does application X need to execute "properly"?  If you go bottom-up
from DPM or OP-points you end up with a mess only a mother could love.

Keep in mind pm-qos is about best effort PM while providing what the
applications need to execute properly.  This is not optimal PM control
theory (yet).  Don't feel bad if there is push back on some ideas; the
theme of providing an interface for enabling best effort PM will be
protected.
 
> > Am I thinking about taking a walk on a slippery slope?
> >
> > > > 2) is it the right thing to keep pm-qos-params a compile time array and
> > > > control the growth of the ABI via these mailing lists or make it a list
> > > > and enable driver creation of new parameters as they wish.
> > >
> > > In my opinion a mixed approach, using a dynamic data structure, could be more
> > > interesting to target both requirements.
> > >
> > > > Both are good things for us to discuss on the list.
> > >
> > > We are tuned on this thread and happy to contribute to the discussion.
> >
> > very cool.
> 
> It will be interesting even just to have some shared ideas on the table before
> the upcoming LPC.

It would be really cool to be ready to talk about and work on this stuff at
the LPC.  FWIW I've been grounded by cost saving measures and the LPC is
about the only conference I'll get to this year.  I don't like it, but
that's the way it goes some years.

we should keep in touch and bounce some ideas around.

--mgross


* Re: Adding PM QoS parameters
  2009-04-21 23:43             ` mark gross
@ 2009-04-27 12:50               ` Matteo Carnevali
  2009-04-27 20:46                 ` mark gross
  0 siblings, 1 reply; 14+ messages in thread
From: Matteo Carnevali @ 2009-04-27 12:50 UTC (permalink / raw)
  To: mgross
  Cc: Patrick Bellasi, David Siorpaes, linux-pm, Premi Sanjeev,
	Stefano Bosisio



On Wed, Apr 22, 2009 at 1:43 AM, mark gross <mgross@linux.intel.com> wrote:

> On Tue, Apr 21, 2009 at 10:08:18AM +0200, Derkling wrote:
> > On Wed, Apr 15, 2009 at 20:35, mark gross <mgross@linux.intel.com>
> wrote:
> > > > Hi,
> > > >    we are a group that since some months is working on "Constrained
> Power
> > > > Management" in STMicroelectronics.
> > >
>

Hi, I'm Matteo Carnevali and I'm working in ST with Patrick on the topic of
CPM.


>
> > > I would like to hear what your ideas are around applying constraints to
> > > power management!  The notion of constraint based PM has been rattling
> > > around, in my head and elsewhere, for a while now (a couple of years).
> > > PMQoS is just an early application of it.  I think a lot more could be
> > > done in this area.
> >
> > We are working on the concept of constrained power management since the
> times
> > of DPM.
> > We ported that framework to Nomadik platform and then we started
> reasoning on a
> > different implementation with the aims of overcoming its main drawbacks.
> >
> > In the meantime pm_qos had been released and since then, this topic, i.e.
> > constrained power management (CPM), has officially become the main
> subject of my
> > PhD study; I agree with you that a lot can be done in this area.
>
> cool.
>
> > In the last few months have focused on these tasks:
> > 1. define a formal model to state the CPM problem (based both on linear
> > programming and constraint programming)
> > 2. evaluate pm_qos with respect to this formal model and highlight its
> > potential wekenesses
> > 3. identify an extension/refactoring of pm_qos in order to overcame its
> > limitations and perhaps advance the proposal for a more general
> implementation
> > 4. code a first release of the proposal and submit it to the community
> for a
> > review ... and now we are at THIS fourth step.
> >
>
> also cool.
>
> > > Recently I've been thinking about re-factoring PMQoS in possibly crazy
> > > ways with the goal of somehow supporting a more generic constraint
> > > notion without making the PMQoS ABI into a free for all.
> >
> > Our re-factoring is for sure quite crazy. The proposal inherits many
> ideas and
> > concepts from both DPM and PM_QoS, perhaps only the strenghts of these
> two
> > frameworks:
> > - from DPM the concept of Operating Points (OP) (but don't worry: it has
> been
> >       highly reworked)
> > - from PM_QoS the concept of "aggregation function" for constraint
> management
> >
> > The idea is still to consider the framework to be simply a "constraint
> manager"
> > that supports a "distributed power management model".
> > In such a model each driver usually runs its own local optimization
> policy
> > (e.g. cpufreq), but at the same time exchanges some information with the
> rest of
> > the system in order to either require or provide QoS levels
> > (e.g. admitted latency).
> >
> > In this big picture PM_QoS works quite well, but perhaps we have spotted
> two
> > limitations to be better investigated:
> > 1. it is a "best-effort" framework
>
> this is a hard nut to crack, if you get anywhere on it do let me know!
>
> > 2. it does not handle properly "additive constraints"
>
> this is easier to address.
>
> >
> > * Best-effort: pm_qos cannot grant that a required QoS level can be
> effectively
> > provided, and even worst pm_qos don't provide mechanisms to notify a
> constraint
> > requester that the required qos level cannot be satisfied by other
> components.
> > This can turn into a "misconfiguration" of a device: its local policy
> configure
> > itself expecting the required level even when actually it's impossible to
> get
> > that.
> > No one notifies the device about the effective QoS levels that can be
> > reached; all that other devices can do it is only the best they can in
> order to
> > grant the required level.
> > Can that leads to an only sub-optimal power optimization for the device
> > requesting the constraint? We have this suspect...
>
> There is room to grow here.
>
> >
> > * additive constraints: pm_qos perhaps does not properly aggregate
> constraints
> > requests on shared resources.
> > If multiple applications assert a bandwidth (or throughput) request:
> pm_qos
> > aggregate them considering the MAX (or MIN) of the collected requests:
> > this seems to be not properly correct.
> > We think that in this case (e.g. bandwidth) the driver of the service
> provider
> > should be aware of the "accumulated" requests to properly configure
> itself and
> > satisfy all concurrent requirements.
> > We define "additive constraints" those that refer to shared resources
> > (e.g. bandwidth). They differ from "restrictive constraints" (e.g.
> latency)
> > since the aggregation function cannot be a simple "bound-value" (i.e. max
> > or min) but should be an "sum-value" in order to get the real system-wide
> QoS
> > requirement.
>
> do a thought experiment around network bandwidth, where multiple
> applications ask for max bandwidth from the NIC.  (just like they all
> expect to 100% of the CPU resources)  You quickly get to a point where
> (just like for CPU resources) that aggregating cumulative QoS requests as
> a sum becomes silly.
>

Yes, if many applications all ask for the MAX bandwidth from the NIC it makes
no sense to aggregate in an additive manner... and the problem turns into a
resource management one.

On the other hand, consider the case where the NIC has multiple operating
points, defined by the local driver configuration, where each point is
characterized, for instance, by a certain amount of power required to work
and by a bound on the bandwidth it can provide, e.g.:
- 1 watt power consumption -> max 12 Mbit/s
- 2 watts power consumption -> max 24 Mbit/s
- 3 watts power consumption -> max 54 Mbit/s (full bandwidth)
In a dummy scenario like this it may be useful to aggregate the bandwidth
requirements with an additive method instead of max/min.
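
To make the difference concrete, here is a throwaway sketch (plain C, not
pm_qos code; the three request values are invented) of how a bound-value and
a sum-value aggregation disagree on the NIC example above:

/*
 * Toy illustration only: three clients each request 10 Mbit/s from the
 * NIC.  A max "bound-value" aggregation sees just 10 Mbit/s and would
 * keep the 1 W / 12 Mbit/s operating point; a "sum-value" aggregation
 * sees 30 Mbit/s and would pick the 2 W / 24 Mbit/s point, which is
 * what the clients actually need in total.
 */
#include <stdio.h>

static int agg_max(const int *req, int n)
{
    int i, v = 0;

    for (i = 0; i < n; i++)
        if (req[i] > v)
            v = req[i];
    return v;
}

static int agg_sum(const int *req, int n)
{
    int i, v = 0;

    for (i = 0; i < n; i++)
        v += req[i];
    return v;
}

int main(void)
{
    int requests[] = { 10, 10, 10 };    /* Mbit/s */
    int n = sizeof(requests) / sizeof(requests[0]);

    printf("bound-value (max) aggregate: %d Mbit/s\n", agg_max(requests, n));
    printf("sum-value aggregate:         %d Mbit/s\n", agg_sum(requests, n));
    return 0;
}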


>
> I know there are applications that need hard real time and "hard QoS"
> from the platform, and the idea of building up an OS around the QoS
> notion is "interesting"(really), but it just didn't see how I could
> apply that to an initial implementation like pmqos.
>

pmqos was one of the starting points of our study and triggered our thoughts
and ideas.
We would like to develop an in-kernel framework that aims at optimizing the
trade-off between power consumption and performance.
We are targeting embedded multimedia mobile devices, but all this work can be
applied to other kinds of devices like netbooks, laptops, desktops and even
servers.


>
> >
> > These considerations made us thinking about a rework on pm_qos in order
> to build
> > a more comprehensive framework able to:
> > - properly handle both restrictive and additive constraints
> > - provide an agreement mechanism that allows to notify constraint
> requester
> > about real system run-time capabilities.
> >
>
> It looks like some useful nuggets may come from your work.
>

we hope so.


>
> > These are basically the ideas behind our rework. As soon as it will be
> ready we
> > will deliver the code for review. In the meantime we are also producing a
> paper
> > describing in details theoretical foundation of the model we advance and
> its
> > implementation.
>
> very cool.
>
>
> >
> > > > I think that a good trade-off between "LKML control on new
> parameters" and
> > > > "platform extensibility" is hard to identify if we don't refine the
> concept of
> > > > QoS parameters first.
> > > > The QoS params defined by pm_qos should avoid to be not-sufficiently
> general,
> > > > to be really useful to applications, but also avoid to be too much
> abstract to
> > > > support platform-specific capabilities.
> > > > Since anyway the core pm_qos implementation is sufficiently general
> to handle
> > > > both abstract and platform-specific params, maybe we should better
> distinguish
> > > > among "abstract qos parameters" (AQP) and "platform-specific qos
> parameters"
> > > > PQP).
> > > >
> > > > AQP should be intended to be used by applications to assert abstract
> > > > requirements on system behaviors, while PQP can be added by platform
> code in
> > > > order to enable the "constrained power management paradigm" for
> > > > architecture/board specific devices.
> > >
> > > Maybe.
> >
> > In our model platform code can define both platform-specific and
> architecture
> > independent constraints. The formers, since are architecture specific,
> can be
> > exposed read-only to user-space (e.g. just to be used for debugging
> purpose).
> > The latter instead can be exposed read-write and applications can assert
> > requirements on them. These constraints should be sufficiently abstract
> to be
> > platform independent and generally usable to express high-level
> requirements for
> > an application (e.g. the network bandwidth a VoIP application requires).
> >
> > A device drivers is in charge to "maps" abstract parameters on its own
> specific
> > params and eventually on other platform-specific parameters, whenever
> they are
> > platform driver (e.g. a network driver can map a bandwidth request on a
> > platform-specific constraint on dma-latency or amba-bus throughput).
> >
> >
> > > > In this hypothesis the better solution would be to use a dynamic data
> structure
> > > > that will be initialized by the core itself to contain just the set
> of AQP that
> > > > has been reviewed and approved by LKML.
> > > > Platform code will then have the chance to add its own specific
> parameters too.
> > > >
> > > > Moreover we could imagine that AQP will be exported to user-land, in
> order to
> > > > be asserted by application software, while PQP may be hidden within
> the core
> > > > and accessible only by platform drivers.
> > > >
> > >
> > > I don't know if we can keep any PQP interfaces kernel only.  Policy
> > > managers really like to run in user mode, even if its just to set the
> > > constraints.
> >
> > This is not what is actually going on with both cpufreq and cpuidle.
> > Really few clients of pm_qos exist now, but among them the kernel-space
> running
> > policy seems to be the more widely adopted solution... may be because of
> > efficiency? User-space can still be in charge of choosing the
> policy/governor,
> > but then it's up to this piece of code to manage constraints requests.
> > In the case of it needs having a user-space policy, a simple kernel-space
> > wrapper defining a "forwarding policy" will be sufficient to expose the
> required
> > PQP to user-space.
>
>
> I need to think more about this when your code is available.
>

Ok, we are working hard on it!


>
> >
> > > > I agree: user-land accessible params should be platform-independent
> and define
> > > > a portable API for applications.
> > > > This requires also to have sufficiently abstract parameters: e.g.
> network
> > > > bandwidth can be easily asserted by an application while
> cpu-dma-latency is
> > > > perhaps too difficult to identify at application level.
> > >
> > > DMA latency is a somewhat sucky name for constraining CPU Idle /
> > > C-states, but I can't think of a better name.
> >
> > I understand, but: is it so common to have user-space code that needs to
> assert
> > such "real-time" requirements? It seems to us that user-land should be
> given
> > access only to sufficiently abstract constraints that roughly define
> system
> > requirements. While more architecture-specific constraints should come
> from
> > drivers. This should improve solution portability, isn't it?
> >
>
> cpu dma latency was chosen thinking it was an abstract / canonical
> notion across CPU architectures.  I guess idle-cpu-wake-up-latency may be
> slightly better.
>
> Hardware will only get better at being idle, the longer a system can
> do a low power idle the more likely user mode applications (policy
> managers mostly) will need API's to modify constraints.
>

You're right, but in our idea we would like to exploit the different power
states of each hardware device (not only the lowest idle state) and to select
at every instant the best state (mapped to an operating mode of the device),
in order to grant the required performance while wasting as little power as
we can.

So, both drivers and applications can set a constraint on an abstract
parameter (like bandwidth or latency), e.g.:
- Applications: ask for more network bandwidth or require a certain latency
  on a bus, for example to decode a video stream.
- Drivers: if a driver realizes that it can put its controlled device into a
  less consuming operating mode while still granting the QoS, it sets a
  constraint on the abstract parameter onto which its operating modes are
  mapped.

At this point of the game, applications' constraints are pushed down to
driver level, and the drivers (which have a deep and complete knowledge of
their internal state, operating modes and hardware capabilities)
"collaborate" to find an agreement on the new value of the abstract
parameter. In this way the best (optimal or sub-optimal) system-wide
configuration can be found.
Collaboration is achieved through shared knowledge of abstract parameters.
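
For comparison, a minimal sketch of how a driver can already assert such a
requirement through the existing pm_qos interface, assuming the
pm_qos_params.h calls discussed in this thread; the "mydrv" names and the
kbps values are invented:

/*
 * Sketch only: a driver asserts an abstract network throughput
 * requirement while streaming and drops it when the stream stops.
 * Whatever entity owns the resource reacts to the aggregated value.
 */
#include <linux/pm_qos_params.h>

#define MYDRV_QOS_NAME  "mydrv"     /* hypothetical requester name */

static void mydrv_stream_start(unsigned int kbps)
{
    /* ask the system to sustain at least @kbps of network throughput */
    pm_qos_add_requirement(PM_QOS_NETWORK_THROUGHPUT, MYDRV_QOS_NAME, kbps);
}

static void mydrv_stream_retune(unsigned int kbps)
{
    pm_qos_update_requirement(PM_QOS_NETWORK_THROUGHPUT, MYDRV_QOS_NAME, kbps);
}

static void mydrv_stream_stop(void)
{
    /* drop the requirement so the NIC may fall back to a cheaper point */
    pm_qos_remove_requirement(PM_QOS_NETWORK_THROUGHPUT, MYDRV_QOS_NAME);
}

User space can assert the same kind of requirement by opening the
corresponding misc device node (e.g. /dev/network_throughput) and writing an
s32 value; closing the file descriptor drops the request.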


>
> >
> > > As I type this reply I'm thinking an ok way could be to re-factor PMQoS
> > > into a constraint framework that exposes platform specific constraint
> > > ABI's (in some TBD sane manner---somehow), and set PMQoS on top of this
> > > keeping same ABI and KABI's stable.
> >
> > Are you thinking to something like an abstract API that each platform
> code
> > should implements in its own way?
>
> no.
>
> But, I would like to make it easier for different architectures to add
> stuff like voltage constraints that don't really fit in a QoS only world
> without making a miss of the ABI.  Basically I'd like to keep those
> sorts of things off to the side and protect the ABI stability.
>
> >
> > > I could use some input on the way folks anticipate a constraint
> > > infrastructure to be used.  How hot could the code paths be?  How
> complex
> > > could the dependencies and inter dependencies become?
> >
> > These are interesting questions! We should deepen those aspects, anyway
> we think
> > that dependencies, e.g. among constraints, can also be an interesting
> concept in
> > order to build modular solutions.
> > In instance the mapping that a driver needs to provide between an
> > (platform-independent) abstract constraint and a platform-specific one
> should
> > allow to write more portable drivers that:
> > 1. tune on platform-independent requirements
> > 2. translate abstract requirements into platform-specific ones
>
> True, with thinking about parameters, I try to think top down.  What
> does application x need to execute "properly".  If you go bottoms up
> from DPM or Op-points you end up with a mess only a mother could love.
>
> Keep in mind pm-qos is about best effort PM while providing what the
> applications need to execute properly.  This is not optimal PM control
> theory.  (yet)  Don't feel bad if there is push back on some ideas, the
> theme of providing an interface for enabling best effort PM will be
> protected.
>
> > > Am I thinking about taking a walk on a slippery slope?
> > >
> > > > > 2) is it the right thing to keep pm-qos-params a compile time array
> and
> > > > > control the growth of the ABI via these mailing lists or make it a
> list
> > > > > and enable driver creation of new parameters as they wish.
> > > >
> > > > In my opinion a mixed approach, using a dynamic data structure, could
> be more
> > > > interesting to target both requirements.
> > > >
> > > > > Both are good things for us to discuss on the list.
> > > >
> > > > We are tuned on this thread and happy to contribute to the
> discussion.
> > >
> > > very cool.
> >
> > It will be interesting even just to have some shared ideas on the table
> before
> > the upcoming LPC.
>
> That would be really cool to be ready to talk and work on this stuff at
> the LPC.  FWIW I've been grounded by cost saving measures and the LPC is
> about the only conference I'll get to this year.  I don't like it but
> thats the way it goes some years.
>

I hope to have the opportunity to join LPC too.


>
> we should keep in touch and bounce some ideas around.
>

sure


>
> --mgross
>
best regards,
Matteo

<-------------------------------------------------------------------->
  Matteo Carnevali < rekstorm at gmail dot com>
  Master student at Politecnico di Milano
<-------------------------------------------------------------------->




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Adding PM QoS parameters
  2009-04-27 12:50               ` Matteo Carnevali
@ 2009-04-27 20:46                 ` mark gross
  0 siblings, 0 replies; 14+ messages in thread
From: mark gross @ 2009-04-27 20:46 UTC (permalink / raw)
  To: Matteo Carnevali
  Cc: Patrick Bellasi, David Siorpaes, linux-pm, Premi Sanjeev,
	Stefano Bosisio

On Mon, Apr 27, 2009 at 02:50:22PM +0200, Matteo Carnevali wrote:
> On Wed, Apr 22, 2009 at 1:43 AM, mark gross <mgross@linux.intel.com> wrote:
> 
> > On Tue, Apr 21, 2009 at 10:08:18AM +0200, Derkling wrote:
> > > On Wed, Apr 15, 2009 at 20:35, mark gross <mgross@linux.intel.com>
> > wrote:
> > > > > Hi,
> > > > >    we are a group that since some months is working on "Constrained
> > Power
> > > > > Management" in STMicroelectronics.
> > > >
> >
> 
> Hi, I'm Matteo Carnevali and I'm working in ST with Patrick on the topic of
> CPM.
> 
> 

snip

> >
> > do a thought experiment around network bandwidth, where multiple
> > applications ask for max bandwidth from the NIC.  (just like they all
> > expect to 100% of the CPU resources)  You quickly get to a point where
> > (just like for CPU resources) that aggregating cumulative QoS requests as
> > a sum becomes silly.
> >
> 
> Yes, if many applications all ask for the MAX bandwidth from the NIC it
> makes no sense to aggregate in an additive manner... and the problem turns
> into a resource management one.
> 
> On the other hand, consider the case where the NIC has multiple operating
> points, defined by the local driver configuration, where each point is
> characterized, for instance, by a certain amount of power required to work
> and by a bound on the bandwidth it can provide, e.g.:
> - 1 watt power consumption -> max 12 Mbit/s
> - 2 watts power consumption -> max 24 Mbit/s
> - 3 watts power consumption -> max 54 Mbit/s (full bandwidth)
> In a dummy scenario like this it may be useful to aggregate the bandwidth
> requirements with an additive method instead of max/min.
> 

I'm not seeing it.  Unless all the applications you are planning to run
"know" the minimum network bandwidth they require, changing the
aggregation method is deployment specific.
...
um, perhaps adding a mechanism for changing the aggregation method would
be a useful addition to PMQoS?
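
One hypothetical shape such a mechanism could take (none of these names exist
in pm_qos; this only sketches the idea) is a per-parameter aggregation
selector next to the existing max/min behaviour:

/*
 * Hypothetical sketch: let the owner of a parameter pick the
 * aggregation method instead of hard-wiring max/min, so that
 * latency-style classes stay on min/max while a bandwidth-style
 * class can opt into summation.
 */
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/types.h>

enum cpm_aggregate {
    CPM_AGG_MIN,    /* restrictive constraint, e.g. latency   */
    CPM_AGG_MAX,    /* restrictive constraint, e.g. timeout   */
    CPM_AGG_SUM,    /* additive constraint, e.g. bandwidth    */
};

struct cpm_request {
    struct list_head node;
    s32 value;
};

/* recompute the system-wide target of one parameter from its requests */
static s32 cpm_aggregate(struct list_head *requests, enum cpm_aggregate how,
                         s32 default_value)
{
    struct cpm_request *req;
    s32 target = default_value;
    bool first = true;

    list_for_each_entry(req, requests, node) {
        switch (how) {
        case CPM_AGG_MIN:
            target = first ? req->value : min(target, req->value);
            break;
        case CPM_AGG_MAX:
            target = first ? req->value : max(target, req->value);
            break;
        case CPM_AGG_SUM:
            target = first ? req->value : target + req->value;
            break;
        }
        first = false;
    }
    return target;
}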
 



snip

> >
> > Hardware will only get better at being idle, the longer a system can
> > do a low power idle the more likely user mode applications (policy
> > managers mostly) will need API's to modify constraints.
> >
> 
> You're right, but in our idea we would like to exploit the different power
> states of each hardware device (not only the lowest idle state) and to
> select at every instant the best state (mapped to an operating mode of the
> device), in order to grant the required performance while wasting as little
> power as we can.
> 
> So, both drivers and applications can set a constraint on an abstract
> parameter (like bandwidth or latency), e.g.:
> - Applications: ask for more network bandwidth or require a certain latency
>   on a bus, for example to decode a video stream.
> - Drivers: if a driver realizes that it can put its controlled device into
>   a less consuming operating mode while still granting the QoS, it sets a
>   constraint on the abstract parameter onto which its operating modes are
>   mapped.
> 
> At this point of the game, applications' constraints are pushed down to
> driver level, and the drivers (which have a deep and complete knowledge of
> their internal state, operating modes and hardware capabilities)
> "collaborate" to find an agreement on the new value of the abstract
> parameter. In this way the best (optimal or sub-optimal) system-wide
> configuration can be found.
> Collaboration is achieved through shared knowledge of abstract parameters.
>

But, this collaboration agreement is what PMQoS was meant to provide
(in a best effort way).  What's missing from your point of view?

--mgross

 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Adding PM QoS parameters
@ 2009-04-30 12:28 Patrick Bellasi
  0 siblings, 0 replies; 14+ messages in thread
From: Patrick Bellasi @ 2009-04-30 12:28 UTC (permalink / raw)
  To: linux-pm



mark gross <mgross <at> linux.intel.com> writes:

> I don't want to see PM-QoS or constraint based PM to degenerate into a
> DPM OppPoint type of thing.  This statement reads that you are not
> aggregating the PMQoS requests in a sensible manner.  (i.e. attempting
> to code DPM styled PM using pmqos interfaces)
>
> One thing that is core to PMQoS and Constraint based PM is that there is
> an assumed partial ordering of the PM states.

Is that assumption really feasible?

Of course each device can define a _local_ partial ordering of its own power
states, but what can be assumed about the system-wide power state?
If we consider device interdependencies, it may be that a _local_
optimization has an indirect impact on some other device's performance, and
thus the system-wide state could turn out not to be the overall optimal one.

Let us consider two devices (D1 and D2) whose _local_ optimization policies
are both influenced by the same QoS parameter (C1).
Let us also consider someone asserting a constraint (C1<c) on that parameter.
It could happen, for instance, that D1 is able to fulfill this requirement by
reconfiguring itself into a compatible operating mode, while D2 cannot
respect the constraint.
In this case the required QoS level (C1<c) cannot, in the end, be granted by
the system (due to D2 being unable to satisfy it), yet the D1 local policy
has configured its device to work according to the required service level...
in the end, perhaps, we spend "more power for nothing".

This is just a dummy example, and we are still trying to map it onto a real
scenario, but we have the feeling that the "composition" of locally optimal
configurations (based on a local partial ordering) may not be sufficient to
obtain a globally optimal configuration as well.

> Absolute or specified performance settings are explicitly not part of
> PMQoS.

I agree with your view of what PMQoS should be: it should definitely support
a distributed control model. In this model each driver has a local
optimization policy, usually trying to reduce power consumption, and PMQoS
just delivers some information system-wide about the expected QoS, in order
to constrain the local policies.
Therefore, such a framework will not be in charge of directly specifying
performance settings in the DPM style.

However, I think that it should also provide some kind of support for the
identification of feasible system-wide optimal configurations.
We have some ideas on how such support could be effectively implemented and,
in this sense, every contribution that comes out of this discussion is of
course welcome.
Certainly our idea is not to replace the best-effort approach of pm_qos,
which is also at the base of its simplicity, but at least to additionally
provide support for a "distributed agreement" approach.
Such an approach, even if generally not necessary, could be better exploited
in some specific embedded application contexts, e.g. on complex,
multi-functional, new-generation SoC-based multimedia mobile devices.


> > Once a requested level is achieved the requester should be
> > notified for possible reconfiguration. It could be via an
> > optional registration.
>
> performance / power state entry notification?
> I think we should be careful with that idea.

A notification after entering could be difficult, and could also imply energy
wastage whenever the required state turns out not to be feasible.
Perhaps it could be simpler to verify whether the required performance level
can be granted and _only then_ either:
- notify the affected subsystems so that they grant the required service
  level, or
- notify the service level requester that the required configuration is not
  feasible.
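
Just to make the idea concrete, a purely illustrative sketch of such a
check-then-commit agreement; none of these structures or calls exist in the
kernel, they only stand in for the proposed mechanism:

/*
 * Illustrative sketch: each participating device exposes a callback
 * saying whether it can honour a proposed QoS level; the core either
 * commits the new level everywhere or rejects the request up front,
 * so nobody reconfigures (and burns power) for an unreachable level.
 */
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/types.h>

struct qos_participant {
    struct list_head node;
    /* return 0 if @level is achievable by this device */
    int (*check)(struct qos_participant *p, s32 level);
    /* called only once every participant has acked @level */
    void (*commit)(struct qos_participant *p, s32 level);
};

static int qos_request_level(struct list_head *participants, s32 level)
{
    struct qos_participant *p;
    int ret;

    /* phase 1: feasibility check, nothing is reconfigured yet */
    list_for_each_entry(p, participants, node) {
        ret = p->check(p, level);
        if (ret)
            return ret;    /* requester learns it cannot be met */
    }

    /* phase 2: everyone agreed, now it is worth spending the power */
    list_for_each_entry(p, participants, node)
        p->commit(p, level);

    return 0;
}

Phase 1 costs nothing in power; devices only reconfigure in phase 2, once the
level is known to be achievable.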


> > We could start with a smaller set, e.g.:
> > - Interrupt latency (in lieu of DMA latency)
> > - Sleep latency (to control sleep in absence of cpuidle)
> > - Cpu frequency
> > - Cpu voltage
>
> These are bottom up performance parameters.  Lets first go top down and
> keep in mind the partial ordering component of the system.  You can only
> constrain a min or max value not both.  Typically you constrain the
> lowest platform power a parameter is allowed to enter.

If we consider sufficiently abstract parameters (top-down approach) it could
be difficult to understand "what" maps to lower power consumption.
Perhaps we could identify that mapping if we considered just one device.
But if we consider a whole platform, and the interdependencies among
different devices, then it may be complex to foresee how a local optimization
impacts system-wide power consumption.
For instance, the "run-to-idle" approach adopted by the ondemand cpufreq
governor shows that a locally non-optimal policy decision can have
system-wide benefits (e.g. longer idle times).

> If you are looking to constrain the highest power setting the platform
> can go too, then you are talking PMQoS when really thinking DPM.

I agree.

> > I am not sure if I understood this completely, but I believe
> > that abstract -> specific mapping should be done at system level.
> > Letting drivers define them, may not be portable; and might lead
> > to more confusion.
>
> Parameters need to be defined at the application (solution) level and
> exposed to the drivers to enable them to make the best choice.  Not the
> other way around.  So if by system level you mean "solution" level then
> we agree, but over email I'm not sure if we do.

Along with "solution"-level parameters (abstract parameters), it could be
interesting to also have system-level parameters.
These would be defined by drivers and platform code in order to track
devices' functional dependencies.

Let's have an example:
- solution param: mems-sample-rate
    which is an abstract parameter used, for instance, by applications to
    assert a QoS level
    (e.g. how frequently I expect to read the MEMS accelerations)
- system-level param: i2c-bus-bandwidth
    - mems driver: asserts a constraint on that param to translate the
        abstract parameter request according to the specific device
        capabilities (e.g. we are attached to an I2C bus)
    - platform code: specifies the platform-specific I2C channel
        corresponding to that device. This can be done, for instance, by
        defining the system-level param as a device resource.

Such a solution should allow translating abstract "solution" parameters into
platform-specific ones in a sufficiently general and portable way.
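
A self-contained toy sketch of that two-level mapping; the
cpm_update_constraint() call and the parameter names are purely hypothetical
stand-ins for the proposed framework:

#include <stdio.h>

/* stand-in for the proposed framework call (hypothetical) */
static void cpm_update_constraint(const char *param, unsigned int value)
{
    printf("constraint: %s >= %u bytes/s\n", param, value);
}

/* platform code: binds the MEMS device to a concrete bus parameter */
static const char *mems_bus_param = "i2c0-bus-bandwidth";

/* driver code: translates the abstract "mems-sample-rate" request */
static void mems_set_sample_rate(unsigned int samples_per_sec)
{
    /* assume 6 data bytes plus 2 addressing bytes per 3-axis sample */
    unsigned int bytes_per_sec = samples_per_sec * 8;

    cpm_update_constraint(mems_bus_param, bytes_per_sec);
}

int main(void)
{
    mems_set_sample_rate(100);    /* application asks for 100 Hz */
    return 0;
}

The point is that the MEMS driver only speaks in abstract terms; which bus
channel actually absorbs the bandwidth request is a board-level decision.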

> > Generic params that impact the apps could/should be an array;
> > while arch/plat specific ones could be a linked list.
>
> I'll look at this.

I personally would prefer a common solution: "solution" params could be both
statically defined and pre-loaded into the same dynamic data structure that
will host platform-specific params too.
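
A rough sketch of what that common, pre-loaded structure could look like; all
names and default values here are invented for illustration and are not
existing kernel code:

/*
 * Sketch: one dynamic list of parameters.  The LKML-reviewed abstract
 * parameters are pre-loaded by the core at init time; platform code
 * may register additional, platform-specific ones through the same
 * call, possibly marked read-only towards user space.
 */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

struct qos_param {
    struct list_head node;
    const char *name;
    s32 default_value;
    bool user_writable;    /* abstract params rw, platform params ro */
};

static LIST_HEAD(qos_params);

static int qos_register_param(const char *name, s32 def, bool user_writable)
{
    struct qos_param *p = kzalloc(sizeof(*p), GFP_KERNEL);

    if (!p)
        return -ENOMEM;
    p->name = name;
    p->default_value = def;
    p->user_writable = user_writable;
    list_add_tail(&p->node, &qos_params);
    return 0;
}

static int __init qos_core_init(void)
{
    /* pre-load the reviewed abstract parameters (defaults are placeholders) */
    qos_register_param("cpu-dma-latency", 2000000, true);
    qos_register_param("network-latency", 2000000, true);
    qos_register_param("network-throughput", 0, true);
    return 0;
}
core_initcall(qos_core_init);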

> Keep in mind I am against the DPM-ification of this design.  At a basic
> level the Linux OS is a best effort OS and although you can always bang
> some CR's to set power states, that isn't useful as platform independent
> or common code.

That's true, but for some time now Linux has been courting real-time and
embedded systems. On such resource-constrained systems, searching for
system-wide optimal solutions that are "granted as far as possible" is one of
the main challenges.
If we are able to extend pm_qos to be suitable not only for general purpose
systems but also to fit these application scenarios well, I think we will get
interesting returns from our efforts.

> --mgross
Patrick

-- 
#include <best/regards.h>

Patrick Bellasi <bellasi at elet dot polimi dot it>
PhD student at Politecnico di Milano


GnuPG     0x72ABC1EE (keyserver.linux.it)
    pub      1024D/72ABC1EE 2003-12-04
    Key fingerprint = 3958 7B5F 36EC D1F8 C752
                             9589 C3B7 FD49 72AB C1EE




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Adding PM QoS parameters
  2009-04-21 20:02 ` Premi, Sanjeev
  2009-04-22 16:35   ` mark gross
@ 2009-04-27 12:41   ` Matteo Carnevali
  1 sibling, 0 replies; 14+ messages in thread
From: Matteo Carnevali @ 2009-04-27 12:41 UTC (permalink / raw)
  To: linux-pm

Premi, Sanjeev <premi <at> ti.com> writes:

> 
> Sorry, picking-up thread late...
> 

Hi, I'm Matteo Carnevali and I'm working in ST with Patrick on the topic of CPM.

> 
> Would like to see more of this before real comment; because I
> am trying to see what could happen if 2 drivers sharing a
> common parent clock try to change the freq in 2 different
> directions.
> 

Of course, the QoS should always be granted, at least as long as the system
resources are made available by the hardware.

In a real context it is not so common that drivers (other than the CPU
drivers such as cpufreq or cpuidle) explicitly ask to lower the CPU
frequency; the cpufreq driver can do that, but it relies on the governor,
which monitors the CPU load and tunes the frequency accordingly.

Moreover, the CPU frequency cannot be considered an abstract parameter: it is
something very tied to the physical architecture. A device driver, in our
idea, cannot state "I now need more than 200 MHz from the CPU"...

However, two conflicting requests on an abstract parameter will actually be
treated as a conflict, and the framework core (the controller), or a governor
associated with the framework, will reject such a proposal.
Anyway, the framework core should not be seen as a central entity that is in
control and tells each driver how to act under certain circumstances; that is
a feature of "centralized power management" and it is not our idea.

Moreover, it is also quite unlikely that two requests to change the value of
an abstract parameter (i.e. to set a constraint on a parameter) happen at
exactly the same time.

> 
> Once a requested level is achieved the requester should be
> notified for possible reconfiguration. It could be via an
> optional registration.
>

Yes, this can be done with the notification chain.
Writing about the best-effort approach of pmqos, we wanted to be sure we had
correctly pointed out a possible limitation of pmqos itself, and we would
like to know if you agree on this.
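
For reference, registering on that chain from a driver looks roughly like the
sketch below, assuming the pm_qos_add_notifier()/pm_qos_remove_notifier()
calls from pm_qos_params.h; the mydrv names are invented:

/*
 * Sketch: the notifier fires whenever the aggregate target of the
 * class changes, carrying the new target value.
 */
#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/pm_qos_params.h>

static int mydrv_qos_notify(struct notifier_block *nb, unsigned long value,
                            void *data)
{
    /* @value is the new aggregate target for the class */
    pr_info("mydrv: cpu_dma_latency target is now %lu usec\n", value);
    return NOTIFY_OK;
}

static struct notifier_block mydrv_qos_nb = {
    .notifier_call = mydrv_qos_notify,
};

static int mydrv_register_qos_notifier(void)
{
    return pm_qos_add_notifier(PM_QOS_CPU_DMA_LATENCY, &mydrv_qos_nb);
}

static void mydrv_unregister_qos_notifier(void)
{
    pm_qos_remove_notifier(PM_QOS_CPU_DMA_LATENCY, &mydrv_qos_nb);
}

Note that the chain reports the new aggregate target; it does not by itself
tell a particular requester whether its own request could be honoured, which
is exactly the limitation being discussed here.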

> 
> So, again we are again looking at means to add more constraints
> and also the "kind" of constraint.
> 

> 
> We could start with a smaller set, e.g.:
> - Interrupt latency (in lieu of DMA latency)  
> - Sleep latency (to control sleep in absence of cpuidle)
> - Cpu frequency
> - Cpu voltage
> 
> Of course, when we extend this to SoCs with multiple
> CPUs then we do have (possible) need of multiplicity
> if one of these has to act as "master".
> 
> e.g. OMAP3 allows ARM and IVA voltages and frequency
> to be changed independently.
> 

Specific hardware and platform features should be kept confined to the driver
code and not brought directly to the framework level, i.e. translated into
parameters; on the other hand, the mapping between device operating modes
(aka the local driver configuration) and abstract parameters should be
performed inside the device driver in order to support the framework.

> 
> On the first read, I believe abstract parameter+level combination
> can help us achieve the application portability across architectures
> and allow specific map most suitable for each arch.
> 

> 
> I am not sure if I understood this completely, but I believe
> that abstract -> specific mapping should be done at system level.
> Letting drivers define them, may not be portable; and might lead
> to more confusion.
> 

We can consider having two mappings:
1) mapping between the driver's local configuration and the abstract parameters.
Hence, if the abstract parameters are well defined and a device driver is well
written to support them and correctly maps its operating modes to the
parameters, there should not be confusion.
Let's consider a NIC driver with 3 operating modes and just one abstract 
parameter, the bandwidth:

op mode   ---mapping---    bandwidth 

idle <----------------->   0
low power <------------>   0-20 Mbit/s
max power <------------>   0-54 Mbit/s

this is just a dummy example... (a code sketch of such a mapping table
follows after point 2 below).

Drivers using this framework should of course correctly implement the
framework API.

2) mapping between the platform code and the abstract parameters, which
allows the hierarchical binding of platform-specific resources to the
abstract parameters.

If we consider the example of the NIC driver, let's say that it uses an SPI
bus to transfer data to the SoC: the NIC declares it wants at least x
bandwidth on the bus by setting a constraint on that abstract parameter. At
this point the platform code has the job of assigning the correct SPI channel
for that request, and it can do so thanks to its mapping with the abstract
parameter.
The NIC driver itself does not know which SPI channel it is attached to. It
just sets a constraint on an abstract parameter.
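
Here, as promised under point 1, is a dummy sketch of the driver-side table
that ties each local operating mode of the NIC to the range of the abstract
bandwidth parameter it can serve; structure and function names are
illustrative, not framework API:

/*
 * Toy sketch of mapping #1: the driver's private table from its own
 * operating modes to the bandwidth bound each mode can provide, and
 * the lookup that picks the cheapest mode satisfying the aggregate.
 */
#include <stdio.h>

enum nic_op_mode { NIC_IDLE, NIC_LOW_POWER, NIC_MAX_POWER };

struct nic_op_point {
    enum nic_op_mode mode;
    unsigned int max_kbps;    /* upper bound of bandwidth it can provide */
};

static const struct nic_op_point nic_op_table[] = {
    { NIC_IDLE,      0     },
    { NIC_LOW_POWER, 20000 },    /* up to 20 Mbit/s */
    { NIC_MAX_POWER, 54000 },    /* up to 54 Mbit/s */
};

/* pick the cheapest operating mode that still satisfies the aggregate */
static enum nic_op_mode nic_mode_for_bandwidth(unsigned int kbps)
{
    size_t i;

    for (i = 0; i < sizeof(nic_op_table) / sizeof(nic_op_table[0]); i++)
        if (kbps <= nic_op_table[i].max_kbps)
            return nic_op_table[i].mode;
    return NIC_MAX_POWER;    /* request exceeds the hardware: do our best */
}

int main(void)
{
    printf("mode for 12 Mbit/s: %d\n", nic_mode_for_bandwidth(12000));
    return 0;
}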

> 
> Agree.
> 

> 
> ...if we get rope closer to ground, gravity doesn't harm much!
> 

> 
> Generic params that impact the apps could/should be an array;
> while arch/plat specific ones could be a linked list.
> 
> Best regards,
> Sanjeev
> 


best regards,
Matteo


<-------------------------------------------------------------------->
  Matteo Carnevali < rekstorm at gmail dot com>
  Master student at Politecnico di Milano

<-------------------------------------------------------------------->


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Adding PM QoS parameters
  2009-04-21 20:02 ` Premi, Sanjeev
@ 2009-04-22 16:35   ` mark gross
  2009-04-27 12:41   ` Matteo Carnevali
  1 sibling, 0 replies; 14+ messages in thread
From: mark gross @ 2009-04-22 16:35 UTC (permalink / raw)
  To: Premi, Sanjeev; +Cc: linux-pm

On Wed, Apr 22, 2009 at 01:32:58AM +0530, Premi, Sanjeev wrote:
> Sorry, picking-up thread late...
> 
> > Date: Tue, 21 Apr 2009 10:08:18 +0200
> > From: Derkling <derkling@gmail.com>
> > Subject: Re: [linux-pm] Adding PM QoS parameters
> > To: linux-pm@lists.osdl.org
> > Cc: Matteo Carnevali <rekstorm@gmail.com>,	David Siorpaes
> > 	<david.siorpaes@st.com>,	Stefano Bosisio 
> > <stebosisio@gmail.com>
> > Message-ID:
> > 	<aa8983610904210108v6490ee58hfa47ef126dca0f7d@mail.gmail.com>
> > Content-Type: text/plain; charset=ISO-8859-1
> > 
> > On Wed, Apr 15, 2009 at 20:35, mark gross 
> > <mgross@linux.intel.com> wrote:
> > > > Hi,
> > > >    we are a group that since some months is working on 
> > "Constrained Power
> > > > Management" in STMicroelectronics.
> > >
> > > I would like to hear what your ideas are around applying 
> > constraints to
> > > power management!  The notion of constraint based PM has 
> > been rattling
> > > around, in my head and elsewhere, for a while now (a couple 
> > of years).
> > > PMQoS is just an early application of it.  I think a lot 
> > more could be
> > > done in this area.
> > 
> > We are working on the concept of constrained power management 
> > since the times
> > of DPM.
> > We ported that framework to Nomadik platform and then we 
> > started reasoning on a
> > different implementation with the aims of overcoming its main 
> > drawbacks.
> > 
> > In the meantime pm_qos had been released and since then, this 
> > topic, i.e.
> > constrained power management (CPM), has officially become the 
> > main subject of my
> > PhD study; I agree with you that a lot can be done in this area.
> > 
> > In the last few months have focused on these tasks:
> > 1. define a formal model to state the CPM problem (based both 
> > on linear
> > programming and constraint programming)
> > 2. evaluate pm_qos with respect to this formal model and highlight its
> > potential weaknesses
> > 3. identify an extension/refactoring of pm_qos in order to 
> > overcame its
> > limitations and perhaps advance the proposal for a more 
> > general implementation
> > 4. code a first release of the proposal and submit it to the 
> > community for a
> > review ... and now we are at THIS fourth step.
> > 
> > > Recently I've been thinking about re-factoring PMQoS in 
> > possibly crazy
> > > ways with the goal of somehow supporting a more generic constraint
> > > notion without making the PMQoS ABI into a free for all.
> > 
> > Our re-factoring is for sure quite crazy. The proposal 
> > inherits many ideas and
> > concepts from both DPM and PM_QoS, perhaps only the strenghts 
> > of these two
> > frameworks:
> > - from DPM the concept of Operating Points (OP) (but don't 
> > worry: it has been
> > 	highly reworked)
> > - from PM_QoS the concept of "aggregation function" for 
> > constraint management
> > 
> > The idea is still to consider the framework to be simply a 
> > "constraint manager"
> > that supports a "distributed power management model".
> > In such a model each driver usually runs its own local 
> > optimization policy
> > (e.g. cpufreq), but at the same time exchanges some 
> > information with the rest of
> > the system in order to either require or provide QoS levels
> > (e.g. admitted latency).
> 
> Would like to see more of this before real comment; because I
> am trying to see what could happen if 2 drivers sharing a
> common parent clock try to change the freq in 2 different
> directions.

I don't want to see PM-QoS or constraint based PM degenerate into a
DPM OppPoint type of thing.  This statement reads as if you are not
aggregating the PMQoS requests in a sensible manner (i.e. attempting
to code DPM styled PM using pmqos interfaces).

One thing that is core to PMQoS and constraint based PM is that there is
an assumed partial ordering of the PM states.  Absolute or specified
performance settings are explicitly not part of PMQoS.
 
> > 
> > In this big picture PM_QoS works quite well, but perhaps we 
> > have spotted two
> > limitations to be better investigated:
> > 1. it is a "best-effort" framework
> > 2. it does not handle properly "additive constraints"
> > 
> > * Best-effort: pm_qos cannot grant that a required QoS level 
> > can be effectively
> > provided, and even worst pm_qos don't provide mechanisms to 
> > notify a constraint
> > requester that the required qos level cannot be satisfied by 
> > other components.
> > This can turn into a "misconfiguration" of a device: its 
> > local policy configure
> > itself expecting the required level even when actually it's 
> > impossible to get
> > that.
> > No one notifies the device about the effective QoS levels that can be
> > reached; all that other devices can do it is only the best 
> > they can in order to
> > grant the required level.
> > Can that leads to an only sub-optimal power optimization for 
> > the device
> > requesting the constraint? We have this suspect...
> 
> Once a requested level is achieved the requester should be
> notified for possible reconfiguration. It could be via an
> optional registration.

performance / power state entry notification?
I think we should be careful with that idea.

> 
> > 
> > * additive constraints: pm_qos perhaps does not properly 
> > aggregate constraints
> > requests on shared resources.
> > If multiple applications assert a bandwidth (or throughput) 
> > request: pm_qos
> > aggregate them considering the MAX (or MIN) of the collected requests:
> > this seems to be not properly correct.
> > We think that in this case (e.g. bandwidth) the driver of the 
> > service provider
> > should be aware of the "accumulated" requests to properly 
> > configure itself and
> > satisfy all concurrent requirements.
> > We define "additive constraints" those that refer to shared resources
> > (e.g. bandwidth). They differ from "restrictive constraints" 
> > (e.g. latency)
> > since the aggregation function cannot be a simple 
> > "bound-value" (i.e. max
> > or min) but should be an "sum-value" in order to get the real 
> > system-wide QoS
> > requirement.
> > These considerations made us thinking about a rework on 
> > pm_qos in order to build
> > a more comprehensive framework able to:
> > - properly handle both restrictive and additive constraints
> > - provide an agreement mechanism that allows to notify 
> > constraint requester
> > about real system run-time capabilities.
> 
> So, again we are again looking at means to add more constraints
> and also the "kind" of constraint.
> 
> > 
> > These are basically the ideas behind our rework. As soon as 
> > it will be ready we
> > will deliver the code for review. In the meantime we are also 
> > producing a paper
> > describing in details theoretical foundation of the model we 
> > advance and its
> > implementation.
> > 
> > > > I think that a good trade-off between "LKML control on 
> > new parameters" and
> > > > "platform extensibility" is hard to identify if we don't 
> > refine the concept of
> > > > QoS parameters first.
> > > > The QoS params defined by pm_qos should avoid to be 
> > not-sufficiently general,
> > > > to be really useful to applications, but also avoid to be 
> > too much abstract to
> > > > support platform-specific capabilities.
> 
> We could start with a smaller set, e.g.:
> - Interrupt latency (in lieu of DMA latency)  
> - Sleep latency (to control sleep in absence of cpuidle)
> - Cpu frequency
> - Cpu voltage

These are bottom-up performance parameters.  Let's first go top down and
keep in mind the partial ordering component of the system.  You can only
constrain a min or a max value, not both.  Typically you constrain the
lowest platform power a parameter is allowed to enter.

If you are looking to constrain the highest power setting the platform
can go to, then you are talking PMQoS while really thinking DPM.



> 
> Of course, when we extend this to SoCs with multiple
> CPUs then we do have (possible) need of multiplicity
> if one of these has to act as "master".
> 
> e.g. OMAP3 allows ARM and IVA voltages and frequency
> to be changed independently.
> 
> > > > Since anyway the core pm_qos implementation is 
> > sufficiently general to handle
> > > > both abstract and platform-specific params, maybe we 
> > should better distinguish
> > > > among "abstract qos parameters" (AQP) and 
> > "platform-specific qos parameters"
> > > > PQP).
> > > >
> > > > AQP should be intended to be used by applications to 
> > assert abstract
> > > > requirements on system behaviors, while PQP can be added 
> > by platform code in
> > > > order to enable the "constrained power management paradigm" for
> > > > architecture/board specific devices.
> > >
> > > Maybe.
> 
> On the first read, I believe abstract parameter+level combination
> can help us achieve the application portability across architectures
> and allow specific map most suitable for each arch.
> 
> > In our model platform code can define both platform-specific 
> > and architecture
> > independent constraints. The formers, since are architecture 
> > specific, can be
> > exposed read-only to user-space (e.g. just to be used for 
> > debugging purpose).
> > The latter instead can be exposed read-write and applications 
> > can assert
> > requirements on them. These constraints should be 
> > sufficiently abstract to be
> > platform independent and generally usable to express 
> > high-level requirements for
> > an application (e.g. the network bandwidth a VoIP application 
> > requires).
> > 
> > A device drivers is in charge to "maps" abstract parameters 
> > on its own specific
> > params and eventually on other platform-specific parameters, 
> > whenever they are
> > platform driver (e.g. a network driver can map a bandwidth 
> > request on a
> > platform-specific constraint on dma-latency or amba-bus throughput).
> > 
> 
> I am not sure if I understood this completely, but I believe
> that abstract -> specific mapping should be done at system level.
> Letting drivers define them, may not be portable; and might lead
> to more confusion.

Parameters need to be defined at the application (solution) level and
exposed to the drivers to enable them to make the best choice.  Not the
other way around.  So if by system level you mean "solution" level then
we agree, but over email I'm not sure if we do.

 
> > > > In this hypothesis the better solution would be to use a 
> > dynamic data structure
> > > > that will be initialized by the core itself to contain 
> > just the set of AQP that
> > > > has been reviewed and approved by LKML.
> > > > Platform code will then have the chance to add its own 
> > specific parameters too.
> > > >
> > > > Moreover we could imagine that AQP will be exported to 
> > user-land, in order to
> > > > be asserted by application software, while PQP may be 
> > hidden within the core
> > > > and accessible only by platform drivers.
> > > >
> > >
> > > I don't know if we can keep any PQP interfaces kernel only.  Policy
> > > managers really like to run in user mode, even if its just 
> > to set the
> > > constraints.
> > 
> > This is not what is actually going on with both cpufreq and cpuidle.
> > Really few clients of pm_qos exist now, but among them the 
> > kernel-space running
> > policy seems to be the more widely adopted solution... may be 
> > because of
> > efficiency? User-space can still be in charge of choosing the 
> > policy/governor,
> > but then it's up to this piece of code to manage constraints requests.
> > In the case of it needs having a user-space policy, a simple 
> > kernel-space
> > wrapper defining a "forwarding policy" will be sufficient to 
> > expose the required
> > PQP to user-space.
> > 
> > 
> > > > I agree: user-land accessible params should be 
> > platform-independent and define
> > > > a portable API for applications.
> > > > This requires also to have sufficiently abstract 
> > parameters: e.g. network
> > > > bandwidth can be easily asserted by an application while 
> > cpu-dma-latency is
> > > > perhaps too difficult to identify at application level.
> > >
> > > DMA latency is a somewhat sucky name for constraining CPU Idle /
> > > C-states, but I can't think of a better name.
> > 
> > I understand, but: is it so common to have user-space code 
> > that needs to assert
> > such "real-time" requirements? It seems to us that user-land 
> > should be given
> > access only to sufficiently abstract constraints that roughly 
> > define system
> > requirements. While more architecture-specific constraints 
> > should come from
> > drivers. This should improve solution portability, isn't it?
> 
> Or, the drivers should 'map' their constraints onto arch
> specific constraints making them portable. e.g. we wouldn't like
> an ethernet driver tied to one arch/platform... even with most
> of this mapping in a platform specific code. 
> 
> > 
> > 
> > > As I type this reply I'm thinking an ok way could be to 
> > re-factor PMQoS
> > > into a constraint framework that exposes platform specific 
> > constraint
> > > ABI's (in some TBD sane manner---somehow), and set PMQoS on 
> > top of this
> > > keeping same ABI and KABI's stable.
> 
> Agree.
> 
> > 
> > Are you thinking to something like an abstract API that each 
> > platform code
> > should implements in its own way?
> > 
> > > I could use some input on the way folks anticipate a constraint
> > > infrastructure to be used.  How hot could the code paths 
> > be?  How complex
> > > could the dependencies and inter dependencies become?
> > 
> > These are interesting questions! We should deepen those 
> > aspects, anyway we think
> > that dependencies, e.g. among constraints, can also be an 
> > interesting concept in
> > order to build modular solutions.
> > In instance the mapping that a driver needs to provide between an
> > (platform-independent) abstract constraint and a 
> > platform-specific one should
> > allow to write more portable drivers that:
> > 1. tune on platform-independent requirements
> > 2. translate abstract requirements into platform-specific ones
> > 
> > > Am I thinking about taking a walk on a slippery slope?
> 
> ...if we get rope closer to ground, gravity doesn't harm much!

true.

> 
> > >
> > > > > 2) is it the right thing to keep pm-qos-params a 
> > compile time array and
> > > > > control the growth of the ABI via these mailing lists 
> > or make it a list
> > > > > and enable driver creation of new parameters as they wish.
> > > >
> > > > In my opinion a mixed approach, using a dynamic data 
> > structure, could be more
> > > > interesting to target both requirements.
> > > >
> > > > > Both are good things for us to discuss on the list.
> > > >
> > > > We are tuned on this thread and happy to contribute to 
> > the discussion.
> 
> Generic params that impact the apps could/should be an array;
> while arch/plat specific ones could be a linked list.

I'll look at this.  

Keep in mind I am against the DPM-ification of this design.  At a basic
level the Linux OS is a best effort OS and although you can always bang
some CR's to set power states, that isn't useful as platform independent
or common code.

--mgross

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Adding PM QoS parameters
       [not found] <mailman.459.1240339694.10269.linux-pm@lists.linux-foundation.org>
@ 2009-04-21 20:02 ` Premi, Sanjeev
  2009-04-22 16:35   ` mark gross
  2009-04-27 12:41   ` Matteo Carnevali
  0 siblings, 2 replies; 14+ messages in thread
From: Premi, Sanjeev @ 2009-04-21 20:02 UTC (permalink / raw)
  To: linux-pm

Sorry, picking-up thread late...

> Date: Tue, 21 Apr 2009 10:08:18 +0200
> From: Derkling <derkling@gmail.com>
> Subject: Re: [linux-pm] Adding PM QoS parameters
> To: linux-pm@lists.osdl.org
> Cc: Matteo Carnevali <rekstorm@gmail.com>,	David Siorpaes
> 	<david.siorpaes@st.com>,	Stefano Bosisio 
> <stebosisio@gmail.com>
> Message-ID:
> 	<aa8983610904210108v6490ee58hfa47ef126dca0f7d@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
> 
> On Wed, Apr 15, 2009 at 20:35, mark gross 
> <mgross@linux.intel.com> wrote:
> > > Hi,
> > >    we are a group that since some months is working on 
> "Constrained Power
> > > Management" in STMicroelectronics.
> >
> > I would like to hear what your ideas are around applying 
> constraints to
> > power management!  The notion of constraint based PM has 
> been rattling
> > around, in my head and elsewhere, for a while now (a couple 
> of years).
> > PMQoS is just an early application of it.  I think a lot 
> more could be
> > done in this area.
> 
> We are working on the concept of constrained power management 
> since the times
> of DPM.
> We ported that framework to Nomadik platform and then we 
> started reasoning on a
> different implementation with the aims of overcoming its main 
> drawbacks.
> 
> In the meantime pm_qos had been released and since then, this 
> topic, i.e.
> constrained power management (CPM), has officially become the 
> main subject of my
> PhD study; I agree with you that a lot can be done in this area.
> 
> In the last few months have focused on these tasks:
> 1. define a formal model to state the CPM problem (based both 
> on linear
> programming and constraint programming)
> 2. evaluate pm_qos with respect to this formal model and highlight its
> potential weaknesses
> 3. identify an extension/refactoring of pm_qos in order to 
> overcame its
> limitations and perhaps advance the proposal for a more 
> general implementation
> 4. code a first release of the proposal and submit it to the 
> community for a
> review ... and now we are at THIS fourth step.
> 
> > Recently I've been thinking about re-factoring PMQoS in 
> possibly crazy
> > ways with the goal of somehow supporting a more generic constraint
> > notion without making the PMQoS ABI into a free for all.
> 
> Our re-factoring is for sure quite crazy. The proposal 
> inherits many ideas and
> concepts from both DPM and PM_QoS, perhaps only the strengths 
> of these two
> frameworks:
> - from DPM the concept of Operating Points (OP) (but don't 
> worry: it has been
> 	highly reworked)
> - from PM_QoS the concept of "aggregation function" for 
> constraint management
> 
> The idea is still to consider the framework to be simply a 
> "constraint manager"
> that supports a "distributed power management model".
> In such a model each driver usually runs its own local 
> optimization policy
> (e.g. cpufreq), but at the same time exchanges some 
> information with the rest of
> the system in order to either require or provide QoS levels
> (e.g. admitted latency).

Would like to see more of this before real comment; because I
am trying to see what could happen if 2 drivers sharing a
common parent clock try to change the freq in 2 different
directions.

> 
> In this big picture PM_QoS works quite well, but perhaps we 
> have spotted two
> limitations to be better investigated:
> 1. it is a "best-effort" framework
> 2. it does not handle properly "additive constraints"
> 
> * Best-effort: pm_qos cannot grant that a required QoS level 
> can be effectively
> provided, and even worst pm_qos don't provide mechanisms to 
> notify a constraint
> requester that the required qos level cannot be satisfied by 
> other components.
> This can turn into a "misconfiguration" of a device: its 
> local policy configure
> itself expecting the required level even when actually it's 
> impossible to get
> that.
> No one notifies the device about the effective QoS levels that can be
> reached; all that other devices can do it is only the best 
> they can in order to
> grant the required level.
> Can that leads to an only sub-optimal power optimization for 
> the device
> requesting the constraint? We have this suspect...

Once a requested level is achieved the requester should be
notified for possible reconfiguration. It could be via an
optional registration.

> 
> * additive constraints: pm_qos perhaps does not properly 
> aggregate constraints
> requests on shared resources.
> If multiple applications assert a bandwidth (or throughput) 
> request: pm_qos
> aggregate them considering the MAX (or MIN) of the collected requests:
> this seems to be not properly correct.
> We think that in this case (e.g. bandwidth) the driver of the 
> service provider
> should be aware of the "accumulated" requests to properly 
> configure itself and
> satisfy all concurrent requirements.
> We define "additive constraints" those that refer to shared resources
> (e.g. bandwidth). They differ from "restrictive constraints" 
> (e.g. latency)
> since the aggregation function cannot be a simple 
> "bound-value" (i.e. max
> or min) but should be an "sum-value" in order to get the real 
> system-wide QoS
> requirement.
> These considerations made us thinking about a rework on 
> pm_qos in order to build
> a more comprehensive framework able to:
> - properly handle both restrictive and additive constraints
> - provide an agreement mechanism that allows to notify 
> constraint requester
> about real system run-time capabilities.

So, again we are looking at means to add more constraints
and also the "kind" of constraint.

> 
> These are basically the ideas behind our rework. As soon as 
> it will be ready we
> will deliver the code for review. In the meantime we are also 
> producing a paper
> describing in details theoretical foundation of the model we 
> advance and its
> implementation.
> 
> > > I think that a good trade-off between "LKML control on 
> new parameters" and
> > > "platform extensibility" is hard to identify if we don't 
> refine the concept of
> > > QoS parameters first.
> > > The QoS params defined by pm_qos should avoid to be 
> not-sufficiently general,
> > > to be really useful to applications, but also avoid to be 
> too much abstract to
> > > support platform-specific capabilities.

We could start with a smaller set, e.g.:
- Interrupt latency (in lieu of DMA latency)  
- Sleep latency (to control sleep in absence of cpuidle)
- Cpu frequency
- Cpu voltage

Of course, when we extend this to SoCs with multiple
CPUs then we do have (possible) need of multiplicity
if one of these has to act as "master".

e.g. OMAP3 allows ARM and IVA voltages and frequency
to be changed independently.

> > > Since anyway the core pm_qos implementation is 
> sufficiently general to handle
> > > both abstract and platform-specific params, maybe we 
> should better distinguish
> > > among "abstract qos parameters" (AQP) and 
> "platform-specific qos parameters"
> > > PQP).
> > >
> > > AQP should be intended to be used by applications to 
> assert abstract
> > > requirements on system behaviors, while PQP can be added 
> by platform code in
> > > order to enable the "constrained power management paradigm" for
> > > architecture/board specific devices.
> >
> > Maybe.

On the first read, I believe the abstract parameter+level combination
can help us achieve application portability across architectures
and allow a specific mapping most suitable for each arch.

> In our model platform code can define both platform-specific 
> and architecture
> independent constraints. The formers, since are architecture 
> specific, can be
> exposed read-only to user-space (e.g. just to be used for 
> debugging purpose).
> The latter instead can be exposed read-write and applications 
> can assert
> requirements on them. These constraints should be 
> sufficiently abstract to be
> platform independent and generally usable to express 
> high-level requirements for
> an application (e.g. the network bandwidth a VoIP application 
> requires).
> 
> A device drivers is in charge to "maps" abstract parameters 
> on its own specific
> params and eventually on other platform-specific parameters, 
> whenever they are
> platform driver (e.g. a network driver can map a bandwidth 
> request on a
> platform-specific constraint on dma-latency or amba-bus throughput).
> 

I am not sure if I understood this completely, but I believe
that abstract -> specific mapping should be done at system level.
Letting drivers define them, may not be portable; and might lead
to more confusion.

> > > In this hypothesis the better solution would be to use a 
> dynamic data structure
> > > that will be initialized by the core itself to contain 
> just the set of AQP that
> > > has been reviewed and approved by LKML.
> > > Platform code will then have the chance to add its own 
> specific parameters too.
> > >
> > > Moreover we could imagine that AQP will be exported to 
> user-land, in order to
> > > be asserted by application software, while PQP may be 
> hidden within the core
> > > and accessible only by platform drivers.
> > >
> >
> > I don't know if we can keep any PQP interfaces kernel only.  Policy
> > managers really like to run in user mode, even if its just 
> to set the
> > constraints.
> 
> This is not what is actually going on with both cpufreq and cpuidle.
> Really few clients of pm_qos exist now, but among them the 
> kernel-space running
> policy seems to be the more widely adopted solution... may be 
> because of
> efficiency? User-space can still be in charge of choosing the 
> policy/governor,
> but then it's up to this piece of code to manage constraints requests.
> In the case of it needs having a user-space policy, a simple 
> kernel-space
> wrapper defining a "forwarding policy" will be sufficient to 
> expose the required
> PQP to user-space.
> 
> 
> > > I agree: user-land accessible params should be 
> platform-independent and define
> > > a portable API for applications.
> > > This requires also to have sufficiently abstract 
> parameters: e.g. network
> > > bandwidth can be easily asserted by an application while 
> cpu-dma-latency is
> > > perhaps too difficult to identify at application level.
> >
> > DMA latency is a somewhat sucky name for constraining CPU Idle /
> > C-states, but I can't think of a better name.
> 
> I understand, but: is it so common to have user-space code 
> that needs to assert
> such "real-time" requirements? It seems to us that user-land 
> should be given
> access only to sufficiently abstract constraints that roughly 
> define system
> requirements. While more architecture-specific constraints 
> should come from
> drivers. This should improve solution portability, isn't it?

Or, the drivers should 'map' their constraints onto arch-specific
constraints, making them portable. E.g. we wouldn't like an ethernet
driver tied to one arch/platform... even with most of this mapping
in platform-specific code.

> 
> 
> > As I type this reply I'm thinking an ok way could be to 
> re-factor PMQoS
> > into a constraint framework that exposes platform specific 
> constraint
> > ABI's (in some TBD sane manner---somehow), and set PMQoS on 
> top of this
> > keeping same ABI and KABI's stable.

Agree.

> 
> Are you thinking to something like an abstract API that each 
> platform code
> should implements in its own way?
> 
> > I could use some input on the way folks anticipate a constraint
> > infrastructure to be used.  How hot could the code paths 
> be?  How complex
> > could the dependencies and inter dependencies become?
> 
> These are interesting questions! We should deepen those 
> aspects, anyway we think
> that dependencies, e.g. among constraints, can also be an 
> interesting concept in
> order to build modular solutions.
> In instance the mapping that a driver needs to provide between an
> (platform-independent) abstract constraint and a 
> platform-specific one should
> allow to write more portable drivers that:
> 1. tune on platform-independent requirements
> 2. translate abstract requirements into platform-specific ones
> 
> > Am I thinking about taking a walk on a slippery slope?

...if we get rope closer to ground, gravity doesn't harm much!

> >
> > > > 2) is it the right thing to keep pm-qos-params a 
> compile time array and
> > > > control the growth of the ABI via these mailing lists 
> or make it a list
> > > > and enable driver creation of new parameters as they wish.
> > >
> > > In my opinion a mixed approach, using a dynamic data 
> structure, could be more
> > > interesting to target both requirements.
> > >
> > > > Both are good things for us to discuss on the list.
> > >
> > > We are tuned on this thread and happy to contribute to 
> the discussion.

Generic params that impact the apps could/should be an array;
while arch/plat specific ones could be a linked list.

Best regards,
Sanjeev

> >
> > very cool.
> 
> It will be interesting even just to have some shared ideas on 
> the table before
> the upcoming LPC.
> 
> Best regards,
> Patrick
> 
> 
> --
> #include <best/regards.h>
> 
> DERKLING
> LRU 338214 (http://counter.li.org)
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2009-04-30 12:28 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-04-02 20:25 Adding PM QoS parameters Premi, Sanjeev
2009-04-06 21:12 ` mark gross
2009-04-07  9:00   ` Premi, Sanjeev
2009-04-09 18:57     ` mark gross
2009-04-14 12:24       ` Patrick Bellasi
2009-04-15 18:35         ` mark gross
2009-04-21  8:08           ` Derkling
2009-04-21 23:43             ` mark gross
2009-04-27 12:50               ` Matteo Carnevali
2009-04-27 20:46                 ` mark gross
     [not found] <mailman.459.1240339694.10269.linux-pm@lists.linux-foundation.org>
2009-04-21 20:02 ` Premi, Sanjeev
2009-04-22 16:35   ` mark gross
2009-04-27 12:41   ` Matteo Carnevali
2009-04-30 12:28 Patrick Bellasi
