From: "Rafael J. Wysocki" <rjw@rjwysocki.net>
To: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>,
	Ulf Hansson <ulf.hansson@linaro.org>,
	Sudeep Holla <sudeep.holla@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Linux PM <linux-pm@vger.kernel.org>,
	Kevin Hilman <khilman@kernel.org>,
	Lina Iyer <ilina@codeaurora.org>,
	Lina Iyer <lina.iyer@linaro.org>,
	Rob Herring <robh+dt@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Stephen Boyd <sboyd@kernel.org>, Juri Lelli <juri.lelli@arm.com>,
	Geert Uytterhoeven <geert+renesas@glider.be>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-arm-msm <linux-arm-msm@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Ingo Molnar <mingo@kernel.org>
Subject: Re: [PATCH v8 07/26] PM / Domains: Add genpd governor for CPUs
Date: Fri, 14 Sep 2018 11:50:15 +0200	[thread overview]
Message-ID: <5398488.CyAMIAYSYI@aspire.rjw.lan> (raw)
In-Reply-To: <20180809153925.GA20329@red-moon>

On Thursday, August 9, 2018 5:39:25 PM CEST Lorenzo Pieralisi wrote:
> On Mon, Aug 06, 2018 at 11:20:59AM +0200, Rafael J. Wysocki wrote:
> 
> [...]
> 
> > >>> > @@ -245,6 +248,56 @@ static bool always_on_power_down_ok(struct dev_pm_domain *domain)
> > >>> >     return false;
> > >>> >  }
> > >>> >
> > >>> > +static bool cpu_power_down_ok(struct dev_pm_domain *pd)
> > >>> > +{
> > >>> > +   struct generic_pm_domain *genpd = pd_to_genpd(pd);
> > >>> > +   ktime_t domain_wakeup, cpu_wakeup;
> > >>> > +   s64 idle_duration_ns;
> > >>> > +   int cpu, i;
> > >>> > +
> > >>> > +   if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN))
> > >>> > +           return true;
> > >>> > +
> > >>> > +   /*
> > >>> > +    * Find the next wakeup for any of the online CPUs within the PM domain
> > >>> > +    * and its subdomains. Note, we only need the genpd->cpus, as it already
> > >>> > +    * contains a mask of all CPUs from subdomains.
> > >>> > +    */
> > >>> > +   domain_wakeup = ktime_set(KTIME_SEC_MAX, 0);
> > >>> > +   for_each_cpu_and(cpu, genpd->cpus, cpu_online_mask) {
> > >>> > +           cpu_wakeup = tick_nohz_get_next_wakeup(cpu);
> > >>> > +           if (ktime_before(cpu_wakeup, domain_wakeup))
> > >>> > +                   domain_wakeup = cpu_wakeup;
> > >>> > +   }
> > >>
> > >> Here's a concern I have missed before. :-/
> > >>
> > >> Say, one of the CPUs you're walking here is woken up in the meantime.
> > >
> > > Yes, that can happen - when we mis-predicted the "next wakeup".
> > >
> > >>
> > >> I don't think it is valid to evaluate tick_nohz_get_next_wakeup() for it then
> > >> to update domain_wakeup.  We really should just avoid the domain power off in
> > >> that case at all IMO.
> > >
> > > Correct.
> > >
> > > However, we also want to avoid locking contention in the idle path,
> > > which is what this boils down to.
> > 
> > This already is done under genpd_lock() AFAICS, so I'm not quite sure
> > what exactly you mean.
> > 
> > Besides, this is not just about increased latency, which is a concern
> > by itself but maybe not so much in all environments, but also about
> > the possibility of missing a CPU wakeup, which is a major issue.
> > 
> > If one of the CPUs sharing the domain with the current one is woken up
> > during cpu_power_down_ok() and the wakeup is an edge-triggered
> > interrupt and the domain is turned off regardless, the wakeup may be
> > missed entirely if I'm not mistaken.
> > 
> > It looks like there needs to be a way for the hardware to prevent a
> > domain poweroff when there's a pending interrupt or I don't quite see
> > how this can be handled correctly.
> > 
> > >> Sure enough, if the domain power off is already started and one of the CPUs
> > >> in the domain is woken up then, too bad, it will suffer the latency (but in
> > >> that case the hardware should be able to help somewhat), but otherwise CPU
> > >> wakeup should prevent domain power off from being carried out.
> > >
> > > The CPU is not prevented from waking up, as we rely on the FW to deal with that.
> > >
> > > Even if the above computation turns out to wrongly suggest that the
> > > cluster can be powered off, the FW, together with the genpd
> > > backend driver, shall prevent it.
> > 
> > Fine, but then the solution depends on specific FW/HW behavior, so I'm
> > not sure how generic it really is.  At least, that expectation should
> > be clearly documented somewhere, preferably in code comments.
> > 
> > > To cover this case for PSCI, we also use a per cpu variable for the
> > > CPU's power off state, as can be seen later in the series.
> > 
> > Oh great, but the generic part should be independent of the underlying
> > implementation of the driver.  If it isn't, then it also is not
> > generic.
> > 
> > > Hope this clarifies your concern; if not, tell me and I will elaborate a bit more.
> > 
> > Not really.
> > 
> > There also is one more problem and that is the interaction between
> > this code and the idle governor.
> > 
> > Namely, the idle governor may select a shallower state for some
> > reason, for example due to an additional latency limit derived from
> > CPU utilization (like in the menu governor).  How does the code in
> > cpu_power_down_ok() know what state has been selected, and how does
> > it honor the selection made by the idle governor?
> 
> That's a good question and it may give a path towards a solution.
> 
> AFAICS the genPD governor only selects the idle state parameter that
> determines the idle state at, say, GenPD cpumask level; it does not touch
> the CPUidle decision, which works on a subset of idle states (at CPU
> level).

I've deferred responding to this as I wasn't quite sure if I followed you
at that time, but I'm afraid I'm still not following you now. :-)

The idle governor has to take the total worst-case wakeup latency into
account: not just from the logical CPU itself, but also from whatever
state the SoC may end up in as a result of this particular logical CPU
going idle, one way or another.

So, for example, if your logical CPU has an idle state A that may trigger an
idle state X at the cluster level (if the other logical CPUs happen to be in
the right states and so on), then the worst-case exit latency of state A
is that of state X.
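
To put a number on it (just a toy sketch with made-up names and plain C
types, not code from this series): the latency that has to be compared
against any QoS limit is the deeper of the two states.

#include <stdbool.h>
#include <stdint.h>

/*
 * Toy illustration only: if entering a CPU state may pull the whole
 * cluster down, the worst case is bounded by the cluster state's exit
 * latency, not by the CPU state's own.
 */
static int64_t worst_case_exit_latency_ns(int64_t cpu_state_exit_ns,
					  int64_t cluster_state_exit_ns,
					  bool cluster_may_power_off)
{
	if (cluster_may_power_off &&
	    cluster_state_exit_ns > cpu_state_exit_ns)
		return cluster_state_exit_ns;

	return cpu_state_exit_ns;
}

So if state A exits in 100 us but may trigger a cluster state X that
exits in 1500 us, a 1000 us latency constraint has to rule out A, even
though A alone would fit.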

> That's my understanding, which can be wrong so please correct me
> if that's the case because that's a bit confusing.
> 
> Let's imagine that we flatten out the list of idle states and feed
> CPUidle with it (all of them - cpu, cluster, package, system - as it is
> in the mainline _now_). Then the GenPD governor can run through the
> CPUidle selection and _demote_ the idle state if necessary, since it
> understands that some CPUs in the GenPD will wake up shortly and break
> the target residency hypothesis the CPUidle governor is expecting.
> 
> The whole idea behind this series is improving the CPUidle decision when
> the target idle state is _shared_ among groups of CPUs (again, please
> do correct me if I am wrong).
> 
> It is obvious that a GenPD governor must only demote - never promote - a
> CPU idle state selection, given that the hierarchy implies more power
> savings and higher required target residencies.

So I see a problem here: the way patch 9 in this series is done, the
genpd governor for CPUs has no idea what states have been selected by
the idle governor, so how does it know how deep it can go with turning
off domains?

My point is that the selection made by the idle governor need not be
based only on timers, which are the only thing that the genpd governor
seems to be looking at.  The genpd governor should rather look at what
idle states have been selected for each CPU in the domain by the idle
governor and work within the boundaries of those.
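
In other words (again, just a sketch with hypothetical names, not the
code in this series): each CPU could record the deepest domain state
index its selected CPU state is compatible with, and the genpd governor
would never go below the shallowest of those; the timer-based check
would then only refine the decision within that limit.

#include <limits.h>

/*
 * Hypothetical sketch: per_cpu_limit[i] is the deepest domain state
 * index permitted by the idle state selected for CPU i (0 meaning
 * the domain must stay on).  The domain may only enter the
 * shallowest of those limits.
 */
static int domain_state_limit(const int *per_cpu_limit, int ncpus)
{
	int i, limit = INT_MAX;

	for (i = 0; i < ncpus; i++)
		if (per_cpu_limit[i] < limit)
			limit = per_cpu_limit[i];

	return limit;
}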

Thanks,
Rafael
