From: "Rafael J. Wysocki"
To: Ulf Hansson
Cc: Sudeep Holla, Lorenzo Pieralisi, Mark Rutland, linux-pm@vger.kernel.org,
    Kevin Hilman, Lina Iyer, Lina Iyer, Rob Herring, Daniel Lezcano,
    Thomas Gleixner, Vincent Guittot, Stephen Boyd, Juri Lelli,
    Geert Uytterhoeven, linux-arm-kernel@lists.infradead.org,
    linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 09/26] kernel/cpu_pm: Manage runtime PM in the idle path for CPUs
Date: Wed, 18 Jul 2018 12:11:06 +0200
Message-ID: <2056372.NMt4aPaF4h@aspire.rjw.lan>
In-Reply-To: <20180620172226.15012-10-ulf.hansson@linaro.org>
References: <20180620172226.15012-1-ulf.hansson@linaro.org>
 <20180620172226.15012-10-ulf.hansson@linaro.org>

On Wednesday, June 20, 2018 7:22:09 PM CEST Ulf Hansson wrote:
> To allow CPUs being power managed by PM domains, let's deploy support for
> runtime PM for the CPU's corresponding struct device.
> 
> More precisely, at the point when the CPU is about to enter an idle state,
> decrease the runtime PM usage count for its corresponding struct device,
> via calling pm_runtime_put_sync_suspend(). Then, at the point when the CPU
> resumes from idle, let's increase the runtime PM usage count, via calling
> pm_runtime_get_sync().
> 
> Cc: Lina Iyer
> Co-developed-by: Lina Iyer
> Signed-off-by: Ulf Hansson

I finally got to this one, sorry for the huge delay.

Let me confirm that I understand the code flow correctly.
> ---
>  kernel/cpu_pm.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
> index 67b02e138a47..492d4a83dca0 100644
> --- a/kernel/cpu_pm.c
> +++ b/kernel/cpu_pm.c
> @@ -16,9 +16,11 @@
>   */
>  
>  #include <linux/kernel.h>
> +#include <linux/cpu.h>
>  #include <linux/cpu_pm.h>
>  #include <linux/module.h>
>  #include <linux/notifier.h>
> +#include <linux/pm_runtime.h>
>  #include <linux/spinlock.h>
>  #include <linux/syscore_ops.h>
>  
> @@ -91,6 +93,7 @@ int cpu_pm_enter(void)

This is called from a cpuidle driver's ->enter callback for the target
state selected by the idle governor ->

>  {
>  	int nr_calls;
>  	int ret = 0;
> +	struct device *dev = get_cpu_device(smp_processor_id());
>  
>  	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
>  	if (ret)
> @@ -100,6 +103,9 @@ int cpu_pm_enter(void)
>  		 */
>  		cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
>  
> +	if (!ret && dev && dev->pm_domain)
> +		pm_runtime_put_sync_suspend(dev);

-> so this is going to invoke genpd_runtime_suspend() if the usage
counter of dev is 0.

That will cause cpu_power_down_ok() to be called (because this is a CPU
domain), and that will walk the domain cpumask and compute the estimated
idle duration as the minimum of tick_nohz_get_next_wakeup() values over
the CPUs in that cpumask.

[Note that the weight of the cpumask must be seriously limited for that
to actually work, as this happens in the idle path.]

Next, it will return "true" if it can find a domain state with residency
within the estimated idle duration.

[Note that this sort of overlaps with the idle governor's job.]

Next, __genpd_runtime_suspend() will be invoked to run the
device-specific callback, if any,

[Note that this has to be suitable for the idle path if present.]

then genpd_stop_dev() runs (which, again, may invoke a callback) and
genpd_power_off() runs under the domain lock (which must be a spinlock
then).

> +
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(cpu_pm_enter);
> @@ -118,6 +124,11 @@ EXPORT_SYMBOL_GPL(cpu_pm_enter);
>   */
>  int cpu_pm_exit(void)
>  {
> +	struct device *dev = get_cpu_device(smp_processor_id());
> +
> +	if (dev && dev->pm_domain)
> +		pm_runtime_get_sync(dev);
> +
>  	return cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
>  }
>  EXPORT_SYMBOL_GPL(cpu_pm_exit);
> 

And this is called on wakeup, when the cpuidle driver's ->enter callback
is about to return, and it reverses the suspend flow (except that the
governor doesn't need to be called now).

Have I got that right?
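
For completeness, this is how I picture the call site - just a sketch to
fix the context, not code from this series (enter_deep_idle_state() is a
made-up placeholder for the platform's low-level idle entry):

#include <linux/cpuidle.h>
#include <linux/cpu_pm.h>

/* Made-up stand-in for the platform's low-level idle entry. */
static int enter_deep_idle_state(int index)
{
	return 0;
}

/*
 * Sketch of a cpuidle ->enter callback, showing where cpu_pm_enter()
 * and cpu_pm_exit() sit in the idle path for the state picked by the
 * idle governor.
 */
static int example_enter_idle(struct cpuidle_device *dev,
			      struct cpuidle_driver *drv, int index)
{
	int ret;

	ret = cpu_pm_enter();	/* may drop the runtime PM usage count */
	if (ret)
		return -1;

	ret = enter_deep_idle_state(index);

	cpu_pm_exit();		/* takes the usage count back on wakeup */

	return ret ? -1 : index;
}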
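
And the governor side of that flow, as I read it, would look more or
less like the sketch below.  This is only my mental model, not the code
from the series: the genpd 'cpus' mask, tick_nohz_get_next_wakeup()
returning a ktime_t and the state selection details are assumptions
based on the description above.

#include <linux/cpumask.h>
#include <linux/ktime.h>
#include <linux/pm_domain.h>
#include <linux/tick.h>

/*
 * Sketch of a CPU-domain ->power_down_ok() callback: estimate the
 * domain idle duration as the earliest next wakeup over the CPUs in
 * the domain cpumask and pick the deepest domain state whose residency
 * fits into that estimate.
 */
static bool cpu_power_down_ok_sketch(struct dev_pm_domain *pd)
{
	struct generic_pm_domain *genpd = pd_to_genpd(pd);
	ktime_t domain_wakeup = KTIME_MAX;
	ktime_t next_wakeup;
	s64 idle_duration_ns;
	int cpu, i;

	/*
	 * This walk runs in the idle path, which is why the weight of
	 * the cpumask has to stay small.
	 */
	for_each_cpu(cpu, genpd->cpus) {	/* 'cpus' mask assumed from the series */
		next_wakeup = tick_nohz_get_next_wakeup(cpu);
		if (ktime_before(next_wakeup, domain_wakeup))
			domain_wakeup = next_wakeup;
	}

	idle_duration_ns = ktime_to_ns(ktime_sub(domain_wakeup, ktime_get()));
	if (idle_duration_ns <= 0)
		return false;

	/*
	 * Deepest state first; this is the part that overlaps with what
	 * the idle governor has already done for the CPU states.
	 */
	for (i = genpd->state_count - 1; i >= 0; i--) {
		if (idle_duration_ns >= genpd->states[i].residency_ns) {
			genpd->state_idx = i;
			return true;
		}
	}

	return false;
}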