From: Ulf Hansson <ulf.hansson@linaro.org>
To: Caesar Wang <sasukewxt@163.com>
Cc: "Caesar Wang" <wxt@rock-chips.com>,
	"Mark Rutland" <mark.rutland@arm.com>,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	"Ian Campbell" <ijc+devicetree@hellion.org.uk>,
	"Kevin Hilman" <khilman@linaro.org>,
	"Russell King - ARM Linux" <linux@arm.linux.org.uk>,
	"Heiko Stübner" <heiko@sntech.de>,
	"Paweł Moll" <pawel.moll@arm.com>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"jinkun.hong" <jinkun.hong@rock-chips.com>,
	"Linus Walleij" <linus.walleij@linaro.org>,
	"Randy Dunlap" <rdunlap@infradead.org>,
	"Tomasz Figa" <tomasz.figa@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"open list:ARM/Rockchip SoC..."
	<linux-rockchip@lists.infradead.org>,
	"Rob Herring" <robh+dt@kernel.org>,
	"Mark Brown" <broonie@kernel.org>,
	"Doug Anderson" <dianders@chromium.org>,
	"Kumar Gala" <galak@codeaurora.org>,
	"Grant Likely" <grant.likely@linaro.org>,
	"Dmitry Torokhov" <dmitry.torokhov@gmail.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v14 2/3] power-domain: rockchip: add power domain driver
Date: Thu, 25 Jun 2015 17:33:56 +0200	[thread overview]
Message-ID: <CAPDyKFqa9cQin2HUFuLO_-sSkeASSr08b-3DZ5xGbYHQBPxKag@mail.gmail.com> (raw)
In-Reply-To: <557CF1D6.4090506@163.com>

[...]

>>> +#include <linux/clk-provider.h>
>>
>> clk-provider.h, why?
>
>
> The following is needed.
>
> __clk_get_name(clk)

I see, you need it for the dev_dbg().

I think you should use "%pC" as the format specifier for the dev_dbg()
message, since that will take care of printing the clock name for you.
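To make the suggestion concrete, the change would look something like this (a sketch against the dev_dbg() call quoted from the patch; "%pC" is the printk pointer extension that prints a struct clk's name):

```c
/* Before: needs <linux/clk-provider.h> for __clk_get_name() */
dev_dbg(dev, "adding clock '%s' to list of PM clocks\n",
	__clk_get_name(clk));

/* After: "%pC" prints the clock name directly, so the
 * clk-provider.h include can be dropped.
 */
dev_dbg(dev, "adding clock '%pC' to list of PM clocks\n", clk);
```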

[...]

>>> +static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool
>>> power_on)
>>> +{
>>> +       int i;
>>> +
>>> +       mutex_lock(&pd->pmu->mutex);
>>
>> I don't quite understand why you need a lock, what are you protecting and
>> why?
>
>
> The log below shows:
> [ 3551.678762] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3551.899180] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3551.905958] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3551.912832] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3551.919730] rockchip_pd_power:139: mutex_lock
> [ 3551.924130] rockchip_pd_power:167,mutex_unlock
> [ 3563.827295] rockchip_pd_power:139: mutex_lock
> [ 3563.831753] rockchip_pd_power:167,mutex_unlock
> [ 3563.836216] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3564.058989] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3564.065659] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3564.072354] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3564.079047] rockchip_pd_power:139: mutex_lock
> [ 3564.083426] rockchip_pd_power:167,mutex_unlock
> [ 3611.665726] rockchip_pd_power:139: mutex_lock
> [ 3611.670206] rockchip_pd_power:167,mutex_unlock
> [ 3611.674692] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3611.899160] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3611.905938] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3611.912818] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3611.919689] rockchip_pd_power:139: mutex_lock
> [ 3611.924090] rockchip_pd_power:167,mutex_unlock
> [ 3623.848296] rockchip_pd_power:139: mutex_lock
> [ 3623.852752] rockchip_pd_power:167,mutex_unlock
>
>
>
>
>>> +
>>> +       if (rockchip_pmu_domain_is_on(pd) != power_on) {
>>> +               for (i = 0; i < pd->num_clks; i++)
>>> +                       clk_enable(pd->clks[i]);
>>> +
>>> +               if (!power_on) {
>>> +                       /* FIXME: add code to save AXI_QOS */
>>> +
>>> +                       /* if powering down, idle request to NIU first */
>>> +                       rockchip_pmu_set_idle_request(pd, true);
>>> +               }
>>> +
>>> +               rockchip_do_pmu_set_power_domain(pd, power_on);
>>> +
>>> +               if (power_on) {
>>> +                       /* if powering up, leave idle mode */
>>> +                       rockchip_pmu_set_idle_request(pd, false);
>>> +
>>> +                       /* FIXME: add code to restore AXI_QOS */
>>> +               }
>>> +
>>> +               for (i = pd->num_clks - 1; i >= 0; i--)
>>> +                       clk_disable(pd->clks[i]);
>>> +       }
>>> +
>>> +       mutex_unlock(&pd->pmu->mutex);
>>> +       return 0;
>>> +}
>>> +
>>> +static int rockchip_pd_power_on(struct generic_pm_domain *domain)
>>> +{
>>> +       struct rockchip_pm_domain *pd = to_rockchip_pd(domain);
>>> +
>>> +       return rockchip_pd_power(pd, true);
>>> +}
>>> +
>>> +static int rockchip_pd_power_off(struct generic_pm_domain *domain)
>>> +{
>>> +       struct rockchip_pm_domain *pd = to_rockchip_pd(domain);
>>> +
>>> +       return rockchip_pd_power(pd, false);
>>> +}
>>> +
>>> +static int rockchip_pd_attach_dev(struct generic_pm_domain *genpd,
>>> +                                 struct device *dev)
>>> +{
>>> +       struct clk *clk;
>>> +       int i;
>>> +       int error;
>>> +
>>> +       dev_dbg(dev, "attaching to power domain '%s'\n", genpd->name);
>>> +
>>> +       error = pm_clk_create(dev);
>>> +       if (error) {
>>> +               dev_err(dev, "pm_clk_create failed %d\n", error);
>>> +               return error;
>>> +       }
>>> +
>>> +       i = 0;
>>> +       while ((clk = of_clk_get(dev->of_node, i++)) && !IS_ERR(clk)) {
>>
>> This loop adds all available device clocks to the PM clock list. I
>> wonder if this could be considered as a common thing and if so, we
>> might want to extend the pm_clk API with this.
>
>
> There are several reasons as follows:
>
>     Firstly, the clocks need to be turned off to save power when the
>     system enters the suspend state, so we need to enumerate the clocks
>     in the dts so that the power domain can be turned on and off.
>
>     Secondly, on rk3288 the reset circuit requires a synchronous reset,
>     after which the sync is revoked, so we need to enable the clocks of
>     all devices.

I think you misinterpreted my previous answer. Instead of fetching all
clocks for a device and manually creating the pm_clk list as you do
here, I was suggesting making this part of the pm_clk API.

I would happily support such an API, but I can't tell what the other
maintainers think.
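To illustrate what such an extension might look like, here is a rough sketch that wraps the patch's open-coded loop (purely illustrative — the name pm_clk_add_all_of_clks() and its placement in the pm_clk core are assumptions, not an existing API):

```c
/* Hypothetical pm_clk helper: add every clock listed in the device's
 * "clocks" DT property to its PM clock list, mirroring the open-coded
 * loop in rockchip_pd_attach_dev().
 */
static int pm_clk_add_all_of_clks(struct device *dev)
{
	struct clk *clk;
	int i = 0;
	int error;

	while ((clk = of_clk_get(dev->of_node, i++)) && !IS_ERR(clk)) {
		error = pm_clk_add_clk(dev, clk);
		/* pm_clk_add_clk() holds its own reference on success */
		clk_put(clk);
		if (error)
			return error;
	}
	return 0;
}
```

A driver's ->attach_dev() callback could then reduce to pm_clk_create() followed by a single call to this helper.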

>
>>> +               dev_dbg(dev, "adding clock '%s' to list of PM clocks\n",
>>> +                       __clk_get_name(clk));
>>> +               error = pm_clk_add_clk(dev, clk);
>>> +               clk_put(clk);
>>> +               if (error) {
>>> +                       dev_err(dev, "pm_clk_add_clk failed %d\n",
>>> error);
>>> +                       pm_clk_destroy(dev);
>>> +                       return error;
>>> +               }
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +

[...]

>>> +
>>> +static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
>>> +                                     struct device_node *node)
>>> +{
>>> +       const struct rockchip_domain_info *pd_info;
>>> +       struct rockchip_pm_domain *pd;
>>> +       struct clk *clk;
>>> +       int clk_cnt;
>>> +       int i;
>>> +       u32 id;
>>> +       int error;
>>> +
>>> +       error = of_property_read_u32(node, "reg", &id);
>>> +       if (error) {
>>> +               dev_err(pmu->dev,
>>> +                       "%s: failed to retrieve domain id (reg): %d\n",
>>> +                       node->name, error);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       if (id >= pmu->info->num_domains) {
>>> +               dev_err(pmu->dev, "%s: invalid domain id %d\n",
>>> +                       node->name, id);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       pd_info = &pmu->info->domain_info[id];
>>> +       if (!pd_info) {
>>> +               dev_err(pmu->dev, "%s: undefined domain id %d\n",
>>> +                       node->name, id);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       clk_cnt = of_count_phandle_with_args(node, "clocks",
>>> "#clock-cells");
>>> +       pd = devm_kzalloc(pmu->dev,
>>> +                         sizeof(*pd) + clk_cnt * sizeof(pd->clks[0]),
>>> +                         GFP_KERNEL);
>>> +       if (!pd)
>>> +               return -ENOMEM;
>>> +
>>> +       pd->info = pd_info;
>>> +       pd->pmu = pmu;
>>> +
>>> +       for (i = 0; i < clk_cnt; i++) {
>>
>> This loop is similar to the one that creates the PM clock list in
>> rockchip_pd_attach_dev().
>>
>> What's the reason you don't want to use pm clk for these clocks?
>>
>
> Ditto.

I don't follow. Can't you do the exact same thing via the pm_clk API?
Can you please elaborate?

>
>>> +               clk = of_clk_get(node, i);
>>> +               if (IS_ERR(clk)) {
>>> +                       error = PTR_ERR(clk);
>>> +                       dev_err(pmu->dev,
>>> +                               "%s: failed to get clk %s (index %d):
>>> %d\n",
>>> +                               node->name, __clk_get_name(clk), i,
>>> error);
>>> +                       goto err_out;
>>> +               }
>>> +
>>> +               error = clk_prepare(clk);
>>> +               if (error) {
>>> +                       dev_err(pmu->dev,
>>> +                               "%s: failed to prepare clk %s (index %d):
>>> %d\n",
>>> +                               node->name, __clk_get_name(clk), i,
>>> error);
>>> +                       clk_put(clk);
>>> +                       goto err_out;
>>> +               }
>>> +
>>> +               pd->clks[pd->num_clks++] = clk;
>>> +
>>> +               dev_dbg(pmu->dev, "added clock '%s' to domain '%s'\n",
>>> +                       __clk_get_name(clk), node->name);
>>> +       }
>>> +
>>> +       error = rockchip_pd_power(pd, true);
>>> +       if (error) {
>>> +               dev_err(pmu->dev,
>>> +                       "failed to power on domain '%s': %d\n",
>>> +                       node->name, error);
>>> +               goto err_out;
>>> +       }
>>> +
>>> +       pd->genpd.name = node->name;
>>> +       pd->genpd.power_off = rockchip_pd_power_off;
>>> +       pd->genpd.power_on = rockchip_pd_power_on;
>>> +       pd->genpd.attach_dev = rockchip_pd_attach_dev;
>>> +       pd->genpd.detach_dev = rockchip_pd_detach_dev;
>>> +       pd->genpd.dev_ops.start = rockchip_pd_start_dev;
>>> +       pd->genpd.dev_ops.stop = rockchip_pd_stop_dev;
>>
>> Instead of assigning the ->stop|start() callbacks, which do
>> pm_clk_suspend|resume(), just set the genpd->flags to
>> GENPD_FLAG_PM_CLK, and genpd will fix this for you.
>>
> Ditto.

Again, I don't follow. Here is what goes on:

rockchip_pd_start_dev() calls pm_clk_resume() and
rockchip_pd_stop_dev() calls pm_clk_suspend().
Instead of assigning the below callbacks...

pd->genpd.dev_ops.start = rockchip_pd_start_dev;
pd->genpd.dev_ops.stop = rockchip_pd_stop_dev;

... just use GENPD_FLAG_PM_CLK and let genpd deal with the calls to
pm_clk_suspend|resume().
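Sketched against the patch's rockchip_pm_add_one_domain(), the setup would then become (assuming the genpd.flags field of the upstream genpd core):

```c
/* Let the genpd core call pm_clk_suspend()/pm_clk_resume() for
 * attached devices instead of open-coded stop/start callbacks.
 */
pd->genpd.name = node->name;
pd->genpd.power_off = rockchip_pd_power_off;
pd->genpd.power_on = rockchip_pd_power_on;
pd->genpd.attach_dev = rockchip_pd_attach_dev;
pd->genpd.detach_dev = rockchip_pd_detach_dev;
pd->genpd.flags = GENPD_FLAG_PM_CLK;	/* replaces dev_ops.start/stop */

pm_genpd_init(&pd->genpd, NULL, false);
```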

>
>
>>> +       pm_genpd_init(&pd->genpd, NULL, false);
>>> +
>>> +       pmu->genpd_data.domains[id] = &pd->genpd;
>>> +       return 0;
>>> +
>>> +err_out:
>>> +       while (--i >= 0) {
>>> +               clk_unprepare(pd->clks[i]);
>>> +               clk_put(pd->clks[i]);
>>> +       }
>>> +       return error;
>>> +}
>>> +

[...]

Kind regards
Uffe

WARNING: multiple messages have this Message-ID (diff)
From: Ulf Hansson <ulf.hansson@linaro.org>
To: Caesar Wang <sasukewxt@163.com>
Cc: "Caesar Wang" <wxt@rock-chips.com>,
	"Mark Rutland" <mark.rutland@arm.com>,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	"Ian Campbell" <ijc+devicetree@hellion.org.uk>,
	"Kevin Hilman" <khilman@linaro.org>,
	"Russell King - ARM Linux" <linux@arm.linux.org.uk>,
	"Heiko Stübner" <heiko@sntech.de>,
	"Paweł Moll" <pawel.moll@arm.com>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"jinkun.hong" <jinkun.hong@rock-chips.com>,
	"Linus Walleij" <linus.walleij@linaro.org>,
	"Randy Dunlap" <rdunlap@infradead.org>,
	"Tomasz Figa" <tomasz.figa@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"open list:ARM/Rockchip SoC..."
	<linux-rockchip@lists.infradead.org>,
	"Rob Herring" <robh+dt@kernel.org>,
	"Mark Brown" <broonie@kernel.org>,
	"Doug Anderson" <dianders@chromium.org>,
	"Kumar Gala" <galak@codeaurora.org>
Subject: Re: [PATCH v14 2/3] power-domain: rockchip: add power domain driver
Date: Thu, 25 Jun 2015 17:33:56 +0200	[thread overview]
Message-ID: <CAPDyKFqa9cQin2HUFuLO_-sSkeASSr08b-3DZ5xGbYHQBPxKag@mail.gmail.com> (raw)
In-Reply-To: <557CF1D6.4090506@163.com>

[...]

>>> +#include <linux/clk-provider.h>
>>
>> clk-provider.h, why?
>
>
> The following is needed.
>
> _clk_get_name(clk)

I see, you need it for for the dev_dbg().

I think you shall use "%pC" as the formatting string for the dev_dbg()
message, since that will take care of printing the clock name for you.

[...]

>>> +static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool
>>> power_on)
>>> +{
>>> +       int i;
>>> +
>>> +       mutex_lock(&pd->pmu->mutex);
>>
>> I don't quite understand why you need a lock, what are you protecting and
>> why?
>
>
> Says:
> [ 3551.678762] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3551.899180] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3551.905958] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3551.912832] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3551.919730] rockchip_pd_power:139: mutex_lock
> [ 3551.924130] rockchip_pd_power:167,mutex_unlock
> [ 3563.827295] rockchip_pd_power:139: mutex_lock
> [ 3563.831753] rockchip_pd_power:167,mutex_unlock
> [ 3563.836216] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3564.058989] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3564.065659] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3564.072354] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3564.079047] rockchip_pd_power:139: mutex_lock
> [ 3564.083426] rockchip_pd_power:167,mutex_unlock
> [ 3611.665726] rockchip_pd_power:139: mutex_lock
> [ 3611.670206] rockchip_pd_power:167,mutex_unlock
> [ 3611.674692] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3611.899160] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3611.905938] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3611.912818] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3611.919689] rockchip_pd_power:139: mutex_lock
> [ 3611.924090] rockchip_pd_power:167,mutex_unlock
> [ 3623.848296] rockchip_pd_power:139: mutex_lock
> [ 3623.852752] rockchip_pd_power:167,mutex_unlock
>
>
>
>
>>> +
>>> +       if (rockchip_pmu_domain_is_on(pd) != power_on) {
>>> +               for (i = 0; i < pd->num_clks; i++)
>>> +                       clk_enable(pd->clks[i]);
>>> +
>>> +               if (!power_on) {
>>> +                       /* FIXME: add code to save AXI_QOS */
>>> +
>>> +                       /* if powering down, idle request to NIU first */
>>> +                       rockchip_pmu_set_idle_request(pd, true);
>>> +               }
>>> +
>>> +               rockchip_do_pmu_set_power_domain(pd, power_on);
>>> +
>>> +               if (power_on) {
>>> +                       /* if powering up, leave idle mode */
>>> +                       rockchip_pmu_set_idle_request(pd, false);
>>> +
>>> +                       /* FIXME: add code to restore AXI_QOS */
>>> +               }
>>> +
>>> +               for (i = pd->num_clks - 1; i >= 0; i--)
>>> +                       clk_disable(pd->clks[i]);
>>> +       }
>>> +
>>> +       mutex_unlock(&pd->pmu->mutex);
>>> +       return 0;
>>> +}
>>> +
>>> +static int rockchip_pd_power_on(struct generic_pm_domain *domain)
>>> +{
>>> +       struct rockchip_pm_domain *pd = to_rockchip_pd(domain);
>>> +
>>> +       return rockchip_pd_power(pd, true);
>>> +}
>>> +
>>> +static int rockchip_pd_power_off(struct generic_pm_domain *domain)
>>> +{
>>> +       struct rockchip_pm_domain *pd = to_rockchip_pd(domain);
>>> +
>>> +       return rockchip_pd_power(pd, false);
>>> +}
>>> +
>>> +static int rockchip_pd_attach_dev(struct generic_pm_domain *genpd,
>>> +                                 struct device *dev)
>>> +{
>>> +       struct clk *clk;
>>> +       int i;
>>> +       int error;
>>> +
>>> +       dev_dbg(dev, "attaching to power domain '%s'\n", genpd->name);
>>> +
>>> +       error = pm_clk_create(dev);
>>> +       if (error) {
>>> +               dev_err(dev, "pm_clk_create failed %d\n", error);
>>> +               return error;
>>> +       }
>>> +
>>> +       i = 0;
>>> +       while ((clk = of_clk_get(dev->of_node, i++)) && !IS_ERR(clk)) {
>>
>> This loop adds all available device clocks to the PM clock list. I
>> wonder if this could be considered as a common thing and if so, we
>> might want to extend the pm_clk API with this.
>
>
> There are several reasons as follows:
>
>     Firstly, the clocks need be turned off to save power when
>     the system enter the suspend state. So we need to enumerate the clocks
>     in the dts. In order to power domain can turn on and off.
>
>     Secondly, the reset-circuit should reset be synchronous on rk3288, then
> sync revoked.
>     So we need to enable clocks of all devices.

I think you misinterpreted my previous answer. Instead of fetching all
clocks for a device and manually create the pm_clk list as you do
here, I was suggesting to make this a part of the pm clk API.

I would happily support such an API, but I can't tell what the other
maintainers think.

>
>>> +               dev_dbg(dev, "adding clock '%s' to list of PM clocks\n",
>>> +                       __clk_get_name(clk));
>>> +               error = pm_clk_add_clk(dev, clk);
>>> +               clk_put(clk);
>>> +               if (error) {
>>> +                       dev_err(dev, "pm_clk_add_clk failed %d\n",
>>> error);
>>> +                       pm_clk_destroy(dev);
>>> +                       return error;
>>> +               }
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +

[...]

>>> +
>>> +static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
>>> +                                     struct device_node *node)
>>> +{
>>> +       const struct rockchip_domain_info *pd_info;
>>> +       struct rockchip_pm_domain *pd;
>>> +       struct clk *clk;
>>> +       int clk_cnt;
>>> +       int i;
>>> +       u32 id;
>>> +       int error;
>>> +
>>> +       error = of_property_read_u32(node, "reg", &id);
>>> +       if (error) {
>>> +               dev_err(pmu->dev,
>>> +                       "%s: failed to retrieve domain id (reg): %d\n",
>>> +                       node->name, error);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       if (id >= pmu->info->num_domains) {
>>> +               dev_err(pmu->dev, "%s: invalid domain id %d\n",
>>> +                       node->name, id);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       pd_info = &pmu->info->domain_info[id];
>>> +       if (!pd_info) {
>>> +               dev_err(pmu->dev, "%s: undefined domain id %d\n",
>>> +                       node->name, id);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       clk_cnt = of_count_phandle_with_args(node, "clocks",
>>> "#clock-cells");
>>> +       pd = devm_kzalloc(pmu->dev,
>>> +                         sizeof(*pd) + clk_cnt * sizeof(pd->clks[0]),
>>> +                         GFP_KERNEL);
>>> +       if (!pd)
>>> +               return -ENOMEM;
>>> +
>>> +       pd->info = pd_info;
>>> +       pd->pmu = pmu;
>>> +
>>> +       for (i = 0; i < clk_cnt; i++) {
>>
>> This loop is similar to the one when creates the PM clock list in the
>> rockchip_pd_attach_dev().
>>
>> What's the reason you don't want to use pm clk for these clocks?
>>
>
> Ditto.

I don't follow. Can't you do the exact same thing via the pm clk API,
can you please elaborate!?

>
>>> +               clk = of_clk_get(node, i);
>>> +               if (IS_ERR(clk)) {
>>> +                       error = PTR_ERR(clk);
>>> +                       dev_err(pmu->dev,
>>> +                               "%s: failed to get clk %s (index %d):
>>> %d\n",
>>> +                               node->name, __clk_get_name(clk), i,
>>> error);
>>> +                       goto err_out;
>>> +               }
>>> +
>>> +               error = clk_prepare(clk);
>>> +               if (error) {
>>> +                       dev_err(pmu->dev,
>>> +                               "%s: failed to prepare clk %s (index %d):
>>> %d\n",
>>> +                               node->name, __clk_get_name(clk), i,
>>> error);
>>> +                       clk_put(clk);
>>> +                       goto err_out;
>>> +               }
>>> +
>>> +               pd->clks[pd->num_clks++] = clk;
>>> +
>>> +               dev_dbg(pmu->dev, "added clock '%s' to domain '%s'\n",
>>> +                       __clk_get_name(clk), node->name);
>>> +       }
>>> +
>>> +       error = rockchip_pd_power(pd, true);
>>> +       if (error) {
>>> +               dev_err(pmu->dev,
>>> +                       "failed to power on domain '%s': %d\n",
>>> +                       node->name, error);
>>> +               goto err_out;
>>> +       }
>>> +
>>> +       pd->genpd.name = node->name;
>>> +       pd->genpd.power_off = rockchip_pd_power_off;
>>> +       pd->genpd.power_on = rockchip_pd_power_on;
>>> +       pd->genpd.attach_dev = rockchip_pd_attach_dev;
>>> +       pd->genpd.detach_dev = rockchip_pd_detach_dev;
>>> +       pd->genpd.dev_ops.start = rockchip_pd_start_dev;
>>> +       pd->genpd.dev_ops.stop = rockchip_pd_stop_dev;
>>
>> Instead of assigning the ->stop|start() callbacks, which do
>> pm_clk_suspend|resume(), just set the genpd->flags to
>> GENPD_FLAG_PM_CLK, and genpd will fix this for you.
>>
> Ditto.

Again, I don't follow. Here is what goes on:

rockchip_pd_start_dev() calls pm_clk_resume() and
rockchip_pd_stop_dev() calls pm_clk_suspend().
Instead of assigning the below callbacks...

pd->genpd.dev_ops.start = rockchip_pd_start_dev;
pd->genpd.dev_ops.stop = rockchip_pd_stop_dev;

... just use GENPD_FLAG_PM_CLK and let genpd deal with the calls to
pm_clk_suspend|resume().

>
>
>>> +       pm_genpd_init(&pd->genpd, NULL, false);
>>> +
>>> +       pmu->genpd_data.domains[id] = &pd->genpd;
>>> +       return 0;
>>> +
>>> +err_out:
>>> +       while (--i >= 0) {
>>> +               clk_unprepare(pd->clks[i]);
>>> +               clk_put(pd->clks[i]);
>>> +       }
>>> +       return error;
>>> +}
>>> +

[...]

Kind regards
Uffe

WARNING: multiple messages have this Message-ID (diff)
From: ulf.hansson@linaro.org (Ulf Hansson)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v14 2/3] power-domain: rockchip: add power domain driver
Date: Thu, 25 Jun 2015 17:33:56 +0200	[thread overview]
Message-ID: <CAPDyKFqa9cQin2HUFuLO_-sSkeASSr08b-3DZ5xGbYHQBPxKag@mail.gmail.com> (raw)
In-Reply-To: <557CF1D6.4090506@163.com>

[...]

>>> +#include <linux/clk-provider.h>
>>
>> clk-provider.h, why?
>
>
> The following is needed.
>
> _clk_get_name(clk)

I see, you need it for for the dev_dbg().

I think you shall use "%pC" as the formatting string for the dev_dbg()
message, since that will take care of printing the clock name for you.

[...]

>>> +static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool
>>> power_on)
>>> +{
>>> +       int i;
>>> +
>>> +       mutex_lock(&pd->pmu->mutex);
>>
>> I don't quite understand why you need a lock, what are you protecting and
>> why?
>
>
> Says:
> [ 3551.678762] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3551.899180] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3551.905958] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3551.912832] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3551.919730] rockchip_pd_power:139: mutex_lock
> [ 3551.924130] rockchip_pd_power:167,mutex_unlock
> [ 3563.827295] rockchip_pd_power:139: mutex_lock
> [ 3563.831753] rockchip_pd_power:167,mutex_unlock
> [ 3563.836216] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3564.058989] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3564.065659] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3564.072354] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3564.079047] rockchip_pd_power:139: mutex_lock
> [ 3564.083426] rockchip_pd_power:167,mutex_unlock
> [ 3611.665726] rockchip_pd_power:139: mutex_lock
> [ 3611.670206] rockchip_pd_power:167,mutex_unlock
> [ 3611.674692] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3611.899160] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3611.905938] mali ffa30000.gpu: starting device in power domain 'pd_gpu'
> [ 3611.912818] mali ffa30000.gpu: stopping device in power domain 'pd_gpu'
> [ 3611.919689] rockchip_pd_power:139: mutex_lock
> [ 3611.924090] rockchip_pd_power:167,mutex_unlock
> [ 3623.848296] rockchip_pd_power:139: mutex_lock
> [ 3623.852752] rockchip_pd_power:167,mutex_unlock
>
>
>
>
>>> +
>>> +       if (rockchip_pmu_domain_is_on(pd) != power_on) {
>>> +               for (i = 0; i < pd->num_clks; i++)
>>> +                       clk_enable(pd->clks[i]);
>>> +
>>> +               if (!power_on) {
>>> +                       /* FIXME: add code to save AXI_QOS */
>>> +
>>> +                       /* if powering down, idle request to NIU first */
>>> +                       rockchip_pmu_set_idle_request(pd, true);
>>> +               }
>>> +
>>> +               rockchip_do_pmu_set_power_domain(pd, power_on);
>>> +
>>> +               if (power_on) {
>>> +                       /* if powering up, leave idle mode */
>>> +                       rockchip_pmu_set_idle_request(pd, false);
>>> +
>>> +                       /* FIXME: add code to restore AXI_QOS */
>>> +               }
>>> +
>>> +               for (i = pd->num_clks - 1; i >= 0; i--)
>>> +                       clk_disable(pd->clks[i]);
>>> +       }
>>> +
>>> +       mutex_unlock(&pd->pmu->mutex);
>>> +       return 0;
>>> +}
>>> +
>>> +static int rockchip_pd_power_on(struct generic_pm_domain *domain)
>>> +{
>>> +       struct rockchip_pm_domain *pd = to_rockchip_pd(domain);
>>> +
>>> +       return rockchip_pd_power(pd, true);
>>> +}
>>> +
>>> +static int rockchip_pd_power_off(struct generic_pm_domain *domain)
>>> +{
>>> +       struct rockchip_pm_domain *pd = to_rockchip_pd(domain);
>>> +
>>> +       return rockchip_pd_power(pd, false);
>>> +}
>>> +
>>> +static int rockchip_pd_attach_dev(struct generic_pm_domain *genpd,
>>> +                                 struct device *dev)
>>> +{
>>> +       struct clk *clk;
>>> +       int i;
>>> +       int error;
>>> +
>>> +       dev_dbg(dev, "attaching to power domain '%s'\n", genpd->name);
>>> +
>>> +       error = pm_clk_create(dev);
>>> +       if (error) {
>>> +               dev_err(dev, "pm_clk_create failed %d\n", error);
>>> +               return error;
>>> +       }
>>> +
>>> +       i = 0;
>>> +       while ((clk = of_clk_get(dev->of_node, i++)) && !IS_ERR(clk)) {
>>
>> This loop adds all available device clocks to the PM clock list. I
>> wonder if this could be considered as a common thing and if so, we
>> might want to extend the pm_clk API with this.
>
>
> There are several reasons as follows:
>
>     Firstly, the clocks need be turned off to save power when
>     the system enter the suspend state. So we need to enumerate the clocks
>     in the dts. In order to power domain can turn on and off.
>
>     Secondly, the reset-circuit should reset be synchronous on rk3288, then
> sync revoked.
>     So we need to enable clocks of all devices.

I think you misinterpreted my previous answer. Instead of fetching all
clocks for a device and manually create the pm_clk list as you do
here, I was suggesting to make this a part of the pm clk API.

I would happily support such an API, but I can't tell what the other
maintainers think.

>
>>> +               dev_dbg(dev, "adding clock '%s' to list of PM clocks\n",
>>> +                       __clk_get_name(clk));
>>> +               error = pm_clk_add_clk(dev, clk);
>>> +               clk_put(clk);
>>> +               if (error) {
>>> +                       dev_err(dev, "pm_clk_add_clk failed %d\n",
>>> error);
>>> +                       pm_clk_destroy(dev);
>>> +                       return error;
>>> +               }
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +

[...]

>>> +
>>> +static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
>>> +                                     struct device_node *node)
>>> +{
>>> +       const struct rockchip_domain_info *pd_info;
>>> +       struct rockchip_pm_domain *pd;
>>> +       struct clk *clk;
>>> +       int clk_cnt;
>>> +       int i;
>>> +       u32 id;
>>> +       int error;
>>> +
>>> +       error = of_property_read_u32(node, "reg", &id);
>>> +       if (error) {
>>> +               dev_err(pmu->dev,
>>> +                       "%s: failed to retrieve domain id (reg): %d\n",
>>> +                       node->name, error);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       if (id >= pmu->info->num_domains) {
>>> +               dev_err(pmu->dev, "%s: invalid domain id %d\n",
>>> +                       node->name, id);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       pd_info = &pmu->info->domain_info[id];
>>> +       if (!pd_info) {
>>> +               dev_err(pmu->dev, "%s: undefined domain id %d\n",
>>> +                       node->name, id);
>>> +               return -EINVAL;
>>> +       }
>>> +
>>> +       clk_cnt = of_count_phandle_with_args(node, "clocks",
>>> "#clock-cells");
>>> +       pd = devm_kzalloc(pmu->dev,
>>> +                         sizeof(*pd) + clk_cnt * sizeof(pd->clks[0]),
>>> +                         GFP_KERNEL);
>>> +       if (!pd)
>>> +               return -ENOMEM;
>>> +
>>> +       pd->info = pd_info;
>>> +       pd->pmu = pmu;
>>> +
>>> +       for (i = 0; i < clk_cnt; i++) {
>>
>> This loop is similar to the one when creates the PM clock list in the
>> rockchip_pd_attach_dev().
>>
>> What's the reason you don't want to use pm clk for these clocks?
>>
>
> Ditto.

I don't follow. Can't you do the exact same thing via the pm clk API,
can you please elaborate!?

>
>>> +               clk = of_clk_get(node, i);
>>> +               if (IS_ERR(clk)) {
>>> +                       error = PTR_ERR(clk);
>>> +                       dev_err(pmu->dev,
>>> +                               "%s: failed to get clk %s (index %d):
>>> %d\n",
>>> +                               node->name, __clk_get_name(clk), i,
>>> error);
>>> +                       goto err_out;
>>> +               }
>>> +
>>> +               error = clk_prepare(clk);
>>> +               if (error) {
>>> +                       dev_err(pmu->dev,
>>> +                               "%s: failed to prepare clk %s (index %d):
>>> %d\n",
>>> +                               node->name, __clk_get_name(clk), i,
>>> error);
>>> +                       clk_put(clk);
>>> +                       goto err_out;
>>> +               }
>>> +
>>> +               pd->clks[pd->num_clks++] = clk;
>>> +
>>> +               dev_dbg(pmu->dev, "added clock '%s' to domain '%s'\n",
>>> +                       __clk_get_name(clk), node->name);
>>> +       }
>>> +
>>> +       error = rockchip_pd_power(pd, true);
>>> +       if (error) {
>>> +               dev_err(pmu->dev,
>>> +                       "failed to power on domain '%s': %d\n",
>>> +                       node->name, error);
>>> +               goto err_out;
>>> +       }
>>> +
>>> +       pd->genpd.name = node->name;
>>> +       pd->genpd.power_off = rockchip_pd_power_off;
>>> +       pd->genpd.power_on = rockchip_pd_power_on;
>>> +       pd->genpd.attach_dev = rockchip_pd_attach_dev;
>>> +       pd->genpd.detach_dev = rockchip_pd_detach_dev;
>>> +       pd->genpd.dev_ops.start = rockchip_pd_start_dev;
>>> +       pd->genpd.dev_ops.stop = rockchip_pd_stop_dev;
>>
>> Instead of assigning the ->stop|start() callbacks, which do
>> pm_clk_suspend|resume(), just set the genpd->flags to
>> GENPD_FLAG_PM_CLK, and genpd will fix this for you.
>>
> Ditto.

Again, I don't follow. Here is what goes on:

rockchip_pd_start_dev() calls pm_clk_resume() and
rockchip_pd_stop_dev() calls pm_clk_suspend().
Instead of assigning the below callbacks...

pd->genpd.dev_ops.start = rockchip_pd_start_dev;
pd->genpd.dev_ops.stop = rockchip_pd_stop_dev;

... just use GENPD_FLAG_PM_CLK and let genpd deal with the calls to
pm_clk_suspend|resume().
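
In other words, something like this should be sufficient (untested
sketch):

	pd->genpd.name = node->name;
	pd->genpd.power_off = rockchip_pd_power_off;
	pd->genpd.power_on = rockchip_pd_power_on;
	pd->genpd.attach_dev = rockchip_pd_attach_dev;
	pd->genpd.detach_dev = rockchip_pd_detach_dev;
	pd->genpd.flags = GENPD_FLAG_PM_CLK;
	pm_genpd_init(&pd->genpd, NULL, false);

When GENPD_FLAG_PM_CLK is set, pm_genpd_init() installs
pm_clk_suspend()/pm_clk_resume() based ->stop|start() callbacks for
you, so the driver doesn't need its own.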

>
>
>>> +       pm_genpd_init(&pd->genpd, NULL, false);
>>> +
>>> +       pmu->genpd_data.domains[id] = &pd->genpd;
>>> +       return 0;
>>> +
>>> +err_out:
>>> +       while (--i >= 0) {
>>> +               clk_unprepare(pd->clks[i]);
>>> +               clk_put(pd->clks[i]);
>>> +       }
>>> +       return error;
>>> +}
>>> +

[...]

Kind regards
Uffe

Thread overview: 25+ messages

2015-04-24  8:07 [PATCH v14 0/3] ARM: rk3288: Add PM Domain support Caesar Wang
2015-04-24  8:07 ` [PATCH v14 1/3] dt-bindings: add document of Rockchip power domain Caesar Wang
2015-05-28  9:02   ` Ulf Hansson
2015-04-24  8:07 ` [PATCH v14 2/3] power-domain: rockchip: add power domain driver Caesar Wang
2015-05-28 10:38   ` Ulf Hansson
2015-06-14  3:15     ` Caesar Wang
2015-06-25 15:33       ` Ulf Hansson [this message]
2015-06-29  8:16         ` Caesar Wang
2015-04-24  8:07 ` [PATCH v14 3/3] ARM: dts: add RK3288 power-domain node Caesar Wang
2015-04-25 18:47 ` [PATCH v14 0/3] ARM: rk3288: Add PM Domain support Heiko Stübner
2015-04-27 18:28   ` Kevin Hilman
2015-06-12  5:11     ` Caesar Wang
