linux-kernel.vger.kernel.org archive mirror
From: Ulf Hansson <ulf.hansson@linaro.org>
To: Stephen Boyd <swboyd@chromium.org>
Cc: "Rafael J . Wysocki" <rjw@rjwysocki.net>,
	Kevin Hilman <khilman@kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Len Brown <len.brown@intel.com>, Pavel Machek <pavel@ucw.cz>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Linux PM <linux-pm@vger.kernel.org>
Subject: Re: [PATCH] PM: domains: Shrink locking area of the gpd_list_lock
Date: Wed, 23 Jun 2021 11:55:24 +0200	[thread overview]
Message-ID: <CAPDyKFoxg0OhHUONm4dsOTyJperfM7bkkHpK0ikqP8u9mgi97w@mail.gmail.com> (raw)
In-Reply-To: <CAE-0n53=AuYcBSTKkvDmNHpLMq7j4yTeMh5j80uN5dobqvC5ag@mail.gmail.com>

On Wed, 23 Jun 2021 at 10:31, Stephen Boyd <swboyd@chromium.org> wrote:
>
> Quoting Ulf Hansson (2021-06-22 09:27:09)
> > On Mon, 21 Jun 2021 at 22:10, Stephen Boyd <swboyd@chromium.org> wrote:
> > > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> > > index b6a782c31613..18063046961c 100644
> > > --- a/drivers/base/power/domain.c
> > > +++ b/drivers/base/power/domain.c
> > > @@ -1984,8 +1984,8 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
> > >
> > >         mutex_lock(&gpd_list_lock);
> > >         list_add(&genpd->gpd_list_node, &gpd_list);
> > > -       genpd_debug_add(genpd);
> > >         mutex_unlock(&gpd_list_lock);
> > > +       genpd_debug_add(genpd);
> > >
> > >         return 0;
> > >  }
> > > @@ -2162,9 +2162,11 @@ static int genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
> > >         cp->xlate = xlate;
> > >         fwnode_dev_initialized(&np->fwnode, true);
> > >
> > > +       mutex_lock(&gpd_list_lock);
> >
> > Looking at the existing code, the $subject patch makes the behavior
> > consistent and fixes the problem that the two locks must always be
> > taken/released in the same order.
> >
> > However, as I have been looking at this before (but never got to the
> > point of sending a patch), I am actually thinking that it would be
> > better to decouple the two locks, instead of further combining them.
> >
> > In other words, we shouldn't lock/unlock the &gpd_list_lock here in
> > this function. Of course, that also means we need to fixup the code in
> > of_genpd_del_provider() accordingly.
>
> Yes, I was wondering why this list lock was used here at all. It seems to
> be a substitute for calling genpd_lock()? I opted to just push the list

The genpd_lock should be used to protect some of the data in the genpd
struct, such as genpd->provider and genpd->has_provider, for example.

Clearly, some of the data in the genpd struct is instead protected by
the gpd_list_lock, which is suboptimal.

> lock as far down as possible to fix the problem, which is holding it
> over the calls into OPP.

Yes, we don't want that.

>
> If I've read the code correctly it serves no purpose to grab the
> gpd_list_lock here in genpd_add_provider() because we grab the
> of_genpd_mutex and that is protecting the of_genpd_providers list
> everywhere else. Is that right? Put another way, this hunk of the patch
> can be dropped; then your concern will be addressed and there isn't
> anything more to do.

It can certainly be dropped from the $subject patch; please re-spin to
update that.

However, there are additional changes that deserve to be done to
improve the behaviour around the locks. More precisely, the
&gpd_list_lock and the &of_genpd_mutex should be completely decoupled,
but there are some other related things as well.

It's probably easier if I post a patch on top of yours to further
improve the behavior. I would appreciate your help with the test/review
then.

>
> >
> >
> > >         mutex_lock(&of_genpd_mutex);
> > >         list_add(&cp->link, &of_genpd_providers);
> > >         mutex_unlock(&of_genpd_mutex);
> > > +       mutex_unlock(&gpd_list_lock);
> > >         pr_debug("Added domain provider from %pOF\n", np);
> > >
> > >         return 0;
> > > @@ -2314,8 +2314,6 @@ int of_genpd_add_provider_onecell(struct device_node *np,
> > >                 }
> > >         }
> > >
> > > -       mutex_unlock(&gpd_list_lock);
> > > -
> > >         return ret;
> > >  }
> > >  EXPORT_SYMBOL_GPL(of_genpd_add_provider_onecell);
> >
> > I will continue to have a look at this and provide some more comments
> > asap, but overall the change is a step in the right direction.
> >
> > Possibly, we may even consider applying it as is and work on the
> > things I pointed out above, as improvements on top. Let's see, give me
> > a day or so.
> >
>
> Ok sure.

Kind regards
Uffe


Thread overview: 6+ messages
2021-06-21 20:10 [PATCH] PM: domains: Shrink locking area of the gpd_list_lock Stephen Boyd
2021-06-22 13:17 ` Rafael J. Wysocki
2021-06-22 16:27 ` Ulf Hansson
2021-06-23  8:31   ` Stephen Boyd
2021-06-23  9:55     ` Ulf Hansson [this message]
2021-06-24 20:33       ` Stephen Boyd
