From: "Rafael J. Wysocki" <rafael@kernel.org>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>,
	Schspa Shi <schspa@gmail.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux PM <linux-pm@vger.kernel.org>
Subject: Re: [PATCH v3] cpufreq: fix race on cpufreq online
Date: Tue, 24 May 2022 13:48:08 +0200
Message-ID: <CAJZ5v0jRtYcscWjUras9RC9LOTHf=qu1SPBhnC=52Gb3KKAQNw@mail.gmail.com>
In-Reply-To: <20220524112917.apcvvvblksg7jdu4@vireshk-i7>

On Tue, May 24, 2022 at 1:29 PM Viresh Kumar <viresh.kumar@linaro.org> wrote:
>
> On 24-05-22, 13:22, Rafael J. Wysocki wrote:
> > On Tue, May 24, 2022 at 1:15 PM Viresh Kumar <viresh.kumar@linaro.org> wrote:
> > >
> > > On 13-05-22, 09:57, Viresh Kumar wrote:
> > > > On 12-05-22, 12:49, Rafael J. Wysocki wrote:
> > > > > > > Moreover, I'm not sure why the locking dance in store() is necessary.
> > > > > >
> > > > > > commit fdd320da84c6 ("cpufreq: Lock CPU online/offline in cpufreq_register_driver()")
> > > > >
> > > > > I get that, but I'm wondering if locking CPU hotplug from store() is
> > > > > needed at all.  I mean, if we are in store(), we are holding an active
> > > > > reference to the policy kobject, so the policy cannot go away until we
> > > > > are done anyway.  Thus it should be sufficient to use the policy rwsem
> > > > > for synchronization.
> > > >
> > > > I think after the current patchset is applied and we have the inactive
> > > > policy check in store(), we won't require the dance after all.
> > >
> > > I was writing a patch for this and then thought I'd look at the mails
> > > from around the time you sent the patch, and I found the reason why we
> > > need the locking dance :)
> > >
> > > https://lore.kernel.org/lkml/20150729091136.GN7557@n2100.arm.linux.org.uk/
>
> Actually no, this is for the lock in cpufreq_driver_register().
>
> > Well, again, if we are in store(), we are holding a reference to the
> > policy kobject, so this is not initialization time.
>
> This is the commit which made the change.
>
> commit 4f750c930822 ("cpufreq: Synchronize the cpufreq store_*() routines with CPU hotplug")

So this was done before the entire CPU hotplug rework and it was
useful at that time.

The current code always runs cpufreq_set_policy() under policy->rwsem
and governors are stopped under policy->rwsem, so this particular race
cannot happen AFAICS.
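
IOW, both sides serialize on the same lock, along the lines of this
simplified sketch (not the verbatim kernel code):

	/* policy updates, e.g. from cpufreq_update_policy() */
	down_write(&policy->rwsem);
	cpufreq_set_policy(policy, new_gov, new_pol);	/* may stop/start the governor */
	up_write(&policy->rwsem);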

Locking CPU hotplug prevents CPUs from going away while store() is
running, but in order to run store(), the caller must hold an active
reference to the policy kobject.  That prevents the policy from being
freed and so policy->rwsem can be acquired.  After policy->rwsem has
been acquired, policy->cpus can be checked to determine whether or not
there are any online CPUs for the given policy (there may be none),
because policy->cpus is only manipulated under policy->rwsem.
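
That is, store() can look roughly like this (a sketch of the sysfs
wrapper with the inactive-policy check; the details in the tree may
differ):

	static ssize_t store(struct kobject *kobj, struct attribute *attr,
			     const char *buf, size_t count)
	{
		struct cpufreq_policy *policy = to_policy(kobj);
		struct freq_attr *fattr = to_attr(attr);
		ssize_t ret = -EBUSY;

		if (!fattr->store)
			return -EIO;

		/*
		 * The active reference on the policy kobject prevents the
		 * policy from being freed here, so taking the rwsem is safe
		 * without locking CPU hotplug.
		 */
		down_write(&policy->rwsem);
		if (likely(!policy_is_inactive(policy)))
			ret = fattr->store(policy, buf, count);
		up_write(&policy->rwsem);

		return ret;
	}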

If a CPU that belongs to the given policy is going away,
cpufreq_offline() has to remove it from policy->cpus under
policy->rwsem, so either it has to wait for store() to release
policy->rwsem, or store() will acquire policy->rwsem after it and will
find that policy->cpus is empty.
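
For completeness, the offline side does its part under the same lock
(again a simplified sketch):

	/* cpufreq_offline(), simplified */
	down_write(&policy->rwsem);
	if (has_target())
		cpufreq_stop_governor(policy);
	cpumask_clear_cpu(cpu, policy->cpus);
	/* if policy->cpus is now empty, the policy becomes inactive */
	up_write(&policy->rwsem);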


Thread overview: 33+ messages
2022-04-20 19:15 [PATCH] cpufreq: fix race on cpufreq online Schspa Shi
2022-04-22 14:38 ` Rafael J. Wysocki
2022-04-22 15:10   ` Schspa Shi
2022-04-22 15:59     ` Rafael J. Wysocki
2022-05-09  3:57 ` Viresh Kumar
2022-05-09 15:06   ` Schspa Shi
2022-05-10  3:52     ` Viresh Kumar
2022-05-10 15:28       ` [PATCH v2] " Schspa Shi
2022-05-10 15:34         ` Rafael J. Wysocki
2022-05-10 15:43           ` Schspa Shi
2022-05-10 15:42       ` [PATCH v3] " Schspa Shi
2022-05-11  4:35         ` Viresh Kumar
2022-05-11  8:10           ` Schspa Shi
2022-05-11 12:21             ` Viresh Kumar
2022-05-11 12:59               ` Rafael J. Wysocki
2022-05-11 13:19                 ` Rafael J. Wysocki
2022-05-11 13:42                   ` Rafael J. Wysocki
2022-05-11 13:42                   ` Schspa Shi
2022-05-11 13:50                     ` Rafael J. Wysocki
2022-05-12  6:56                   ` Viresh Kumar
2022-05-12 10:49                     ` Rafael J. Wysocki
2022-05-13  4:27                       ` Viresh Kumar
2022-05-24 11:14                         ` Viresh Kumar
2022-05-24 11:22                           ` Rafael J. Wysocki
2022-05-24 11:29                             ` Viresh Kumar
2022-05-24 11:48                               ` Rafael J. Wysocki [this message]
2022-05-24 11:53                                 ` Rafael J. Wysocki
2022-05-25  5:32                                   ` Viresh Kumar
2022-05-12  5:56                 ` Viresh Kumar
2022-05-11 13:12               ` Schspa Shi
2022-05-11 14:15         ` Rafael J. Wysocki
2022-05-12  5:51           ` Schspa Shi
2022-05-12 10:37             ` Rafael J. Wysocki
