From: Heiner Kallweit <hkallweit1@gmail.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Linux PM <linux-pm@vger.kernel.org>
Subject: Re: cpufreq-related deadlock warning on recent linux-next
Date: Sat, 13 Jul 2019 13:24:28 +0200
Message-ID: <edd74314-869c-e4e3-76bf-35962165153e@gmail.com>
In-Reply-To: <20190711022813.zfroyk3drfarvpwj@vireshk-i7>

On 11.07.2019 04:28, Viresh Kumar wrote:
> On 10-07-19, 22:53, Heiner Kallweit wrote:
>> I just got the following when manually suspending the system with
>> "systemctl suspend" and waking it up with the power button.
>>
>>
>> [  380.203172] Restarting tasks ... done.
>>
>> [  380.211714] ============================================
>> [  380.211719] WARNING: possible recursive locking detected
>> [  380.211726] 5.2.0-rc7-next-20190704+ #2 Not tainted
>> [  380.211731] --------------------------------------------
>> [  380.211737] systemd-sleep/2367 is trying to acquire lock:
>> [  380.211745] 0000000043cf69ce (&policy->rwsem){+.+.}, at: refresh_frequency_limits+0x36/0x90
>> [  380.211761]
>>                but task is already holding lock:
>> [  380.211767] 0000000043cf69ce (&policy->rwsem){+.+.}, at: cpufreq_cpu_acquire+0x25/0x50
>> [  380.211777]
>>                other info that might help us debug this:
>> [  380.211783]  Possible unsafe locking scenario:
>>
>> [  380.211789]        CPU0
>> [  380.211792]        ----
>> [  380.211795]   lock(&policy->rwsem);
>> [  380.211800]   lock(&policy->rwsem);
>> [  380.211805]
>>                 *** DEADLOCK ***
>>
>> [  380.211811]  May be due to missing lock nesting notation
>>
>> [  380.211818] 8 locks held by systemd-sleep/2367:
>> [  380.211823]  #0: 000000000e253e21 (sb_writers#5){.+.+}, at: vfs_write+0x16b/0x1d0
>> [  380.211835]  #1: 00000000d0140159 (&of->mutex){+.+.}, at: kernfs_fop_write+0xfd/0x1c0
>> [  380.211846]  #2: 00000000383c283a (kn->count#155){.+.+}, at: kernfs_fop_write+0x105/0x1c0
>> [  380.211857]  #3: 000000007e6f342b (system_transition_mutex){+.+.}, at: pm_suspend.cold+0xd0/0x36a
>> [  380.211869]  #4: 000000002ee59360 ((pm_chain_head).rwsem){++++}, at: __blocking_notifier_call_chain+0x46/0x80
>> [  380.211883]  #5: 000000003972eb2e (&tz->lock){+.+.}, at: step_wise_throttle+0x3f/0x90
>> [  380.211893]  #6: 0000000007747f02 (&cdev->lock){+.+.}, at: thermal_cdev_update+0x1e/0x16c
>> [  380.211904]  #7: 0000000043cf69ce (&policy->rwsem){+.+.}, at: cpufreq_cpu_acquire+0x25/0x50
> 
> This was already fixed in linux-next a few days ago. Can you try the
> latest snapshot again?
> 
The linux-next snapshot from Jul 12th is fine.

Thanks, Heiner
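
For reference, the pattern lockdep flags above is a single task taking
the same non-recursive rw_semaphore twice on one call path:
cpufreq_cpu_acquire() already holds policy->rwsem when the throttling
path reaches refresh_frequency_limits(), which tries to take it again.
A minimal kernel-style sketch of that shape (everything below except
those two function names is hypothetical, not the actual cpufreq code):

  #include <linux/rwsem.h>

  struct toy_policy {
  	struct rw_semaphore rwsem;
  };

  /* Stands in for the refresh_frequency_limits() side of the splat. */
  static void toy_inner(struct toy_policy *p)
  {
  	/* Second acquisition by the same task: blocks forever, since
  	 * kernel rw_semaphores are not recursive. */
  	down_write(&p->rwsem);
  	up_write(&p->rwsem);
  }

  /* Stands in for the cpufreq_cpu_acquire() side of the splat. */
  static void toy_outer(struct toy_policy *p)
  {
  	down_write(&p->rwsem);	/* first acquisition */
  	toy_inner(p);		/* re-enters on the same lock */
  	up_write(&p->rwsem);	/* never reached */
  }

  static void toy_demo(void)
  {
  	struct toy_policy p;

  	init_rwsem(&p.rwsem);
  	toy_outer(&p);	/* lockdep: possible recursive locking */
  }

Typical remedies for this shape are to give the inner helper a *_locked
variant that assumes the caller already holds the rwsem, or to drop the
lock before calling down; which approach the actual fix took is not
shown in this thread.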

