From: Prarit Bhargava <prarit@redhat.com>
To: Stephen Boyd <sboyd@codeaurora.org>
Cc: Saravana Kannan <skannan@codeaurora.org>,
"Rafael J. Wysocki" <rjw@rjwysocki.net>,
linux-kernel@vger.kernel.org,
Viresh Kumar <viresh.kumar@linaro.org>,
Lenny Szubowicz <lszubowi@redhat.com>,
linux-pm@vger.kernel.org
Subject: Re: [PATCH] cpufreq, store_scaling_governor requires policy->rwsem to be held for duration of changing governors [v2]
Date: Fri, 01 Aug 2014 15:15:48 -0400 [thread overview]
Message-ID: <53DBE764.8050109@redhat.com> (raw)
In-Reply-To: <53DBCBE8.6010809@codeaurora.org>
On 08/01/2014 01:18 PM, Stephen Boyd wrote:
> On 08/01/14 03:27, Prarit Bhargava wrote:
>>
>> Can you send me the test and the trace of the deadlock? I'm not able to reproduce it with:
>>
>
> This was with conservative as the default, and switching to ondemand
>
> # cd /sys/devices/system/cpu/cpu2/cpufreq
> # ls
> affected_cpus scaling_available_governors
> conservative scaling_cur_freq
> cpuinfo_cur_freq scaling_driver
> cpuinfo_max_freq scaling_governor
> cpuinfo_min_freq scaling_max_freq
> cpuinfo_transition_latency scaling_min_freq
> related_cpus scaling_setspeed
> scaling_available_frequencies stats
> # cat conservative/down_threshold
> 20
> # echo ondemand > scaling_governor
Thanks Stephen,

There's obviously a difference in our .configs. I have a global conservative
directory, i.e. /sys/devices/system/cpu/cpufreq/conservative, instead of a
per-cpu governor directory.

What are your .config options for CPUFREQ? Mine are:
#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=m
CONFIG_CPU_FREQ_STAT_DETAILS=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
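A quick way to pull the cpufreq options out of a tree for comparison (a sketch; the .config path is an assumption -- point it at your build directory, or use "zcat /proc/config.gz" instead when CONFIG_IKCONFIG_PROC is enabled):

```shell
# List every CPU_FREQ-related option, set or unset, in sorted order
# so two trees can be diffed side by side. Assumes .config is in the
# current directory.
if [ -f .config ]; then
    grep -E '^(# )?CONFIG_CPU_FREQ' .config | sort
fi
```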
Is there some other config option I have to set?
P.
>
> ======================================================
> [ INFO: possible circular locking dependency detected ]
> 3.16.0-rc3-00039-ge1e38f124d87 #47 Not tainted
> -------------------------------------------------------
> sh/75 is trying to acquire lock:
> (s_active#9){++++..}, at: [<c0358a94>] kernfs_remove_by_name_ns+0x3c/0x84
>
> but task is already holding lock:
> (&policy->rwsem){+++++.}, at: [<c05ab1f0>] store+0x68/0xb8
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (&policy->rwsem){+++++.}:
> [<c0359234>] kernfs_fop_open+0x138/0x298
> [<c02fa3f4>] do_dentry_open.isra.12+0x1b0/0x2f0
> [<c02fa604>] finish_open+0x20/0x38
> [<c0308d34>] do_last.isra.37+0x5ac/0xb68
> [<c03093a4>] path_openat+0xb4/0x5d8
> [<c0309bcc>] do_filp_open+0x2c/0x80
> [<c02fb558>] do_sys_open+0x10c/0x1c8
> [<c020f0a0>] ret_fast_syscall+0x0/0x48
>
> -> #0 (s_active#9){++++..}:
> [<c0357d18>] __kernfs_remove+0x250/0x300
> [<c0358a94>] kernfs_remove_by_name_ns+0x3c/0x84
> [<c035aa78>] remove_files+0x34/0x78
> [<c035aee0>] sysfs_remove_group+0x40/0x98
> [<c05b0560>] cpufreq_governor_dbs+0x4c0/0x6ec
> [<c05abebc>] __cpufreq_governor+0x118/0x200
> [<c05ac0fc>] cpufreq_set_policy+0x158/0x2ac
> [<c05ad5e4>] store_scaling_governor+0x6c/0x94
> [<c05ab210>] store+0x88/0xb8
> [<c035a00c>] sysfs_kf_write+0x4c/0x50
> [<c03594d4>] kernfs_fop_write+0xc0/0x180
> [<c02fc5c8>] vfs_write+0xa0/0x1a8
> [<c02fc9d4>] SyS_write+0x40/0x8c
> [<c020f0a0>] ret_fast_syscall+0x0/0x48
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> lock(&policy->rwsem);
> lock(s_active#9);
> lock(&policy->rwsem);
> lock(s_active#9);
>
> *** DEADLOCK ***
>
> 6 locks held by sh/75:
> #0: (sb_writers#4){.+.+..}, at: [<c02fc6a8>] vfs_write+0x180/0x1a8
> #1: (&of->mutex){+.+...}, at: [<c0359498>] kernfs_fop_write+0x84/0x180
> #2: (s_active#10){.+.+..}, at: [<c03594a0>] kernfs_fop_write+0x8c/0x180
> #3: (cpu_hotplug.lock){++++++}, at: [<c0221ef8>] get_online_cpus+0x38/0x9c
> #4: (cpufreq_rwsem){.+.+.+}, at: [<c05ab1d8>] store+0x50/0xb8
> #5: (&policy->rwsem){+++++.}, at: [<c05ab1f0>] store+0x68/0xb8
>
> stack backtrace:
> CPU: 0 PID: 75 Comm: sh Not tainted 3.16.0-rc3-00039-ge1e38f124d87 #47
> [<c0214de8>] (unwind_backtrace) from [<c02123f8>] (show_stack+0x10/0x14)
> [<c02123f8>] (show_stack) from [<c0709e5c>] (dump_stack+0x70/0xbc)
> [<c0709e5c>] (dump_stack) from [<c070722c>] (print_circular_bug+0x280/0x2d4)
> [<c070722c>] (print_circular_bug) from [<c02629cc>] (__lock_acquire+0x18d0/0x1abc)
> [<c02629cc>] (__lock_acquire) from [<c026310c>] (lock_acquire+0x9c/0x138)
> [<c026310c>] (lock_acquire) from [<c0357d18>] (__kernfs_remove+0x250/0x300)
> [<c0357d18>] (__kernfs_remove) from [<c0358a94>] (kernfs_remove_by_name_ns+0x3c/0x84)
> [<c0358a94>] (kernfs_remove_by_name_ns) from [<c035aa78>] (remove_files+0x34/0x78)
> [<c035aa78>] (remove_files) from [<c035aee0>] (sysfs_remove_group+0x40/0x98)
> [<c035aee0>] (sysfs_remove_group) from [<c05b0560>] (cpufreq_governor_dbs+0x4c0/0x6ec)
> [<c05b0560>] (cpufreq_governor_dbs) from [<c05abebc>] (__cpufreq_governor+0x118/0x200)
> [<c05abebc>] (__cpufreq_governor) from [<c05ac0fc>] (cpufreq_set_policy+0x158/0x2ac)
> [<c05ac0fc>] (cpufreq_set_policy) from [<c05ad5e4>] (store_scaling_governor+0x6c/0x94)
> [<c05ad5e4>] (store_scaling_governor) from [<c05ab210>] (store+0x88/0xb8)
> [<c05ab210>] (store) from [<c035a00c>] (sysfs_kf_write+0x4c/0x50)
> [<c035a00c>] (sysfs_kf_write) from [<c03594d4>] (kernfs_fop_write+0xc0/0x180)
> [<c03594d4>] (kernfs_fop_write) from [<c02fc5c8>] (vfs_write+0xa0/0x1a8)
> [<c02fc5c8>] (vfs_write) from [<c02fc9d4>] (SyS_write+0x40/0x8c)
> [<c02fc9d4>] (SyS_write) from [<c020f0a0>] (ret_fast_syscall+0x0/0x48)
>
>