* [PATCH V4 0/7] cpufreq: Locking fixes and cleanups
From: Viresh Kumar @ 2016-02-09  3:46 UTC (permalink / raw)
  To: Rafael Wysocki, juri.lelli
  Cc: linaro-kernel, linux-pm, shilpa.bhat, linux-kernel, Viresh Kumar

Hi Rafael,

These are the rest of the patches that fix some more locking issues with
policy->rwsem and do some minor optimizations/cleanups.

These were part of the 13-patch series sent earlier. The last patch is
new and does what I suggested for one of your commits.

V3->V4:
- Reordered all the patches and created a new series.
- s/global/common for common tunables
- One new patch

@Juri/Shilpa: I have added your Tested-by to the first 6 patches;
please let me know if you don't want it added.

Viresh Kumar (7):
  cpufreq: Merge cpufreq_offline_prepare/finish routines
  cpufreq: Call __cpufreq_governor() with policy->rwsem held
  cpufreq: Remove cpufreq_governor_lock
  cpufreq: governor: Move common sysfs tunables to cpufreq_governor.c
  cpufreq: governor: No need to manage state machine now
  cpufreq: conservative: Update sample_delay_ns immediately
  cpufreq: ondemand: Rearrange od_dbs_timer() to avoid updating delay

 drivers/cpufreq/cpufreq.c              |  93 +++++++++---------
 drivers/cpufreq/cpufreq_conservative.c |  80 +++-------------
 drivers/cpufreq/cpufreq_governor.c     | 170 ++++++++++++++++++++++++++++-----
 drivers/cpufreq/cpufreq_governor.h     |  16 +++-
 drivers/cpufreq/cpufreq_ondemand.c     | 153 +++++------------------------
 5 files changed, 240 insertions(+), 272 deletions(-)

-- 
2.7.1.370.gb2aa7f8


* [PATCH V4 1/7] cpufreq: Merge cpufreq_offline_prepare/finish routines
From: Viresh Kumar @ 2016-02-09  3:46 UTC (permalink / raw)
  To: Rafael Wysocki, juri.lelli
  Cc: linaro-kernel, linux-pm, shilpa.bhat, linux-kernel, Viresh Kumar

The offline routine was earlier separated into two halves by commit
1aee40ac9c86 ("cpufreq: Invoke __cpufreq_remove_dev_finish() after
releasing cpu_hotplug.lock").

The reason cited was a race between accessing the policy's sysfs files
and the policy kobject's cleanup.

That race isn't possible anymore, as we don't remove the policy and its
kobject completely on hotplug, but do that from the ->remove() callback
of the subsys framework.

These two routines can be merged back now.

This is a preparatory step for the next patch, which will enforce
holding policy->rwsem around the STOP/EXIT sequence of
__cpufreq_governor() calls.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
 drivers/cpufreq/cpufreq.c | 36 ++++++++++--------------------------
 1 file changed, 10 insertions(+), 26 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 9c62bf35b9dc..863ac26c4ecf 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1361,9 +1361,10 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
 	return ret;
 }
 
-static void cpufreq_offline_prepare(unsigned int cpu)
+static void cpufreq_offline(unsigned int cpu)
 {
 	struct cpufreq_policy *policy;
+	int ret;
 
 	pr_debug("%s: unregistering CPU %u\n", __func__, cpu);
 
@@ -1374,7 +1375,7 @@ static void cpufreq_offline_prepare(unsigned int cpu)
 	}
 
 	if (has_target()) {
-		int ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
+		ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
 		if (ret)
 			pr_err("%s: Failed to stop governor\n", __func__);
 	}
@@ -1397,34 +1398,23 @@ static void cpufreq_offline_prepare(unsigned int cpu)
 	/* Start governor again for active policy */
 	if (!policy_is_inactive(policy)) {
 		if (has_target()) {
-			int ret = __cpufreq_governor(policy, CPUFREQ_GOV_START);
+			ret = __cpufreq_governor(policy, CPUFREQ_GOV_START);
 			if (!ret)
 				ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
 
 			if (ret)
 				pr_err("%s: Failed to start governor\n", __func__);
 		}
-	} else if (cpufreq_driver->stop_cpu) {
-		cpufreq_driver->stop_cpu(policy);
-	}
-}
 
-static void cpufreq_offline_finish(unsigned int cpu)
-{
-	struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
-
-	if (!policy) {
-		pr_debug("%s: No cpu_data found\n", __func__);
 		return;
 	}
 
-	/* Only proceed for inactive policies */
-	if (!policy_is_inactive(policy))
-		return;
+	if (cpufreq_driver->stop_cpu)
+		cpufreq_driver->stop_cpu(policy);
 
 	/* If cpu is last user of policy, free policy */
 	if (has_target()) {
-		int ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
+		ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
 		if (ret)
 			pr_err("%s: Failed to exit governor\n", __func__);
 	}
@@ -1453,10 +1443,8 @@ static void cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
 	if (!policy)
 		return;
 
-	if (cpu_online(cpu)) {
-		cpufreq_offline_prepare(cpu);
-		cpufreq_offline_finish(cpu);
-	}
+	if (cpu_online(cpu))
+		cpufreq_offline(cpu);
 
 	cpumask_clear_cpu(cpu, policy->real_cpus);
 	remove_cpu_dev_symlink(policy, cpu);
@@ -2304,11 +2292,7 @@ static int cpufreq_cpu_callback(struct notifier_block *nfb,
 		break;
 
 	case CPU_DOWN_PREPARE:
-		cpufreq_offline_prepare(cpu);
-		break;
-
-	case CPU_POST_DEAD:
-		cpufreq_offline_finish(cpu);
+		cpufreq_offline(cpu);
 		break;
 
 	case CPU_DOWN_FAILED:
-- 
2.7.1.370.gb2aa7f8


* [PATCH V4 2/7] cpufreq: Call __cpufreq_governor() with policy->rwsem held
From: Viresh Kumar @ 2016-02-09  3:46 UTC (permalink / raw)
  To: Rafael Wysocki, juri.lelli
  Cc: linaro-kernel, linux-pm, shilpa.bhat, linux-kernel, Viresh Kumar

Not all parts of the core code call __cpufreq_governor() with
policy->rwsem held; some follow that rule, whereas others don't.

Enforcing it will also enable us to remove cpufreq_governor_lock, which
is used today because we can't guarantee that __cpufreq_governor() isn't
executed in parallel.

We should also ensure that the lock is held across state changes to the
governors.

For example, while adding a CPU to a policy on the cpu-online path, we
need to stop the governor, change policy->cpus, start the governor and
then refresh its limits. The complete sequence must be guaranteed to
execute without any concurrent races, and that can be achieved by
holding policy->rwsem around these use cases.

Also note that cpufreq_driver->stop_cpu() and ->exit() can now get
called with policy->rwsem held. That shouldn't have any side effects,
though.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
 drivers/cpufreq/cpufreq.c | 49 +++++++++++++++++++++++++++++++----------------
 1 file changed, 33 insertions(+), 16 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 863ac26c4ecf..51fb47cd38a0 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1048,30 +1048,29 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, unsigned int cp
 	if (cpumask_test_cpu(cpu, policy->cpus))
 		return 0;
 
+	down_write(&policy->rwsem);
 	if (has_target()) {
 		ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
 		if (ret) {
 			pr_err("%s: Failed to stop governor\n", __func__);
-			return ret;
+			goto unlock;
 		}
 	}
 
-	down_write(&policy->rwsem);
 	cpumask_set_cpu(cpu, policy->cpus);
-	up_write(&policy->rwsem);
 
 	if (has_target()) {
 		ret = __cpufreq_governor(policy, CPUFREQ_GOV_START);
 		if (!ret)
 			ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
 
-		if (ret) {
+		if (ret)
 			pr_err("%s: Failed to start governor\n", __func__);
-			return ret;
-		}
 	}
 
-	return 0;
+unlock:
+	up_write(&policy->rwsem);
+	return ret;
 }
 
 static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
@@ -1374,13 +1373,13 @@ static void cpufreq_offline(unsigned int cpu)
 		return;
 	}
 
+	down_write(&policy->rwsem);
 	if (has_target()) {
 		ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
 		if (ret)
 			pr_err("%s: Failed to stop governor\n", __func__);
 	}
 
-	down_write(&policy->rwsem);
 	cpumask_clear_cpu(cpu, policy->cpus);
 
 	if (policy_is_inactive(policy)) {
@@ -1393,7 +1392,6 @@ static void cpufreq_offline(unsigned int cpu)
 		/* Nominate new CPU */
 		policy->cpu = cpumask_any(policy->cpus);
 	}
-	up_write(&policy->rwsem);
 
 	/* Start governor again for active policy */
 	if (!policy_is_inactive(policy)) {
@@ -1406,7 +1404,7 @@ static void cpufreq_offline(unsigned int cpu)
 				pr_err("%s: Failed to start governor\n", __func__);
 		}
 
-		return;
+		goto unlock;
 	}
 
 	if (cpufreq_driver->stop_cpu)
@@ -1428,6 +1426,9 @@ static void cpufreq_offline(unsigned int cpu)
 		cpufreq_driver->exit(policy);
 		policy->freq_table = NULL;
 	}
+
+unlock:
+	up_write(&policy->rwsem);
 }
 
 /**
@@ -1624,6 +1625,7 @@ EXPORT_SYMBOL(cpufreq_generic_suspend);
 void cpufreq_suspend(void)
 {
 	struct cpufreq_policy *policy;
+	int ret;
 
 	if (!cpufreq_driver)
 		return;
@@ -1634,7 +1636,11 @@ void cpufreq_suspend(void)
 	pr_debug("%s: Suspending Governors\n", __func__);
 
 	for_each_active_policy(policy) {
-		if (__cpufreq_governor(policy, CPUFREQ_GOV_STOP))
+		down_write(&policy->rwsem);
+		ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
+		up_write(&policy->rwsem);
+
+		if (ret)
 			pr_err("%s: Failed to stop governor for policy: %p\n",
 				__func__, policy);
 		else if (cpufreq_driver->suspend
@@ -1656,6 +1662,7 @@ void cpufreq_suspend(void)
 void cpufreq_resume(void)
 {
 	struct cpufreq_policy *policy;
+	int ret;
 
 	if (!cpufreq_driver)
 		return;
@@ -1668,13 +1675,20 @@ void cpufreq_resume(void)
 	pr_debug("%s: Resuming Governors\n", __func__);
 
 	for_each_active_policy(policy) {
-		if (cpufreq_driver->resume && cpufreq_driver->resume(policy))
+		if (cpufreq_driver->resume && cpufreq_driver->resume(policy)) {
 			pr_err("%s: Failed to resume driver: %p\n", __func__,
 				policy);
-		else if (__cpufreq_governor(policy, CPUFREQ_GOV_START)
-		    || __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS))
-			pr_err("%s: Failed to start governor for policy: %p\n",
-				__func__, policy);
+		} else {
+			down_write(&policy->rwsem);
+			ret = __cpufreq_governor(policy, CPUFREQ_GOV_START);
+			if (!ret)
+				__cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
+			up_write(&policy->rwsem);
+
+			if (ret)
+				pr_err("%s: Failed to start governor for policy: %p\n",
+				       __func__, policy);
+		}
 	}
 
 	/*
@@ -2325,8 +2339,11 @@ static int cpufreq_boost_set_sw(int state)
 				       __func__);
 				break;
 			}
+
+			down_write(&policy->rwsem);
 			policy->user_policy.max = policy->max;
 			__cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
+			up_write(&policy->rwsem);
 		}
 	}
 
-- 
2.7.1.370.gb2aa7f8


* [PATCH V4 3/7] cpufreq: Remove cpufreq_governor_lock
From: Viresh Kumar @ 2016-02-09  3:46 UTC (permalink / raw)
  To: Rafael Wysocki, juri.lelli
  Cc: linaro-kernel, linux-pm, shilpa.bhat, linux-kernel, Viresh Kumar

We used to drop policy->rwsem just before calling __cpufreq_governor()
in some cases, and so it was possible for __cpufreq_governor() to run
concurrently from separate threads.

In order to guarantee valid state transitions for governors,
'governor_enabled' had to be protected by some locking, and we created
cpufreq_governor_lock for that.

But now __cpufreq_governor() is always called with policy->rwsem held,
and so 'governor_enabled' is protected against races even without
cpufreq_governor_lock.

Get rid of the extra lock now.
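For reference, the state check that cpufreq_governor_lock used to guard
can be modeled stand-alone like this (illustrative names, not the
kernel code): a governor must not be started twice, nor stopped or
given new limits while it isn't running. With policy->rwsem write-held
by every caller, the bare test-and-branch is already race-free:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-alone model of the transition check kept in
 * __cpufreq_governor(); names are illustrative. */
enum gov_event { GOV_START, GOV_STOP, GOV_LIMITS };

static bool gov_transition_valid(bool enabled, enum gov_event ev)
{
	if (enabled && ev == GOV_START)
		return false;	/* already started */
	if (!enabled && (ev == GOV_STOP || ev == GOV_LIMITS))
		return false;	/* nothing to stop or update */
	return true;
}
```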

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
 drivers/cpufreq/cpufreq.c          | 8 --------
 drivers/cpufreq/cpufreq_governor.h | 1 -
 2 files changed, 9 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 51fb47cd38a0..745da90d7b38 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -146,8 +146,6 @@ void cpufreq_update_util(u64 time, unsigned long util, unsigned long max)
 	rcu_read_unlock();
 }
 
-DEFINE_MUTEX(cpufreq_governor_lock);
-
 /* Flag to suspend/resume CPUFreq governors */
 static bool cpufreq_suspended;
 
@@ -2014,11 +2012,9 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
 
 	pr_debug("%s: for CPU %u, event %u\n", __func__, policy->cpu, event);
 
-	mutex_lock(&cpufreq_governor_lock);
 	if ((policy->governor_enabled && event == CPUFREQ_GOV_START)
 	    || (!policy->governor_enabled
 	    && (event == CPUFREQ_GOV_LIMITS || event == CPUFREQ_GOV_STOP))) {
-		mutex_unlock(&cpufreq_governor_lock);
 		return -EBUSY;
 	}
 
@@ -2027,8 +2023,6 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
 	else if (event == CPUFREQ_GOV_START)
 		policy->governor_enabled = true;
 
-	mutex_unlock(&cpufreq_governor_lock);
-
 	ret = policy->governor->governor(policy, event);
 
 	if (!ret) {
@@ -2038,12 +2032,10 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
 			policy->governor->initialized--;
 	} else {
 		/* Restore original values */
-		mutex_lock(&cpufreq_governor_lock);
 		if (event == CPUFREQ_GOV_STOP)
 			policy->governor_enabled = true;
 		else if (event == CPUFREQ_GOV_START)
 			policy->governor_enabled = false;
-		mutex_unlock(&cpufreq_governor_lock);
 	}
 
 	if (((event == CPUFREQ_GOV_POLICY_INIT) && ret) ||
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 4e77efb7db67..02885e353dfc 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -232,7 +232,6 @@ static inline int delay_for_sampling_rate(unsigned int sampling_rate)
 }
 
 extern struct mutex dbs_data_mutex;
-extern struct mutex cpufreq_governor_lock;
 void dbs_check_cpu(struct cpufreq_policy *policy);
 int cpufreq_governor_dbs(struct cpufreq_policy *policy, unsigned int event);
 void od_register_powersave_bias_handler(unsigned int (*f)
-- 
2.7.1.370.gb2aa7f8


* [PATCH V4 4/7] cpufreq: governor: Move common sysfs tunables to cpufreq_governor.c
From: Viresh Kumar @ 2016-02-09  3:46 UTC (permalink / raw)
  To: Rafael Wysocki, juri.lelli
  Cc: linaro-kernel, linux-pm, shilpa.bhat, linux-kernel, Viresh Kumar

The ondemand and conservative governors share five common sysfs
tunables; move their callbacks to cpufreq_governor.c to get rid of the
redundant code.

Because of minor differences in the implementations of those callbacks,
a few more per-governor callbacks are introduced, in order to avoid
adding any more "governor == ONDEMAND/CONSERVATIVE"-like checks.
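The resulting pattern can be sketched stand-alone as follows (names and
bounds here are made up for illustration, not the kernel's interfaces):
the common store helper owns parsing and assignment, while each
governor supplies only a small validation callback, so the helper never
needs to know which governor it is serving.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch of the callback-splitting pattern; all names
 * and bounds are hypothetical. */
struct gov_ops {
	bool (*invalid_up_threshold)(unsigned int t);
};

static bool od_style_invalid(unsigned int t)
{
	return t > 99 || t < 1;		/* made-up ondemand-style bounds */
}

/* Common helper: validates via the governor's callback, then assigns. */
static int store_up_threshold(const struct gov_ops *ops,
			      unsigned int *dest, unsigned int input)
{
	if (ops->invalid_up_threshold && ops->invalid_up_threshold(input))
		return -1;		/* models -EINVAL */
	*dest = input;
	return 0;
}
```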

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
 drivers/cpufreq/cpufreq_conservative.c |  80 +++++---------------------
 drivers/cpufreq/cpufreq_governor.c     | 100 ++++++++++++++++++++++++++++++++
 drivers/cpufreq/cpufreq_governor.h     |  16 +++++-
 drivers/cpufreq/cpufreq_ondemand.c     | 102 ++++++++-------------------------
 4 files changed, 151 insertions(+), 147 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index ed081dbce00c..5c54041015d4 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -122,47 +122,17 @@ static struct notifier_block cs_cpufreq_notifier_block = {
 /************************** sysfs interface ************************/
 static struct dbs_governor cs_dbs_gov;
 
-static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
-		const char *buf, size_t count)
+static bool invalid_up_threshold(struct dbs_data *dbs_data,
+				 unsigned int threshold)
 {
-	unsigned int input;
-	int ret;
-	ret = sscanf(buf, "%u", &input);
-
-	if (ret != 1 || input > MAX_SAMPLING_DOWN_FACTOR || input < 1)
-		return -EINVAL;
+	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 
-	dbs_data->sampling_down_factor = input;
-	return count;
+	return threshold > 100 || threshold <= cs_tuners->down_threshold;
 }
 
-static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
-		size_t count)
+static bool invalid_sampling_down_factor(unsigned int factor)
 {
-	unsigned int input;
-	int ret;
-	ret = sscanf(buf, "%u", &input);
-
-	if (ret != 1)
-		return -EINVAL;
-
-	dbs_data->sampling_rate = max(input, dbs_data->min_sampling_rate);
-	return count;
-}
-
-static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
-		size_t count)
-{
-	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
-	unsigned int input;
-	int ret;
-	ret = sscanf(buf, "%u", &input);
-
-	if (ret != 1 || input > 100 || input <= cs_tuners->down_threshold)
-		return -EINVAL;
-
-	dbs_data->up_threshold = input;
-	return count;
+	return factor > MAX_SAMPLING_DOWN_FACTOR;
 }
 
 static ssize_t store_down_threshold(struct dbs_data *dbs_data, const char *buf,
@@ -182,27 +152,13 @@ static ssize_t store_down_threshold(struct dbs_data *dbs_data, const char *buf,
 	return count;
 }
 
-static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
-		const char *buf, size_t count)
+static void update_ignore_nice_load(struct dbs_data *dbs_data)
 {
-	unsigned int input, j;
-	int ret;
-
-	ret = sscanf(buf, "%u", &input);
-	if (ret != 1)
-		return -EINVAL;
-
-	if (input > 1)
-		input = 1;
-
-	if (input == dbs_data->ignore_nice_load) /* nothing to do */
-		return count;
-
-	dbs_data->ignore_nice_load = input;
+	struct cs_cpu_dbs_info_s *dbs_info;
+	unsigned int j;
 
 	/* we need to re-evaluate prev_cpu_idle */
 	for_each_online_cpu(j) {
-		struct cs_cpu_dbs_info_s *dbs_info;
 		dbs_info = &per_cpu(cs_cpu_dbs_info, j);
 		dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j,
 					&dbs_info->cdbs.prev_cpu_wall, 0);
@@ -210,7 +166,6 @@ static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
 			dbs_info->cdbs.prev_cpu_nice =
 				kcpustat_cpu(j).cpustat[CPUTIME_NICE];
 	}
-	return count;
 }
 
 static ssize_t store_freq_step(struct dbs_data *dbs_data, const char *buf,
@@ -235,21 +190,11 @@ static ssize_t store_freq_step(struct dbs_data *dbs_data, const char *buf,
 	return count;
 }
 
-gov_show_one_common(sampling_rate);
-gov_show_one_common(sampling_down_factor);
-gov_show_one_common(up_threshold);
-gov_show_one_common(ignore_nice_load);
-gov_show_one_common(min_sampling_rate);
 gov_show_one(cs, down_threshold);
 gov_show_one(cs, freq_step);
 
-gov_attr_rw(sampling_rate);
-gov_attr_rw(sampling_down_factor);
-gov_attr_rw(up_threshold);
-gov_attr_rw(ignore_nice_load);
-gov_attr_ro(min_sampling_rate);
-gov_attr_rw(down_threshold);
-gov_attr_rw(freq_step);
+static gov_attr_rw(down_threshold);
+static gov_attr_rw(freq_step);
 
 static struct attribute *cs_attributes[] = {
 	&min_sampling_rate.attr,
@@ -315,6 +260,9 @@ static struct dbs_governor cs_dbs_gov = {
 	.get_cpu_dbs_info_s = get_cpu_dbs_info_s,
 	.gov_dbs_timer = cs_dbs_timer,
 	.gov_check_cpu = cs_check_cpu,
+	.invalid_up_threshold = invalid_up_threshold,
+	.invalid_sampling_down_factor = invalid_sampling_down_factor,
+	.update_ignore_nice_load = update_ignore_nice_load,
 	.init = cs_init,
 	.exit = cs_exit,
 };
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 8e53f804a5af..7038ada3915d 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -25,6 +25,105 @@
 DEFINE_MUTEX(dbs_data_mutex);
 EXPORT_SYMBOL_GPL(dbs_data_mutex);
 
+/* Common sysfs tunables */
+static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
+				   size_t count)
+{
+	struct dbs_governor *gov = dbs_data->gov;
+	unsigned int rate;
+	int ret;
+	ret = sscanf(buf, "%u", &rate);
+	if (ret != 1)
+		return -EINVAL;
+
+	dbs_data->sampling_rate = max(rate, dbs_data->min_sampling_rate);
+
+	if (gov->update_sampling_rate)
+		gov->update_sampling_rate(dbs_data);
+
+	return count;
+}
+
+static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
+				  size_t count)
+{
+	struct dbs_governor *gov = dbs_data->gov;
+	unsigned int input;
+	int ret;
+	ret = sscanf(buf, "%u", &input);
+
+	if (ret != 1 || gov->invalid_up_threshold(dbs_data, input))
+		return -EINVAL;
+
+	dbs_data->up_threshold = input;
+	return count;
+}
+
+static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
+					  const char *buf, size_t count)
+{
+	struct dbs_governor *gov = dbs_data->gov;
+	unsigned int input;
+	int ret;
+	ret = sscanf(buf, "%u", &input);
+
+	if (ret != 1 || gov->invalid_sampling_down_factor(input) || input < 1)
+		return -EINVAL;
+
+	dbs_data->sampling_down_factor = input;
+
+	if (gov->update_sampling_down_factor)
+		gov->update_sampling_down_factor(dbs_data);
+
+	return count;
+}
+
+static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
+				      const char *buf, size_t count)
+{
+	struct dbs_governor *gov = dbs_data->gov;
+	unsigned int input;
+	int ret;
+
+	ret = sscanf(buf, "%u", &input);
+	if (ret != 1)
+		return -EINVAL;
+
+	if (input > 1)
+		input = 1;
+
+	if (input == dbs_data->ignore_nice_load) { /* nothing to do */
+		return count;
+	}
+
+	dbs_data->ignore_nice_load = input;
+
+	gov->update_ignore_nice_load(dbs_data);
+	return count;
+}
+
+gov_show_one_common(sampling_rate);
+gov_show_one_common(up_threshold);
+gov_show_one_common(sampling_down_factor);
+gov_show_one_common(ignore_nice_load);
+gov_show_one_common(min_sampling_rate);
+
+gov_attr_rw(sampling_rate);
+EXPORT_SYMBOL_GPL(sampling_rate);
+
+gov_attr_rw(up_threshold);
+EXPORT_SYMBOL_GPL(up_threshold);
+
+gov_attr_rw(sampling_down_factor);
+EXPORT_SYMBOL_GPL(sampling_down_factor);
+
+gov_attr_rw(ignore_nice_load);
+EXPORT_SYMBOL_GPL(ignore_nice_load);
+
+gov_attr_ro(min_sampling_rate);
+EXPORT_SYMBOL_GPL(min_sampling_rate);
+
+/* Governor routines */
 static inline struct dbs_data *to_dbs_data(struct kobject *kobj)
 {
 	return container_of(kobj, struct dbs_data, kobj);
@@ -401,6 +500,7 @@ static int cpufreq_governor_init(struct cpufreq_policy *policy)
 		goto free_policy_dbs_info;
 	}
 
+	dbs_data->gov = gov;
 	INIT_LIST_HEAD(&dbs_data->policy_dbs_list);
 	mutex_init(&dbs_data->mutex);
 
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 02885e353dfc..8aff218f73a4 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -64,6 +64,7 @@ static void *get_cpu_dbs_info_s(int cpu)				\
 
 /* Governor demand based switching data (per-policy or global). */
 struct dbs_data {
+	struct dbs_governor *gov;
 	int usage_count;
 	void *tuners;
 	unsigned int min_sampling_rate;
@@ -106,13 +107,19 @@ static ssize_t show_##file_name						\
 }
 
 #define gov_attr_ro(_name)						\
-static struct governor_attr _name =					\
+struct governor_attr _name =						\
 __ATTR(_name, 0444, show_##_name, NULL)
 
 #define gov_attr_rw(_name)						\
-static struct governor_attr _name =					\
+struct governor_attr _name =						\
 __ATTR(_name, 0644, show_##_name, store_##_name)
 
+extern struct governor_attr sampling_rate;
+extern struct governor_attr up_threshold;
+extern struct governor_attr sampling_down_factor;
+extern struct governor_attr ignore_nice_load;
+extern struct governor_attr min_sampling_rate;
+
 /* Common to all CPUs of a policy */
 struct policy_dbs_info {
 	struct cpufreq_policy *policy;
@@ -202,6 +209,11 @@ struct dbs_governor {
 	void (*gov_check_cpu)(int cpu, unsigned int load);
 	int (*init)(struct dbs_data *dbs_data, bool notify);
 	void (*exit)(struct dbs_data *dbs_data, bool notify);
+	bool (*invalid_up_threshold)(struct dbs_data *dbs_data, unsigned int threshold);
+	bool (*invalid_sampling_down_factor)(unsigned int factor);
+	void (*update_sampling_rate)(struct dbs_data *dbs_data);
+	void (*update_sampling_down_factor)(struct dbs_data *dbs_data);
+	void (*update_ignore_nice_load)(struct dbs_data *dbs_data);
 
 	/* Governor specific ops, see below */
 	void *gov_ops;
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 38301c6b31c7..caef7c9f631d 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -223,7 +223,6 @@ static struct dbs_governor od_dbs_gov;
 
 /**
  * update_sampling_rate - update sampling rate effective immediately if needed.
- * @new_rate: new sampling rate
  *
  * If new rate is smaller than the old, simply updating
  * dbs.sampling_rate might not be appropriate. For example, if the
@@ -241,14 +240,10 @@ static struct dbs_governor od_dbs_gov;
  * This must be called with dbs_data->mutex held, otherwise traversing
  * policy_dbs_list isn't safe.
  */
-static void update_sampling_rate(struct dbs_data *dbs_data,
-		unsigned int new_rate)
+static void update_sampling_rate(struct dbs_data *dbs_data)
 {
 	struct policy_dbs_info *policy_dbs;
 
-	dbs_data->sampling_rate = new_rate = max(new_rate,
-			dbs_data->min_sampling_rate);
-
 	/*
 	 * We are operating under dbs_data->mutex and so the list and its
 	 * entries can't be freed concurrently.
@@ -272,22 +267,21 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 		 * be corrected next time a sample is taken, so it shouldn't be
 		 * significant.
 		 */
-		gov_update_sample_delay(policy_dbs, new_rate);
+		gov_update_sample_delay(policy_dbs, dbs_data->sampling_rate);
 		mutex_unlock(&policy_dbs->timer_mutex);
 	}
 }
 
-static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
-		size_t count)
+static bool invalid_up_threshold(struct dbs_data *dbs_data,
+				 unsigned int threshold)
 {
-	unsigned int input;
-	int ret;
-	ret = sscanf(buf, "%u", &input);
-	if (ret != 1)
-		return -EINVAL;
+	return threshold > MAX_FREQUENCY_UP_THRESHOLD ||
+	       threshold < MIN_FREQUENCY_UP_THRESHOLD;
+}
 
-	update_sampling_rate(dbs_data, input);
-	return count;
+static bool invalid_sampling_down_factor(unsigned int factor)
+{
+	return factor > MAX_SAMPLING_DOWN_FACTOR;
 }
 
 static ssize_t store_io_is_busy(struct dbs_data *dbs_data, const char *buf,
@@ -313,66 +307,22 @@ static ssize_t store_io_is_busy(struct dbs_data *dbs_data, const char *buf,
 	return count;
 }
 
-static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
-		size_t count)
-{
-	unsigned int input;
-	int ret;
-	ret = sscanf(buf, "%u", &input);
-
-	if (ret != 1 || input > MAX_FREQUENCY_UP_THRESHOLD ||
-			input < MIN_FREQUENCY_UP_THRESHOLD) {
-		return -EINVAL;
-	}
-
-	dbs_data->up_threshold = input;
-	return count;
-}
-
-static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
-		const char *buf, size_t count)
+static void update_sampling_down_factor(struct dbs_data *dbs_data)
 {
-	unsigned int input, j;
-	int ret;
-	ret = sscanf(buf, "%u", &input);
-
-	if (ret != 1 || input > MAX_SAMPLING_DOWN_FACTOR || input < 1)
-		return -EINVAL;
-	dbs_data->sampling_down_factor = input;
+	unsigned int j;
 
 	/* Reset down sampling multiplier in case it was active */
-	for_each_online_cpu(j) {
-		struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info,
-				j);
-		dbs_info->rate_mult = 1;
-	}
-	return count;
+	for_each_online_cpu(j)
+		per_cpu(od_cpu_dbs_info, j).rate_mult = 1;
 }
 
-static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
-		const char *buf, size_t count)
+static void update_ignore_nice_load(struct dbs_data *dbs_data)
 {
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
-	unsigned int input;
-	int ret;
-
+	struct od_cpu_dbs_info_s *dbs_info;
 	unsigned int j;
 
-	ret = sscanf(buf, "%u", &input);
-	if (ret != 1)
-		return -EINVAL;
-
-	if (input > 1)
-		input = 1;
-
-	if (input == dbs_data->ignore_nice_load) { /* nothing to do */
-		return count;
-	}
-	dbs_data->ignore_nice_load = input;
-
-	/* we need to re-evaluate prev_cpu_idle */
 	for_each_online_cpu(j) {
-		struct od_cpu_dbs_info_s *dbs_info;
 		dbs_info = &per_cpu(od_cpu_dbs_info, j);
 		dbs_info->cdbs.prev_cpu_idle = get_cpu_idle_time(j,
 			&dbs_info->cdbs.prev_cpu_wall, od_tuners->io_is_busy);
@@ -381,7 +331,6 @@ static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
 				kcpustat_cpu(j).cpustat[CPUTIME_NICE];
 
 	}
-	return count;
 }
 
 static ssize_t store_powersave_bias(struct dbs_data *dbs_data, const char *buf,
@@ -403,21 +352,11 @@ static ssize_t store_powersave_bias(struct dbs_data *dbs_data, const char *buf,
 	return count;
 }
 
-gov_show_one_common(sampling_rate);
-gov_show_one_common(up_threshold);
-gov_show_one_common(sampling_down_factor);
-gov_show_one_common(ignore_nice_load);
-gov_show_one_common(min_sampling_rate);
 gov_show_one(od, io_is_busy);
 gov_show_one(od, powersave_bias);
 
-gov_attr_rw(sampling_rate);
-gov_attr_rw(io_is_busy);
-gov_attr_rw(up_threshold);
-gov_attr_rw(sampling_down_factor);
-gov_attr_rw(ignore_nice_load);
-gov_attr_rw(powersave_bias);
-gov_attr_ro(min_sampling_rate);
+static gov_attr_rw(io_is_busy);
+static gov_attr_rw(powersave_bias);
 
 static struct attribute *od_attributes[] = {
 	&min_sampling_rate.attr,
@@ -499,6 +438,11 @@ static struct dbs_governor od_dbs_gov = {
 	.get_cpu_dbs_info_s = get_cpu_dbs_info_s,
 	.gov_dbs_timer = od_dbs_timer,
 	.gov_check_cpu = od_check_cpu,
+	.update_sampling_rate = update_sampling_rate,
+	.invalid_up_threshold = invalid_up_threshold,
+	.invalid_sampling_down_factor = invalid_sampling_down_factor,
+	.update_sampling_down_factor = update_sampling_down_factor,
+	.update_ignore_nice_load = update_ignore_nice_load,
 	.gov_ops = &od_ops,
 	.init = od_init,
 	.exit = od_exit,
-- 
2.7.1.370.gb2aa7f8


* [PATCH V4 5/7] cpufreq: governor: No need to manage state machine now
From: Viresh Kumar @ 2016-02-09  3:46 UTC (permalink / raw)
  To: Rafael Wysocki, juri.lelli
  Cc: linaro-kernel, linux-pm, shilpa.bhat, linux-kernel, Viresh Kumar

cpufreq core now guarantees that policy->rwsem wouldn't get dropped
while calling CPUFREQ_GOV_POLICY_EXIT governor event and will be kept
acquired until the complete sequence of governor state changes has
finished.

And so we can remove the state machine checks that were put in place
earlier.

This also means that policy_dbs->policy can be initialized while
policy_dbs is allocated, to move all initialization together.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
 drivers/cpufreq/cpufreq_governor.c | 27 +++++----------------------
 1 file changed, 5 insertions(+), 22 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 7038ada3915d..464f346815e0 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -332,8 +332,10 @@ static inline void gov_clear_update_util(struct cpufreq_policy *policy)
 	synchronize_rcu();
 }
 
-static void gov_cancel_work(struct policy_dbs_info *policy_dbs)
+static void gov_cancel_work(struct cpufreq_policy *policy)
 {
+	struct policy_dbs_info *policy_dbs = policy->governor_data;
+
 	/* Tell dbs_update_util_handler() to skip queuing up work items. */
 	atomic_inc(&policy_dbs->skip_work);
 	/*
@@ -429,6 +431,7 @@ static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *poli
 	if (!policy_dbs)
 		return NULL;
 
+	policy_dbs->policy = policy;
 	mutex_init(&policy_dbs->timer_mutex);
 	atomic_set(&policy_dbs->skip_work, 0);
 	init_irq_work(&policy_dbs->irq_work, dbs_irq_work);
@@ -560,10 +563,6 @@ static int cpufreq_governor_exit(struct cpufreq_policy *policy)
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	int count;
 
-	/* State should be equivalent to INIT */
-	if (policy_dbs->policy)
-		return -EBUSY;
-
 	mutex_lock(&dbs_data->mutex);
 	list_del(&policy_dbs->list);
 	count = dbs_data->usage_count--;
@@ -599,10 +598,6 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
 	if (!policy->cur)
 		return -EINVAL;
 
-	/* State should be equivalent to INIT */
-	if (policy_dbs->policy)
-		return -EBUSY;
-
 	sampling_rate = dbs_data->sampling_rate;
 	ignore_nice = dbs_data->ignore_nice_load;
 
@@ -627,7 +622,6 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
 		if (ignore_nice)
 			j_cdbs->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
 	}
-	policy_dbs->policy = policy;
 
 	if (gov->governor == GOV_CONSERVATIVE) {
 		struct cs_cpu_dbs_info_s *cs_dbs_info =
@@ -650,14 +644,7 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
 
 static int cpufreq_governor_stop(struct cpufreq_policy *policy)
 {
-	struct policy_dbs_info *policy_dbs = policy->governor_data;
-
-	/* State should be equivalent to START */
-	if (!policy_dbs->policy)
-		return -EBUSY;
-
-	gov_cancel_work(policy_dbs);
-	policy_dbs->policy = NULL;
+	gov_cancel_work(policy);
 
 	return 0;
 }
@@ -666,10 +653,6 @@ static int cpufreq_governor_limits(struct cpufreq_policy *policy)
 {
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 
-	/* State should be equivalent to START */
-	if (!policy_dbs->policy)
-		return -EBUSY;
-
 	mutex_lock(&policy_dbs->timer_mutex);
 	if (policy->max < policy->cur)
 		__cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
-- 
2.7.1.370.gb2aa7f8

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V4 6/7] cpufreq: conservative: Update sample_delay_ns immediately
  2016-02-09  3:46 [PATCH V4 0/7] cpufreq: Locking fixes and cleanups Viresh Kumar
                   ` (4 preceding siblings ...)
  2016-02-09  3:46 ` [PATCH V4 5/7] " Viresh Kumar
@ 2016-02-09  3:46 ` Viresh Kumar
  2016-02-09  3:46 ` [PATCH V4 7/7] cpufreq: ondemand: Rearrange od_dbs_timer() to void updating delay Viresh Kumar
  6 siblings, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2016-02-09  3:46 UTC (permalink / raw)
  To: Rafael Wysocki, juri.lelli
  Cc: linaro-kernel, linux-pm, shilpa.bhat, linux-kernel, Viresh Kumar

Ondemand governor already updates sample_delay_ns immediately when the
sampling rate changes, but conservative isn't doing that.

It was left out earlier because the code was too complex to do it easily.
Now that things are sorted out, the conservative governor can follow the
same approach.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
 drivers/cpufreq/cpufreq_governor.c | 49 ++++++++++++++++++++++++++++++++---
 drivers/cpufreq/cpufreq_governor.h |  1 -
 drivers/cpufreq/cpufreq_ondemand.c | 52 --------------------------------------
 3 files changed, 46 insertions(+), 56 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 464f346815e0..c0f9151e1045 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -26,10 +26,29 @@ DEFINE_MUTEX(dbs_data_mutex);
 EXPORT_SYMBOL_GPL(dbs_data_mutex);
 
 /* Common sysfs tunables */
+/**
+ * update_sampling_rate - update sampling rate effective immediately if needed.
+ *
+ * If new rate is smaller than the old, simply updating
+ * dbs.sampling_rate might not be appropriate. For example, if the
+ * original sampling_rate was 1 second and the requested new sampling rate is 10
+ * ms because the user needs immediate reaction from ondemand governor, but not
+ * sure if higher frequency will be required or not, then, the governor may
+ * change the sampling rate too late; up to 1 second later. Thus, if we are
+ * reducing the sampling rate, we need to make the new value effective
+ * immediately.
+ *
+ * On the other hand, if new rate is larger than the old, then we may evaluate
+ * the load too soon, and it might be worth updating sample_delay_ns then as
+ * well.
+ *
+ * This must be called with dbs_data->mutex held, otherwise traversing
+ * policy_dbs_list isn't safe.
+ */
 static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
 				   size_t count)
 {
-	struct dbs_governor *gov = dbs_data->gov;
+	struct policy_dbs_info *policy_dbs;
 	unsigned int rate;
 	int ret;
 	ret = sscanf(buf, "%u", &rate);
@@ -38,8 +57,32 @@ static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
 
 	dbs_data->sampling_rate = max(rate, dbs_data->min_sampling_rate);
 
-	if (gov->update_sampling_rate)
-		gov->update_sampling_rate(dbs_data);
+	/*
+	 * We are operating under dbs_data->mutex and so the list and its
+	 * entries can't be freed concurrently.
+	 */
+	list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) {
+		mutex_lock(&policy_dbs->timer_mutex);
+		/*
+		 * On 32-bit architectures this may race with the
+		 * sample_delay_ns read in dbs_update_util_handler(), but that
+		 * really doesn't matter.  If the read returns a value that's
+		 * too big, the sample will be skipped, but the next invocation
+		 * of dbs_update_util_handler() (when the update has been
+		 * completed) will take a sample.  If the returned value is too
+		 * small, the sample will be taken immediately, but that isn't a
+		 * problem, as we want the new rate to take effect immediately
+		 * anyway.
+		 *
+		 * If this runs in parallel with dbs_work_handler(), we may end
+		 * up overwriting the sample_delay_ns value that it has just
+		 * written, but the difference should not be too big and it will
+		 * be corrected next time a sample is taken, so it shouldn't be
+		 * significant.
+		 */
+		gov_update_sample_delay(policy_dbs, dbs_data->sampling_rate);
+		mutex_unlock(&policy_dbs->timer_mutex);
+	}
 
 	return count;
 }
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 8aff218f73a4..9ab1a05712d8 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -211,7 +211,6 @@ struct dbs_governor {
 	void (*exit)(struct dbs_data *dbs_data, bool notify);
 	bool (*invalid_up_threshold)(struct dbs_data *dbs_data, unsigned int threshold);
 	bool (*invalid_sampling_down_factor)(unsigned int factor);
-	void (*update_sampling_rate)(struct dbs_data *dbs_data);
 	void (*update_sampling_down_factor)(struct dbs_data *dbs_data);
 	void (*update_ignore_nice_load)(struct dbs_data *dbs_data);
 
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index caef7c9f631d..d9f323f150c4 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -221,57 +221,6 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
 /************************** sysfs interface ************************/
 static struct dbs_governor od_dbs_gov;
 
-/**
- * update_sampling_rate - update sampling rate effective immediately if needed.
- *
- * If new rate is smaller than the old, simply updating
- * dbs.sampling_rate might not be appropriate. For example, if the
- * original sampling_rate was 1 second and the requested new sampling rate is 10
- * ms because the user needs immediate reaction from ondemand governor, but not
- * sure if higher frequency will be required or not, then, the governor may
- * change the sampling rate too late; up to 1 second later. Thus, if we are
- * reducing the sampling rate, we need to make the new value effective
- * immediately.
- *
- * On the other hand, if new rate is larger than the old, then we may evaluate
- * the load too soon, and it might we worth updating sample_delay_ns then as
- * well.
- *
- * This must be called with dbs_data->mutex held, otherwise traversing
- * policy_dbs_list isn't safe.
- */
-static void update_sampling_rate(struct dbs_data *dbs_data)
-{
-	struct policy_dbs_info *policy_dbs;
-
-	/*
-	 * We are operating under dbs_data->mutex and so the list and its
-	 * entries can't be freed concurrently.
-	 */
-	list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) {
-		mutex_lock(&policy_dbs->timer_mutex);
-		/*
-		 * On 32-bit architectures this may race with the
-		 * sample_delay_ns read in dbs_update_util_handler(), but that
-		 * really doesn't matter.  If the read returns a value that's
-		 * too big, the sample will be skipped, but the next invocation
-		 * of dbs_update_util_handler() (when the update has been
-		 * completed) will take a sample.  If the returned value is too
-		 * small, the sample will be taken immediately, but that isn't a
-		 * problem, as we want the new rate to take effect immediately
-		 * anyway.
-		 *
-		 * If this runs in parallel with dbs_work_handler(), we may end
-		 * up overwriting the sample_delay_ns value that it has just
-		 * written, but the difference should not be too big and it will
-		 * be corrected next time a sample is taken, so it shouldn't be
-		 * significant.
-		 */
-		gov_update_sample_delay(policy_dbs, dbs_data->sampling_rate);
-		mutex_unlock(&policy_dbs->timer_mutex);
-	}
-}
-
 static bool invalid_up_threshold(struct dbs_data *dbs_data,
 				 unsigned int threshold)
 {
@@ -438,7 +387,6 @@ static struct dbs_governor od_dbs_gov = {
 	.get_cpu_dbs_info_s = get_cpu_dbs_info_s,
 	.gov_dbs_timer = od_dbs_timer,
 	.gov_check_cpu = od_check_cpu,
-	.update_sampling_rate = update_sampling_rate,
 	.invalid_up_threshold = invalid_up_threshold,
 	.invalid_sampling_down_factor = invalid_sampling_down_factor,
 	.update_sampling_down_factor = update_sampling_down_factor,
-- 
2.7.1.370.gb2aa7f8

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V4 7/7] cpufreq: ondemand: Rearrange od_dbs_timer() to void updating delay
  2016-02-09  3:46 [PATCH V4 0/7] cpufreq: Locking fixes and cleanups Viresh Kumar
                   ` (5 preceding siblings ...)
  2016-02-09  3:46 ` [PATCH V4 6/7] cpufreq: conservative: Update sample_delay_ns immediately Viresh Kumar
@ 2016-02-09  3:46 ` Viresh Kumar
  2016-02-10  0:28   ` Rafael J. Wysocki
  6 siblings, 1 reply; 22+ messages in thread
From: Viresh Kumar @ 2016-02-09  3:46 UTC (permalink / raw)
  To: Rafael Wysocki, juri.lelli
  Cc: linaro-kernel, linux-pm, shilpa.bhat, linux-kernel, Viresh Kumar

'delay' is assigned properly on all paths through od_dbs_timer() except
one, and it can be 0 only in that case.

Move that last assignment of 'delay' into an else branch of the if block.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_ondemand.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index d9f323f150c4..388ae07ce413 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -194,7 +194,7 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, policy->cpu);
-	int delay = 0, sample_type = dbs_info->sample_type;
+	int delay, sample_type = dbs_info->sample_type;
 
 	/* Common NORMAL_SAMPLE setup */
 	dbs_info->sample_type = OD_NORMAL_SAMPLE;
@@ -208,13 +208,12 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
 			/* Setup timer for SUB_SAMPLE */
 			dbs_info->sample_type = OD_SUB_SAMPLE;
 			delay = dbs_info->freq_hi_jiffies;
+		} else {
+			delay = delay_for_sampling_rate(dbs_data->sampling_rate
+							* dbs_info->rate_mult);
 		}
 	}
 
-	if (!delay)
-		delay = delay_for_sampling_rate(dbs_data->sampling_rate
-				* dbs_info->rate_mult);
-
 	return delay;
 }
 
-- 
2.7.1.370.gb2aa7f8

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 4/7] cpufreq: governor: Move common sysfs tunables to cpufreq_governor.c
  2016-02-09  3:46 ` [PATCH V4 4/7] cpufreq: governor: Move common sysfs tunables to cpufreq_governor.c Viresh Kumar
@ 2016-02-10  0:26   ` Rafael J. Wysocki
  2016-02-10  7:00     ` [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now Viresh Kumar
  0 siblings, 1 reply; 22+ messages in thread
From: Rafael J. Wysocki @ 2016-02-10  0:26 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

On Tue, Feb 9, 2016 at 4:46 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> We have got five common sysfs tunables between ondemand and conservative
> governors, move their callbacks to cpufreq_governor.c to get rid of
> redundant code.
>
> Because of minor differences in the implementation of the callbacks,
> some more per-governor callbacks are introduced in order to not
> introduce any more "governor == ONDEMAND/CONSERVATIVE" like checks.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> Tested-by: Juri Lelli <juri.lelli@arm.com>
> Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>

To me, the benefit from this patch is marginal and the cost is quite
substantial.

The code is really only duplicated if both governors are non-modular, or
their modules are both loaded at the same time, and it's not worth
adding the new governor callbacks IMO.

If the implementation of the given show/store pair is different enough
that you need an extra callback to move them to _governor.c, I won't
bother doing that.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 7/7] cpufreq: ondemand: Rearrange od_dbs_timer() to void updating delay
  2016-02-09  3:46 ` [PATCH V4 7/7] cpufreq: ondemand: Rearrange od_dbs_timer() to void updating delay Viresh Kumar
@ 2016-02-10  0:28   ` Rafael J. Wysocki
  0 siblings, 0 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2016-02-10  0:28 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

s/void/avoid/ in the subject.

Apart from that it looks good.

On Tue, Feb 9, 2016 at 4:46 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> 'delay' is assigned properly on all paths through od_dbs_timer() except
> one, and it can be 0 only in that case.
>
> Move that last assignment of 'delay' into an else branch of the if block.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
>  drivers/cpufreq/cpufreq_ondemand.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
> index d9f323f150c4..388ae07ce413 100644
> --- a/drivers/cpufreq/cpufreq_ondemand.c
> +++ b/drivers/cpufreq/cpufreq_ondemand.c
> @@ -194,7 +194,7 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
>         struct policy_dbs_info *policy_dbs = policy->governor_data;
>         struct dbs_data *dbs_data = policy_dbs->dbs_data;
>         struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, policy->cpu);
> -       int delay = 0, sample_type = dbs_info->sample_type;
> +       int delay, sample_type = dbs_info->sample_type;
>
>         /* Common NORMAL_SAMPLE setup */
>         dbs_info->sample_type = OD_NORMAL_SAMPLE;
> @@ -208,13 +208,12 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
>                         /* Setup timer for SUB_SAMPLE */
>                         dbs_info->sample_type = OD_SUB_SAMPLE;
>                         delay = dbs_info->freq_hi_jiffies;
> +               } else {
> +                       delay = delay_for_sampling_rate(dbs_data->sampling_rate
> +                                                       * dbs_info->rate_mult);
>                 }
>         }
>
> -       if (!delay)
> -               delay = delay_for_sampling_rate(dbs_data->sampling_rate
> -                               * dbs_info->rate_mult);
> -
>         return delay;
>  }
>
> --
> 2.7.1.370.gb2aa7f8
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 5/7] cpufreq: governor: No need to manage state machine now
  2016-02-09  3:46 ` [PATCH V4 5/7] " Viresh Kumar
@ 2016-02-10  0:36   ` Rafael J. Wysocki
  2016-02-10  5:36     ` Viresh Kumar
  0 siblings, 1 reply; 22+ messages in thread
From: Rafael J. Wysocki @ 2016-02-10  0:36 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

On Tue, Feb 9, 2016 at 4:46 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> cpufreq core now guarantees that policy->rwsem wouldn't get dropped

"The cpufreq core ..." and "won't be dropped"

> while calling CPUFREQ_GOV_POLICY_EXIT governor event and will be kept

"while running the ->governor callback for the CPUFREQ_GOV_POLICY_EXIT
event and will be held"

> acquired until the complete sequence of governor state changes has
> finished.
>
> And so we can remove the state machine checks that were put in place
> earlier.

"This allows governor state machine checks to be dropped from multiple
functions in cpufreq_governor.c."

>
> This also means that policy_dbs->policy can be initialized while

"initialized upfront"

> policy_dbs is allocated, to move all initialization together.

"so the entire initialization of struct policy_dbs is carried out in one place."

>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> Tested-by: Juri Lelli <juri.lelli@arm.com>
> Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
> ---
>  drivers/cpufreq/cpufreq_governor.c | 27 +++++----------------------
>  1 file changed, 5 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
> index 7038ada3915d..464f346815e0 100644

[cut]

> @@ -650,14 +644,7 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
>
>  static int cpufreq_governor_stop(struct cpufreq_policy *policy)
>  {
> -       struct policy_dbs_info *policy_dbs = policy->governor_data;
> -
> -       /* State should be equivalent to START */
> -       if (!policy_dbs->policy)
> -               return -EBUSY;
> -
> -       gov_cancel_work(policy_dbs);
> -       policy_dbs->policy = NULL;
> +       gov_cancel_work(policy);
>
>         return 0;
>  }

So maybe we can call gov_cancel_work(policy) from
cpufreq_governor_dbs() directly and get rid of this wrapper too?

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 5/7] cpufreq: governor: No need to manage state machine now
  2016-02-10  0:36   ` Rafael J. Wysocki
@ 2016-02-10  5:36     ` Viresh Kumar
  0 siblings, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2016-02-10  5:36 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

On 10-02-16, 01:36, Rafael J. Wysocki wrote:
> >  static int cpufreq_governor_stop(struct cpufreq_policy *policy)
> >  {
> > -       struct policy_dbs_info *policy_dbs = policy->governor_data;
> > -
> > -       /* State should be equivalent to START */
> > -       if (!policy_dbs->policy)
> > -               return -EBUSY;
> > -
> > -       gov_cancel_work(policy_dbs);
> > -       policy_dbs->policy = NULL;
> > +       gov_cancel_work(policy);
> >
> >         return 0;
> >  }
> 
> So maybe we can call gov_cancel_work(policy) from
> cpufreq_governor_dbs() directly and get rid of this wrapper too?

I thought about it, but left it for consistency. It wouldn't hurt; I
believe the compiler will inline it anyway.

-- 
viresh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now
  2016-02-10  0:26   ` Rafael J. Wysocki
@ 2016-02-10  7:00     ` Viresh Kumar
  2016-02-10  7:00       ` [PATCH V5 2/3] cpufreq: conservative: Update sample_delay_ns immediately Viresh Kumar
                         ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Viresh Kumar @ 2016-02-10  7:00 UTC (permalink / raw)
  To: Rafael Wysocki
  Cc: linaro-kernel, linux-pm, Viresh Kumar, Juri Lelli,
	Shilpasri G Bhat, open list

The cpufreq core now guarantees that policy->rwsem won't be dropped
while running the ->governor callback for the CPUFREQ_GOV_POLICY_EXIT
event and will be held acquired until the complete sequence of governor
state changes has finished.

This allows governor state machine checks to be dropped from multiple
functions in cpufreq_governor.c.

This also means that policy_dbs->policy can be initialized upfront, so
the entire initialization of struct policy_dbs is carried out in one
place.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
Rafael,

The 4th patch is dropped and 5-7 are updated on top of that.

 drivers/cpufreq/cpufreq_governor.c | 27 +++++----------------------
 1 file changed, 5 insertions(+), 22 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 3a9dab752148..481585611097 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -233,8 +233,10 @@ static inline void gov_clear_update_util(struct cpufreq_policy *policy)
 	synchronize_rcu();
 }
 
-static void gov_cancel_work(struct policy_dbs_info *policy_dbs)
+static void gov_cancel_work(struct cpufreq_policy *policy)
 {
+	struct policy_dbs_info *policy_dbs = policy->governor_data;
+
 	/* Tell dbs_update_util_handler() to skip queuing up work items. */
 	atomic_inc(&policy_dbs->skip_work);
 	/*
@@ -330,6 +332,7 @@ static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *poli
 	if (!policy_dbs)
 		return NULL;
 
+	policy_dbs->policy = policy;
 	mutex_init(&policy_dbs->timer_mutex);
 	atomic_set(&policy_dbs->skip_work, 0);
 	init_irq_work(&policy_dbs->irq_work, dbs_irq_work);
@@ -460,10 +463,6 @@ static int cpufreq_governor_exit(struct cpufreq_policy *policy)
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	int count;
 
-	/* State should be equivalent to INIT */
-	if (policy_dbs->policy)
-		return -EBUSY;
-
 	mutex_lock(&dbs_data->mutex);
 	list_del(&policy_dbs->list);
 	count = --dbs_data->usage_count;
@@ -499,10 +498,6 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
 	if (!policy->cur)
 		return -EINVAL;
 
-	/* State should be equivalent to INIT */
-	if (policy_dbs->policy)
-		return -EBUSY;
-
 	sampling_rate = dbs_data->sampling_rate;
 	ignore_nice = dbs_data->ignore_nice_load;
 
@@ -527,7 +522,6 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
 		if (ignore_nice)
 			j_cdbs->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
 	}
-	policy_dbs->policy = policy;
 
 	if (gov->governor == GOV_CONSERVATIVE) {
 		struct cs_cpu_dbs_info_s *cs_dbs_info =
@@ -550,14 +544,7 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
 
 static int cpufreq_governor_stop(struct cpufreq_policy *policy)
 {
-	struct policy_dbs_info *policy_dbs = policy->governor_data;
-
-	/* State should be equivalent to START */
-	if (!policy_dbs->policy)
-		return -EBUSY;
-
-	gov_cancel_work(policy_dbs);
-	policy_dbs->policy = NULL;
+	gov_cancel_work(policy);
 
 	return 0;
 }
@@ -566,10 +553,6 @@ static int cpufreq_governor_limits(struct cpufreq_policy *policy)
 {
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 
-	/* State should be equivalent to START */
-	if (!policy_dbs->policy)
-		return -EBUSY;
-
 	mutex_lock(&policy_dbs->timer_mutex);
 	if (policy->max < policy->cur)
 		__cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
-- 
2.7.1.370.gb2aa7f8

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V5 2/3] cpufreq: conservative: Update sample_delay_ns immediately
  2016-02-10  7:00     ` [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now Viresh Kumar
@ 2016-02-10  7:00       ` Viresh Kumar
  2016-02-10  7:00       ` [PATCH V5 3/3] cpufreq: ondemand: Rearrange od_dbs_timer() to avoid updating delay Viresh Kumar
  2016-02-11  9:58       ` [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now Rafael J. Wysocki
  2 siblings, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2016-02-10  7:00 UTC (permalink / raw)
  To: Rafael Wysocki
  Cc: linaro-kernel, linux-pm, Viresh Kumar, Juri Lelli,
	Shilpasri G Bhat, open list

Ondemand governor already updates sample_delay_ns immediately when the
sampling rate changes, but conservative isn't doing that.

It was left out earlier because the code was too complex to do it easily.
Now that things are sorted out, the conservative governor can follow the
same approach.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Juri Lelli <juri.lelli@arm.com>
Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
---
 drivers/cpufreq/cpufreq_conservative.c | 14 -------
 drivers/cpufreq/cpufreq_governor.c     | 63 +++++++++++++++++++++++++++++++
 drivers/cpufreq/cpufreq_governor.h     |  2 +
 drivers/cpufreq/cpufreq_ondemand.c     | 69 ----------------------------------
 4 files changed, 65 insertions(+), 83 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index ed081dbce00c..6243502ce24d 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -136,20 +136,6 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
 	return count;
 }
 
-static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
-		size_t count)
-{
-	unsigned int input;
-	int ret;
-	ret = sscanf(buf, "%u", &input);
-
-	if (ret != 1)
-		return -EINVAL;
-
-	dbs_data->sampling_rate = max(input, dbs_data->min_sampling_rate);
-	return count;
-}
-
 static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
 		size_t count)
 {
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 481585611097..17c51bca2df1 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -25,6 +25,69 @@
 DEFINE_MUTEX(dbs_data_mutex);
 EXPORT_SYMBOL_GPL(dbs_data_mutex);
 
+/* Common sysfs tunables */
+/**
+ * store_sampling_rate - update sampling rate effective immediately if needed.
+ *
+ * If new rate is smaller than the old, simply updating
+ * dbs.sampling_rate might not be appropriate. For example, if the
+ * original sampling_rate was 1 second and the requested new sampling rate is 10
+ * ms because the user needs immediate reaction from ondemand governor, but not
+ * sure if higher frequency will be required or not, then, the governor may
+ * change the sampling rate too late; up to 1 second later. Thus, if we are
+ * reducing the sampling rate, we need to make the new value effective
+ * immediately.
+ *
+ * On the other hand, if new rate is larger than the old, then we may evaluate
+ * the load too soon, and it might be worth updating sample_delay_ns then as
+ * well.
+ *
+ * This must be called with dbs_data->mutex held, otherwise traversing
+ * policy_dbs_list isn't safe.
+ */
+ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
+			    size_t count)
+{
+	struct policy_dbs_info *policy_dbs;
+	unsigned int rate;
+	int ret;
+	ret = sscanf(buf, "%u", &rate);
+	if (ret != 1)
+		return -EINVAL;
+
+	dbs_data->sampling_rate = max(rate, dbs_data->min_sampling_rate);
+
+	/*
+	 * We are operating under dbs_data->mutex and so the list and its
+	 * entries can't be freed concurrently.
+	 */
+	list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) {
+		mutex_lock(&policy_dbs->timer_mutex);
+		/*
+		 * On 32-bit architectures this may race with the
+		 * sample_delay_ns read in dbs_update_util_handler(), but that
+		 * really doesn't matter.  If the read returns a value that's
+		 * too big, the sample will be skipped, but the next invocation
+		 * of dbs_update_util_handler() (when the update has been
+		 * completed) will take a sample.  If the returned value is too
+		 * small, the sample will be taken immediately, but that isn't a
+		 * problem, as we want the new rate to take effect immediately
+		 * anyway.
+		 *
+		 * If this runs in parallel with dbs_work_handler(), we may end
+		 * up overwriting the sample_delay_ns value that it has just
+		 * written, but the difference should not be too big and it will
+		 * be corrected next time a sample is taken, so it shouldn't be
+		 * significant.
+		 */
+		gov_update_sample_delay(policy_dbs, dbs_data->sampling_rate);
+		mutex_unlock(&policy_dbs->timer_mutex);
+	}
+
+	return count;
+}
+EXPORT_SYMBOL_GPL(store_sampling_rate);
+
 static inline struct dbs_data *to_dbs_data(struct kobject *kobj)
 {
 	return container_of(kobj, struct dbs_data, kobj);
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 02885e353dfc..2f5ca7393653 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -238,4 +238,6 @@ void od_register_powersave_bias_handler(unsigned int (*f)
 		(struct cpufreq_policy *, unsigned int, unsigned int),
 		unsigned int powersave_bias);
 void od_unregister_powersave_bias_handler(void);
+ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
+			    size_t count);
 #endif /* _CPUFREQ_GOVERNOR_H */
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 38301c6b31c7..12213823cc93 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -221,75 +221,6 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
 /************************** sysfs interface ************************/
 static struct dbs_governor od_dbs_gov;
 
-/**
- * update_sampling_rate - update sampling rate effective immediately if needed.
- * @new_rate: new sampling rate
- *
- * If new rate is smaller than the old, simply updating
- * dbs.sampling_rate might not be appropriate. For example, if the
- * original sampling_rate was 1 second and the requested new sampling rate is 10
- * ms because the user needs immediate reaction from ondemand governor, but not
- * sure if higher frequency will be required or not, then, the governor may
- * change the sampling rate too late; up to 1 second later. Thus, if we are
- * reducing the sampling rate, we need to make the new value effective
- * immediately.
- *
- * On the other hand, if new rate is larger than the old, then we may evaluate
- * the load too soon, and it might we worth updating sample_delay_ns then as
- * well.
- *
- * This must be called with dbs_data->mutex held, otherwise traversing
- * policy_dbs_list isn't safe.
- */
-static void update_sampling_rate(struct dbs_data *dbs_data,
-		unsigned int new_rate)
-{
-	struct policy_dbs_info *policy_dbs;
-
-	dbs_data->sampling_rate = new_rate = max(new_rate,
-			dbs_data->min_sampling_rate);
-
-	/*
-	 * We are operating under dbs_data->mutex and so the list and its
-	 * entries can't be freed concurrently.
-	 */
-	list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) {
-		mutex_lock(&policy_dbs->timer_mutex);
-		/*
-		 * On 32-bit architectures this may race with the
-		 * sample_delay_ns read in dbs_update_util_handler(), but that
-		 * really doesn't matter.  If the read returns a value that's
-		 * too big, the sample will be skipped, but the next invocation
-		 * of dbs_update_util_handler() (when the update has been
-		 * completed) will take a sample.  If the returned value is too
-		 * small, the sample will be taken immediately, but that isn't a
-		 * problem, as we want the new rate to take effect immediately
-		 * anyway.
-		 *
-		 * If this runs in parallel with dbs_work_handler(), we may end
-		 * up overwriting the sample_delay_ns value that it has just
-		 * written, but the difference should not be too big and it will
-		 * be corrected next time a sample is taken, so it shouldn't be
-		 * significant.
-		 */
-		gov_update_sample_delay(policy_dbs, new_rate);
-		mutex_unlock(&policy_dbs->timer_mutex);
-	}
-}
-
-static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
-		size_t count)
-{
-	unsigned int input;
-	int ret;
-	ret = sscanf(buf, "%u", &input);
-	if (ret != 1)
-		return -EINVAL;
-
-	update_sampling_rate(dbs_data, input);
-	return count;
-}
-
 static ssize_t store_io_is_busy(struct dbs_data *dbs_data, const char *buf,
 		size_t count)
 {
-- 
2.7.1.370.gb2aa7f8

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH V5 3/3] cpufreq: ondemand: Rearrange od_dbs_timer() to avoid updating delay
  2016-02-10  7:00     ` [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now Viresh Kumar
  2016-02-10  7:00       ` [PATCH V5 2/3] cpufreq: conservative: Update sample_delay_ns immediately Viresh Kumar
@ 2016-02-10  7:00       ` Viresh Kumar
  2016-02-11  9:58       ` [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now Rafael J. Wysocki
  2 siblings, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2016-02-10  7:00 UTC (permalink / raw)
  To: Rafael Wysocki; +Cc: linaro-kernel, linux-pm, Viresh Kumar, open list

'delay' is set properly on all paths through od_dbs_timer() except one,
and it can be 0 only in that case.

Move the final update of 'delay' into an else branch of that if block.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_ondemand.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 12213823cc93..0b79f1488be4 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -194,7 +194,7 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, policy->cpu);
-	int delay = 0, sample_type = dbs_info->sample_type;
+	int delay, sample_type = dbs_info->sample_type;
 
 	/* Common NORMAL_SAMPLE setup */
 	dbs_info->sample_type = OD_NORMAL_SAMPLE;
@@ -208,13 +208,12 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
 			/* Setup timer for SUB_SAMPLE */
 			dbs_info->sample_type = OD_SUB_SAMPLE;
 			delay = dbs_info->freq_hi_jiffies;
+		} else {
+			delay = delay_for_sampling_rate(dbs_data->sampling_rate
+							* dbs_info->rate_mult);
 		}
 	}
 
-	if (!delay)
-		delay = delay_for_sampling_rate(dbs_data->sampling_rate
-				* dbs_info->rate_mult);
-
 	return delay;
 }
 
-- 
2.7.1.370.gb2aa7f8

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 1/7] cpufreq: Merge cpufreq_offline_prepare/finish routines
  2016-02-09  3:46 ` [PATCH V4 1/7] cpufreq: Merge cpufreq_offline_prepare/finish routines Viresh Kumar
@ 2016-02-11  0:59   ` Rafael J. Wysocki
  2016-02-11  1:15     ` Rafael J. Wysocki
  0 siblings, 1 reply; 22+ messages in thread
From: Rafael J. Wysocki @ 2016-02-11  0:59 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

On Tue, Feb 9, 2016 at 4:46 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> The offline routine was separated into two halves earlier by
> 'commit 1aee40ac9c86 ("cpufreq: Invoke __cpufreq_remove_dev_finish()
> after releasing cpu_hotplug.lock")'.
>
> The reason cited was a race between accessing a policy's sysfs files and
> the cleanup of the policy's kobject.
>
> That race isn't valid anymore, as we don't remove the policy & its
> kobject completely on hotplugs, but do that from ->remove() callback of
> subsys framework.

Governor sysfs attributes are still removed in
__cpufreq_governor(_EXIT), though, so had store() been used for them,
the deadlock described in the changelog of commit 1aee40ac9c86 would
have been possible.

Fortunately, we don't use store() (which still does get_online_cpus())
for those attributes now.  We use governor_store() for them and that
doesn't call get_online_cpus().  So in fact this patch is only correct
after the recent rework of the governor attributes handling.

Please modify the changelog to explain that more thoroughly.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 1/7] cpufreq: Merge cpufreq_offline_prepare/finish routines
  2016-02-11  0:59   ` Rafael J. Wysocki
@ 2016-02-11  1:15     ` Rafael J. Wysocki
  2016-02-11 11:46       ` Viresh Kumar
  0 siblings, 1 reply; 22+ messages in thread
From: Rafael J. Wysocki @ 2016-02-11  1:15 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Viresh Kumar, Rafael Wysocki, Juri Lelli, Lists linaro-kernel,
	linux-pm, Shilpasri G Bhat, Linux Kernel Mailing List

On Thu, Feb 11, 2016 at 1:59 AM, Rafael J. Wysocki <rafael@kernel.org> wrote:
> On Tue, Feb 9, 2016 at 4:46 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
>> The offline routine was separated into two halves earlier by
>> 'commit 1aee40ac9c86 ("cpufreq: Invoke __cpufreq_remove_dev_finish()
>> after releasing cpu_hotplug.lock")'.
>>
>> The reason cited was a race between accessing a policy's sysfs files and
>> the cleanup of the policy's kobject.
>>
>> That race isn't valid anymore, as we don't remove the policy & its
>> kobject completely on hotplugs, but do that from ->remove() callback of
>> subsys framework.
>
> Governor sysfs attributes are still removed in
> __cpufreq_governor(_EXIT), though, so had store() been used for them,
> the deadlock described in the changelog of commit 1aee40ac9c86 would
> have been possible.
>
> Fortunately, we don't use store() (which still does get_online_cpus())
> for those attributes now.  We use governor_store() for them and that
> doesn't call get_online_cpus().  So in fact this patch is only correct
> after the recent rework of the governor attributes handling.
>
> Please modify the changelog to explain that more thoroughly.

And one question tangentially related to this patch: Would it be
possible to avoid calling __cpufreq_governor(_EXIT) for CPU offline?

The fact that we still carry out the whole governor teardown at that
point is slightly disturbing, as in theory it should be possible to
keep the governor attributes in place across offline/online.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 2/7] cpufreq: Call __cpufreq_governor() with policy->rwsem held
  2016-02-09  3:46 ` [PATCH V4 2/7] cpufreq: Call __cpufreq_governor() with policy->rwsem held Viresh Kumar
@ 2016-02-11  9:48   ` Rafael J. Wysocki
  2016-02-11 11:52     ` Viresh Kumar
  0 siblings, 1 reply; 22+ messages in thread
From: Rafael J. Wysocki @ 2016-02-11  9:48 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

On Tue, Feb 9, 2016 at 4:46 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> This isn't followed properly by all parts of the core code, some follow
> it, whereas others don't.

"The cpufreq core code is not consistent with respect to invoking
__cpufreq_governor() under policy->rwsem."

> Enforcing it will also enable us to remove cpufreq_governor_lock, that
> is used today because we can't guarantee that __cpufreq_governor() isn't
> executed in parallel.

"Changing all code to always hold policy->rwsem around
__cpufreq_governor() invocations will allow us to ..."

> We should also ensure that the lock is held across state changes to the
> governors.
>
> For example, while adding a CPU to the policy on cpu-online path, we
> need to stop the governor, change policy->cpus, start the governor and
> then refresh its limits. The complete sequence must be guaranteed to
> execute without any concurrent races. And that can be achieved using
> policy->rwsem around these use cases.
>
> Also note that cpufreq_driver->stop_cpu() and ->exit() can get called
> while policy->rwsem is held. That shouldn't have any side effects
> though.

The last paragraph is unclear.

Is it supposed to mean that the change will cause
cpufreq_driver->stop_cpu() and ->exit() to be called under
policy->rwsem sometimes?

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 3/7] cpufreq: Remove cpufreq_governor_lock
  2016-02-09  3:46 ` [PATCH V4 3/7] cpufreq: Remove cpufreq_governor_lock Viresh Kumar
@ 2016-02-11  9:53   ` Rafael J. Wysocki
  0 siblings, 0 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2016-02-11  9:53 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

On Tue, Feb 9, 2016 at 4:46 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> We used to drop policy->rwsem just before calling __cpufreq_governor()
> in some cases earlier and so it was possible that __cpufreq_governor()
> runs concurrently via separate threads.
>
> In order to guarantee valid state transitions for governors,
> 'governor_enabled' was required to be protected using some locking and
> we created cpufreq_governor_lock for that.
>
> But now, __cpufreq_governor() is always called from within policy->rwsem
> held and so 'governor_enabled' is protected against races even without
> cpufreq_governor_lock.
>
> Get rid of the extra lock now.

This one looks good, but depends on the [2/7].

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now
  2016-02-10  7:00     ` [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now Viresh Kumar
  2016-02-10  7:00       ` [PATCH V5 2/3] cpufreq: conservative: Update sample_delay_ns immediately Viresh Kumar
  2016-02-10  7:00       ` [PATCH V5 3/3] cpufreq: ondemand: Rearrange od_dbs_timer() to avoid updating delay Viresh Kumar
@ 2016-02-11  9:58       ` Rafael J. Wysocki
  2 siblings, 0 replies; 22+ messages in thread
From: Rafael J. Wysocki @ 2016-02-11  9:58 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Rafael Wysocki, Lists linaro-kernel, linux-pm, Juri Lelli,
	Shilpasri G Bhat, open list

On Wed, Feb 10, 2016 at 8:00 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> The cpufreq core now guarantees that policy->rwsem won't be dropped
> while running the ->governor callback for the CPUFREQ_GOV_POLICY_EXIT
> event and will be held acquired until the complete sequence of governor
> state changes has finished.
>
> This allows governor state machine checks to be dropped from multiple
> functions in cpufreq_governor.c.
>
> This also means that policy_dbs->policy can be initialized upfront, so
> the entire initialization of struct policy_dbs is carried out in one
> place.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> Tested-by: Juri Lelli <juri.lelli@arm.com>
> Tested-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
> ---
> Rafael,
>
> The 4th patch is dropped and 5-7 are updated on top of that.

There are some comments to address on the changelogs of patches
[1-2/7], so can you address them and resend the whole series, please?

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 1/7] cpufreq: Merge cpufreq_offline_prepare/finish routines
  2016-02-11  1:15     ` Rafael J. Wysocki
@ 2016-02-11 11:46       ` Viresh Kumar
  0 siblings, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2016-02-11 11:46 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

On 11-02-16, 02:15, Rafael J. Wysocki wrote:
> And one question tangentially related to this patch: Would it be
> possible to avoid calling __cpufreq_governor(_EXIT) for CPU offline?
> 
> The fact that we still carry out the whole governor teardown at that
> point is slightly disturbing, as in theory it should be possible to
> keep the governor attributes in place across offline/online.

Will think about that after the current code is stable a bit. It should be
possible, but need to see if it is worth it.

-- 
viresh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH V4 2/7] cpufreq: Call __cpufreq_governor() with policy->rwsem held
  2016-02-11  9:48   ` Rafael J. Wysocki
@ 2016-02-11 11:52     ` Viresh Kumar
  0 siblings, 0 replies; 22+ messages in thread
From: Viresh Kumar @ 2016-02-11 11:52 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael Wysocki, Juri Lelli, Lists linaro-kernel, linux-pm,
	Shilpasri G Bhat, Linux Kernel Mailing List

On 11-02-16, 10:48, Rafael J. Wysocki wrote:
> On Tue, Feb 9, 2016 at 4:46 AM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> > Also note that cpufreq_driver->stop_cpu() and ->exit() can get called
> > while policy->rwsem is held. That shouldn't have any side effects
> > though.
> 
> The last paragraph is unclear.
> 
> Is it supposed to mean that the change will cause
> cpufreq_driver->stop_cpu() and ->exit() to be called under
> policy->rwsem sometimes?

Yeah, reworded it a bit ..

Also note that, after this patch cpufreq_driver->stop_cpu() and ->exit()
will get called while policy->rwsem is held, which wasn't the case
earlier. That shouldn't have any side effects though.

-- 
viresh

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2016-02-11 11:52 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-09  3:46 [PATCH V4 0/7] cpufreq: Locking fixes and cleanups Viresh Kumar
2016-02-09  3:46 ` [PATCH V4 1/7] cpufreq: Merge cpufreq_offline_prepare/finish routines Viresh Kumar
2016-02-11  0:59   ` Rafael J. Wysocki
2016-02-11  1:15     ` Rafael J. Wysocki
2016-02-11 11:46       ` Viresh Kumar
2016-02-09  3:46 ` [PATCH V4 2/7] cpufreq: Call __cpufreq_governor() with policy->rwsem held Viresh Kumar
2016-02-11  9:48   ` Rafael J. Wysocki
2016-02-11 11:52     ` Viresh Kumar
2016-02-09  3:46 ` [PATCH V4 3/7] cpufreq: Remove cpufreq_governor_lock Viresh Kumar
2016-02-11  9:53   ` Rafael J. Wysocki
2016-02-09  3:46 ` [PATCH V4 4/7] cpufreq: governor: Move common sysfs tunables to cpufreq_governor.c Viresh Kumar
2016-02-10  0:26   ` Rafael J. Wysocki
2016-02-10  7:00     ` [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now Viresh Kumar
2016-02-10  7:00       ` [PATCH V5 2/3] cpufreq: conservative: Update sample_delay_ns immediately Viresh Kumar
2016-02-10  7:00       ` [PATCH V5 3/3] cpufreq: ondemand: Rearrange od_dbs_timer() to avoid updating delay Viresh Kumar
2016-02-11  9:58       ` [PATCH V5 1/3] cpufreq: governor: No need to manage state machine now Rafael J. Wysocki
2016-02-09  3:46 ` [PATCH V4 5/7] " Viresh Kumar
2016-02-10  0:36   ` Rafael J. Wysocki
2016-02-10  5:36     ` Viresh Kumar
2016-02-09  3:46 ` [PATCH V4 6/7] cpufreq: conservative: Update sample_delay_ns immediately Viresh Kumar
2016-02-09  3:46 ` [PATCH V4 7/7] cpufreq: ondemand: Rearrange od_dbs_timer() to avoid updating delay Viresh Kumar
2016-02-10  0:28   ` Rafael J. Wysocki
