* [PATCH 0/9] cpufreq governor improvements
@ 2016-02-15  1:08 Rafael J. Wysocki
  2016-02-15  1:12 ` [PATCH 1/9] cpufreq: governor: Simplify gov_cancel_work() slightly Rafael J. Wysocki
                   ` (8 more replies)
  0 siblings, 9 replies; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:08 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

Hi All,

Here's a bunch of patches that slightly improve the ondemand/conservative code
on top of the current linux-next branch of my tree (linux-pm.git).

They don't change the way things work fundamentally, but some minor differences
may be noticeable.

[1/9] Do not do atomic_inc() in gov_cancel_work() as there's no reason to do it
      (new version).

[2/9] Avoid atomic ops in scheduler paths if not absolutely necessary (new version).

[3/9] Fix computation of the contribution from nice in dbs_check_cpu().

[4/9] Clean up load-related computations in the common governor code.

[5/9] Get rid of the ->gov_check_cpu callback.

[6/9] Make store_sampling_rate() reset the sample delay to 0 (instead of setting
      it to the new sampling rate) to avoid weird interactions with the ondemand
      governor.

[7/9] Move rate_mult to struct policy_dbs_info.

[8/9] Simplify conditionals in od_dbs_timer().

[9/9] Use microseconds in computations related to sample delay.

The series has been (lightly) tested on a Toshiba Portege R500.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/9] cpufreq: governor: Simplify gov_cancel_work() slightly
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
@ 2016-02-15  1:12 ` Rafael J. Wysocki
  2016-02-15  5:40   ` Viresh Kumar
  2016-02-15  1:13 ` [PATCH 2/9] cpufreq: governor: Avoid atomic operations in hot paths Rafael J. Wysocki
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:12 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

The atomic work counter increment in gov_cancel_work() is no longer
necessary, because work items won't be queued up after
gov_clear_update_util() anyway, so drop it along with the comment
about how it may be missed by gov_clear_update_util().

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

This is a new version of https://patchwork.kernel.org/patch/8291021/ .

Changes from the previous version:
- Rebase.

---
 drivers/cpufreq/cpufreq_governor.c |    8 --------
 1 file changed, 8 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
+++ linux-pm/drivers/cpufreq/cpufreq_governor.c
@@ -300,13 +300,6 @@ static void gov_cancel_work(struct cpufr
 {
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 
-	/* Tell dbs_update_util_handler() to skip queuing up work items. */
-	atomic_inc(&policy_dbs->work_count);
-	/*
-	 * If dbs_update_util_handler() is already running, it may not notice
-	 * the incremented work_count, so wait for it to complete to prevent its
-	 * work item from being queued up after the cancel_work_sync() below.
-	 */
 	gov_clear_update_util(policy_dbs->policy);
 	irq_work_sync(&policy_dbs->irq_work);
 	cancel_work_sync(&policy_dbs->work);
@@ -369,7 +362,6 @@ static void dbs_update_util_handler(stru
 	 * The work may not be allowed to be queued up right now.
 	 * Possible reasons:
 	 * - Work has already been queued up or is in progress.
-	 * - The governor is being stopped.
 	 * - It is too early (too little time from the previous sample).
 	 */
 	if (atomic_inc_return(&policy_dbs->work_count) == 1) {


* [PATCH 2/9] cpufreq: governor: Avoid atomic operations in hot paths
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
  2016-02-15  1:12 ` [PATCH 1/9] cpufreq: governor: Simplify gov_cancel_work() slightly Rafael J. Wysocki
@ 2016-02-15  1:13 ` Rafael J. Wysocki
  2016-02-15  6:17   ` Viresh Kumar
  2016-02-15  8:20   ` Viresh Kumar
  2016-02-15  1:15 ` [PATCH 3/9] cpufreq: governor: Fix nice contribution computation in dbs_check_cpu() Rafael J. Wysocki
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:13 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Rework the handling of work items by dbs_update_util_handler() and
dbs_work_handler() so the former (which is executed in scheduler
paths) only uses atomic operations when absolutely necessary.  That
is, when the policy is shared and dbs_update_util_handler() has
already decided that this is the time to queue up a work item.

In particular, this avoids the atomic ops entirely on platforms where
policy objects are never shared.
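As a rough user-space sketch of the pattern described above (hypothetical names, C11 stdatomic standing in for the kernel's atomic_t, and a compare-and-exchange standing in for atomic_add_unless()):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical analogue of the relevant policy_dbs_info fields. */
struct policy_state {
	bool is_shared;		/* set once at start, read-only afterwards */
	bool work_in_progress;	/* plain load/store on the fast path */
	atomic_int work_count;	/* touched only when the policy is shared */
};

/*
 * Returns true if the caller won the right to queue up work.  The plain
 * work_in_progress check filters out most callers without any atomic
 * operation; the atomic guard is attempted only for shared policies,
 * where multiple CPUs may race to queue the same work item.
 */
static bool try_claim_work(struct policy_state *ps)
{
	if (ps->work_in_progress)
		return false;	/* fast path: no atomics at all */

	if (ps->is_shared) {
		/* analogue of atomic_add_unless(&work_count, 1, 1) */
		int expected = 0;
		if (!atomic_compare_exchange_strong(&ps->work_count,
						    &expected, 1))
			return false;	/* another CPU got there first */
	}

	ps->work_in_progress = true;
	return true;
}

/* Called when the queued-up work completes. */
static void work_done(struct policy_state *ps)
{
	atomic_store(&ps->work_count, 0);
	ps->work_in_progress = false;
}
```

This is only a sketch of the claiming logic; the real patch additionally relies on memory barriers (smp_wmb()/smp_rmb()) to order work_in_progress against the sample delay updates.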

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

This is a new version of https://patchwork.kernel.org/patch/8291051/ .

Changes from the previous version:
- Added a new "is_shared" field to struct policy_dbs_info to be set for
  shared policies to avoid evaluating cpumask_weight() every time
  dbs_update_util_handler() decides to take a sample.

---
 drivers/cpufreq/cpufreq_governor.c |   49 +++++++++++++++++++++++++------------
 drivers/cpufreq/cpufreq_governor.h |    3 ++
 2 files changed, 37 insertions(+), 15 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
+++ linux-pm/drivers/cpufreq/cpufreq_governor.c
@@ -304,6 +304,7 @@ static void gov_cancel_work(struct cpufr
 	irq_work_sync(&policy_dbs->irq_work);
 	cancel_work_sync(&policy_dbs->work);
 	atomic_set(&policy_dbs->work_count, 0);
+	policy_dbs->work_in_progress = false;
 }
 
 static void dbs_work_handler(struct work_struct *work)
@@ -326,13 +327,15 @@ static void dbs_work_handler(struct work
 	policy_dbs->sample_delay_ns = jiffies_to_nsecs(delay);
 	mutex_unlock(&policy_dbs->timer_mutex);
 
+	/* Allow the utilization update handler to queue up more work. */
+	atomic_set(&policy_dbs->work_count, 0);
 	/*
-	 * If the atomic operation below is reordered with respect to the
-	 * sample delay modification, the utilization update handler may end
-	 * up using a stale sample delay value.
+	 * If the update below is reordered with respect to the sample delay
+	 * modification, the utilization update handler may end up using a stale
+	 * sample delay value.
 	 */
-	smp_mb__before_atomic();
-	atomic_dec(&policy_dbs->work_count);
+	smp_wmb();
+	policy_dbs->work_in_progress = false;
 }
 
 static void dbs_irq_work(struct irq_work *irq_work)
@@ -357,6 +360,7 @@ static void dbs_update_util_handler(stru
 {
 	struct cpu_dbs_info *cdbs = container_of(data, struct cpu_dbs_info, update_util);
 	struct policy_dbs_info *policy_dbs = cdbs->policy_dbs;
+	u64 delta_ns;
 
 	/*
 	 * The work may not be allowed to be queued up right now.
@@ -364,17 +368,30 @@ static void dbs_update_util_handler(stru
 	 * - Work has already been queued up or is in progress.
 	 * - It is too early (too little time from the previous sample).
 	 */
-	if (atomic_inc_return(&policy_dbs->work_count) == 1) {
-		u64 delta_ns;
+	if (policy_dbs->work_in_progress)
+		return;
 
-		delta_ns = time - policy_dbs->last_sample_time;
-		if ((s64)delta_ns >= policy_dbs->sample_delay_ns) {
-			policy_dbs->last_sample_time = time;
-			gov_queue_irq_work(policy_dbs);
-			return;
-		}
-	}
-	atomic_dec(&policy_dbs->work_count);
+	/*
+	 * If the reads below are reordered before the check above, the value
+	 * of sample_delay_ns used in the computation may be stale.
+	 */
+	smp_rmb();
+	delta_ns = time - policy_dbs->last_sample_time;
+	if ((s64)delta_ns < policy_dbs->sample_delay_ns)
+		return;
+
+	/*
+	 * If the policy is not shared, the irq_work may be queued up right away
+	 * at this point.  Otherwise, we need to ensure that only one of the
+	 * CPUs sharing the policy will do that.
+	 */
+	if (policy_dbs->is_shared &&
+	    !atomic_add_unless(&policy_dbs->work_count, 1, 1))
+		return;
+
+	policy_dbs->last_sample_time = time;
+	policy_dbs->work_in_progress = true;
+	gov_queue_irq_work(policy_dbs);
 }
 
 static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *policy,
@@ -551,6 +568,8 @@ static int cpufreq_governor_start(struct
 	if (!policy->cur)
 		return -EINVAL;
 
+	policy_dbs->is_shared = policy_is_shared(policy);
+
 	sampling_rate = dbs_data->sampling_rate;
 	ignore_nice = dbs_data->ignore_nice_load;
 
Index: linux-pm/drivers/cpufreq/cpufreq_governor.h
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.h
+++ linux-pm/drivers/cpufreq/cpufreq_governor.h
@@ -130,6 +130,9 @@ struct policy_dbs_info {
 	/* dbs_data may be shared between multiple policy objects */
 	struct dbs_data *dbs_data;
 	struct list_head list;
+	/* Status indicators */
+	bool is_shared;		/* This object is used by multiple CPUs */
+	bool work_in_progress;	/* Work is being queued up or in progress */
 };
 
 static inline void gov_update_sample_delay(struct policy_dbs_info *policy_dbs,


* [PATCH 3/9] cpufreq: governor: Fix nice contribution computation in dbs_check_cpu()
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
  2016-02-15  1:12 ` [PATCH 1/9] cpufreq: governor: Simplify gov_cancel_work() slightly Rafael J. Wysocki
  2016-02-15  1:13 ` [PATCH 2/9] cpufreq: governor: Avoid atomic operations in hot paths Rafael J. Wysocki
@ 2016-02-15  1:15 ` Rafael J. Wysocki
  2016-02-15  8:29   ` Viresh Kumar
  2016-02-15  1:18 ` [PATCH 4/9] cpufreq: governor: Clean up load-related computations Rafael J. Wysocki
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:15 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

The contribution of the CPU nice time to the idle time in dbs_check_cpu()
is computed in a bogus way, as the code may subtract current and previous
nice values for different CPUs.

That doesn't matter for cases when cpufreq policies are not shared,
but may lead to problems otherwise.

Fix the computation and simplify it to avoid taking unnecessary steps.
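A minimal stand-alone sketch of the corrected bookkeeping (hypothetical names; in the real code the state lives in struct cpu_dbs_info and the values come from kcpustat):

```c
#include <stdint.h>

/* Hypothetical per-CPU bookkeeping, mirroring prev_cpu_nice. */
struct cpu_sample {
	uint64_t prev_nice;
};

/*
 * Correct form: the delta for CPU j pairs cur_nice of CPU j with the
 * previous nice value recorded for the same CPU j.  The bug being fixed
 * paired cur_nice of CPU j with the previous value of policy->cpu,
 * which is a different CPU whenever the policy is shared.
 */
static uint64_t nice_delta(struct cpu_sample *j_cdbs, uint64_t cur_nice)
{
	uint64_t delta = cur_nice - j_cdbs->prev_nice;

	j_cdbs->prev_nice = cur_nice;
	return delta;
}
```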

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/cpufreq_governor.c |   18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
+++ linux-pm/drivers/cpufreq/cpufreq_governor.c
@@ -198,22 +198,10 @@ void dbs_check_cpu(struct cpufreq_policy
 		j_cdbs->prev_cpu_idle = cur_idle_time;
 
 		if (ignore_nice) {
-			struct cpu_dbs_info *cdbs = gov->get_cpu_cdbs(cpu);
-			u64 cur_nice;
-			unsigned long cur_nice_jiffies;
+			u64 cur_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
 
-			cur_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE] -
-					 cdbs->prev_cpu_nice;
-			/*
-			 * Assumption: nice time between sampling periods will
-			 * be less than 2^32 jiffies for 32 bit sys
-			 */
-			cur_nice_jiffies = (unsigned long)
-					cputime64_to_jiffies64(cur_nice);
-
-			cdbs->prev_cpu_nice =
-				kcpustat_cpu(j).cpustat[CPUTIME_NICE];
-			idle_time += jiffies_to_usecs(cur_nice_jiffies);
+			idle_time += cputime_to_usecs(cur_nice - j_cdbs->prev_cpu_nice);
+			j_cdbs->prev_cpu_nice = cur_nice;
 		}
 
 		if (unlikely(!wall_time || wall_time < idle_time))


* [PATCH 4/9] cpufreq: governor: Clean up load-related computations
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
                   ` (2 preceding siblings ...)
  2016-02-15  1:15 ` [PATCH 3/9] cpufreq: governor: Fix nice contribution computation in dbs_check_cpu() Rafael J. Wysocki
@ 2016-02-15  1:18 ` Rafael J. Wysocki
  2016-02-15  8:33   ` Viresh Kumar
  2016-02-15  1:19 ` [PATCH 5/9] cpufreq: governor: Get rid of the ->gov_check_cpu callback Rafael J. Wysocki
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:18 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Clean up some load-related computations in dbs_check_cpu() and
cpufreq_governor_start() to get rid of unnecessary operations and
type casts and make the code easier to read.

No functional changes.
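For illustration, a simplified pure-function version of the resulting load computation (hypothetical helper; the real code updates the prev_* fields in place and skips the CPU entirely when the wall time is invalid, keeping its previous load):

```c
#include <stdint.h>

/*
 * Busy fraction of the wall-clock delta, in percent.  The idle delta is
 * clamped to zero if the idle counter appears to go backwards, matching
 * the cleaned-up code above.
 */
static unsigned int compute_load(uint64_t prev_wall, uint64_t cur_wall,
				 uint64_t prev_idle, uint64_t cur_idle)
{
	uint64_t wall_time = cur_wall - prev_wall;
	uint64_t idle_time = cur_idle > prev_idle ? cur_idle - prev_idle : 0;

	if (wall_time == 0 || wall_time < idle_time)
		return 0;	/* sketch only: the governor skips such samples */
	return (unsigned int)(100 * (wall_time - idle_time) / wall_time);
}
```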

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/cpufreq_governor.c |   24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
+++ linux-pm/drivers/cpufreq/cpufreq_governor.c
@@ -186,16 +186,15 @@ void dbs_check_cpu(struct cpufreq_policy
 			io_busy = od_tuners->io_is_busy;
 		cur_idle_time = get_cpu_idle_time(j, &cur_wall_time, io_busy);
 
-		wall_time = (unsigned int)
-			(cur_wall_time - j_cdbs->prev_cpu_wall);
+		wall_time = cur_wall_time - j_cdbs->prev_cpu_wall;
 		j_cdbs->prev_cpu_wall = cur_wall_time;
 
-		if (cur_idle_time < j_cdbs->prev_cpu_idle)
-			cur_idle_time = j_cdbs->prev_cpu_idle;
-
-		idle_time = (unsigned int)
-			(cur_idle_time - j_cdbs->prev_cpu_idle);
-		j_cdbs->prev_cpu_idle = cur_idle_time;
+		if (cur_idle_time <= j_cdbs->prev_cpu_idle) {
+			idle_time = 0;
+		} else {
+			idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
+			j_cdbs->prev_cpu_idle = cur_idle_time;
+		}
 
 		if (ignore_nice) {
 			u64 cur_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
@@ -571,13 +570,10 @@ static int cpufreq_governor_start(struct
 		struct cpu_dbs_info *j_cdbs = gov->get_cpu_cdbs(j);
 		unsigned int prev_load;
 
-		j_cdbs->prev_cpu_idle =
-			get_cpu_idle_time(j, &j_cdbs->prev_cpu_wall, io_busy);
+		j_cdbs->prev_cpu_idle = get_cpu_idle_time(j, &j_cdbs->prev_cpu_wall, io_busy);
 
-		prev_load = (unsigned int)(j_cdbs->prev_cpu_wall -
-					    j_cdbs->prev_cpu_idle);
-		j_cdbs->prev_load = 100 * prev_load /
-				    (unsigned int)j_cdbs->prev_cpu_wall;
+		prev_load = j_cdbs->prev_cpu_wall - j_cdbs->prev_cpu_idle;
+		j_cdbs->prev_load = 100 * prev_load / j_cdbs->prev_cpu_wall;
 
 		if (ignore_nice)
 			j_cdbs->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];


* [PATCH 5/9] cpufreq: governor: Get rid of the ->gov_check_cpu callback
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
                   ` (3 preceding siblings ...)
  2016-02-15  1:18 ` [PATCH 4/9] cpufreq: governor: Clean up load-related computations Rafael J. Wysocki
@ 2016-02-15  1:19 ` Rafael J. Wysocki
  2016-02-15  8:52   ` Viresh Kumar
  2016-02-15  1:20 ` [PATCH 6/9] cpufreq: governor: Reset sample delay in store_sampling_rate() Rafael J. Wysocki
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:19 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

The way the ->gov_check_cpu governor callback is used by the ondemand
and conservative governors is not really straightforward.  Namely, the
governor calls dbs_check_cpu() which updates the load information for
the policy and then invokes ->gov_check_cpu() for the governor.

To get rid of that entanglement, notice that cpufreq_governor_limits()
doesn't need to call dbs_check_cpu() directly.  Instead, it can simply
reset the sample delay to 0 which will cause a sample to be taken
immediately.  The result of that is practically equivalent to calling
dbs_check_cpu() except that it will trigger a full update of governor
internal state and not just the ->gov_check_cpu() part.

Following that observation, make cpufreq_governor_limits() reset
the sample delay and turn dbs_check_cpu() into a function, called
dbs_update(), that simply evaluates the load and returns the result.

That function can now be called by governors from the routines that
previously were pointed to by ->gov_check_cpu and those routines
can be called directly by each governor instead of dbs_check_cpu().
This way ->gov_check_cpu becomes unnecessary, so drop it.
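The "reset the sample delay to 0" trick can be sketched in isolation (hypothetical names; the real check lives in dbs_update_util_handler() and uses ktime values in nanoseconds):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical analogue of the per-policy timing fields. */
struct policy_timing {
	uint64_t last_sample_time;
	int64_t  sample_delay_ns;
};

/*
 * A sample is due once the time elapsed since the last sample reaches
 * sample_delay_ns, so storing 0 there makes the very next utilization
 * update take a sample - which is how cpufreq_governor_limits() forces
 * an immediate re-evaluation without calling the update code directly.
 */
static bool sample_due(const struct policy_timing *t, uint64_t now)
{
	return (int64_t)(now - t->last_sample_time) >= t->sample_delay_ns;
}
```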

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/cpufreq_conservative.c |   26 +++++++++-----------------
 drivers/cpufreq/cpufreq_governor.c     |   15 ++++++++-------
 drivers/cpufreq/cpufreq_governor.h     |    3 +--
 drivers/cpufreq/cpufreq_ondemand.c     |   15 +++++++++------
 4 files changed, 27 insertions(+), 32 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
+++ linux-pm/drivers/cpufreq/cpufreq_governor.c
@@ -140,9 +140,8 @@ static const struct sysfs_ops governor_s
 	.store	= governor_store,
 };
 
-void dbs_check_cpu(struct cpufreq_policy *policy)
+unsigned int dbs_update(struct cpufreq_policy *policy)
 {
-	int cpu = policy->cpu;
 	struct dbs_governor *gov = dbs_governor_of(policy);
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
@@ -154,7 +153,7 @@ void dbs_check_cpu(struct cpufreq_policy
 
 	if (gov->governor == GOV_ONDEMAND) {
 		struct od_cpu_dbs_info_s *od_dbs_info =
-				gov->get_cpu_dbs_info_s(cpu);
+				gov->get_cpu_dbs_info_s(policy->cpu);
 
 		/*
 		 * Sometimes, the ondemand governor uses an additional
@@ -250,10 +249,9 @@ void dbs_check_cpu(struct cpufreq_policy
 		if (load > max_load)
 			max_load = load;
 	}
-
-	gov->gov_check_cpu(cpu, max_load);
+	return max_load;
 }
-EXPORT_SYMBOL_GPL(dbs_check_cpu);
+EXPORT_SYMBOL_GPL(dbs_update);
 
 void gov_set_update_util(struct policy_dbs_info *policy_dbs,
 			 unsigned int delay_us)
@@ -610,11 +608,14 @@ static int cpufreq_governor_limits(struc
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 
 	mutex_lock(&policy_dbs->timer_mutex);
+
 	if (policy->max < policy->cur)
 		__cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
 	else if (policy->min > policy->cur)
 		__cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L);
-	dbs_check_cpu(policy);
+
+	gov_update_sample_delay(policy_dbs, 0);
+
 	mutex_unlock(&policy_dbs->timer_mutex);
 
 	return 0;
Index: linux-pm/drivers/cpufreq/cpufreq_governor.h
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.h
+++ linux-pm/drivers/cpufreq/cpufreq_governor.h
@@ -202,7 +202,6 @@ struct dbs_governor {
 	struct cpu_dbs_info *(*get_cpu_cdbs)(int cpu);
 	void *(*get_cpu_dbs_info_s)(int cpu);
 	unsigned int (*gov_dbs_timer)(struct cpufreq_policy *policy);
-	void (*gov_check_cpu)(int cpu, unsigned int load);
 	int (*init)(struct dbs_data *dbs_data, bool notify);
 	void (*exit)(struct dbs_data *dbs_data, bool notify);
 
@@ -235,7 +234,7 @@ static inline int delay_for_sampling_rat
 }
 
 extern struct mutex dbs_data_mutex;
-void dbs_check_cpu(struct cpufreq_policy *policy);
+unsigned int dbs_update(struct cpufreq_policy *policy);
 int cpufreq_governor_dbs(struct cpufreq_policy *policy, unsigned int event);
 void od_register_powersave_bias_handler(unsigned int (*f)
 		(struct cpufreq_policy *, unsigned int, unsigned int),
Index: linux-pm/drivers/cpufreq/cpufreq_conservative.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_conservative.c
+++ linux-pm/drivers/cpufreq/cpufreq_conservative.c
@@ -44,20 +44,20 @@ static inline unsigned int get_freq_targ
  * Any frequency increase takes it to the maximum frequency. Frequency reduction
  * happens at minimum steps of 5% (default) of maximum frequency
  */
-static void cs_check_cpu(int cpu, unsigned int load)
+static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
 {
-	struct cs_cpu_dbs_info_s *dbs_info = &per_cpu(cs_cpu_dbs_info, cpu);
-	struct cpufreq_policy *policy = dbs_info->cdbs.policy_dbs->policy;
+	struct cs_cpu_dbs_info_s *dbs_info = &per_cpu(cs_cpu_dbs_info, policy->cpu);
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
+	unsigned int load = dbs_update(policy);
 
 	/*
 	 * break out if we 'cannot' reduce the speed as the user might
 	 * want freq_step to be zero
 	 */
 	if (cs_tuners->freq_step == 0)
-		return;
+		goto out;
 
 	/* Check for frequency increase */
 	if (load > dbs_data->up_threshold) {
@@ -65,7 +65,7 @@ static void cs_check_cpu(int cpu, unsign
 
 		/* if we are already at full speed then break out early */
 		if (dbs_info->requested_freq == policy->max)
-			return;
+			goto out;
 
 		dbs_info->requested_freq += get_freq_target(cs_tuners, policy);
 
@@ -74,12 +74,12 @@ static void cs_check_cpu(int cpu, unsign
 
 		__cpufreq_driver_target(policy, dbs_info->requested_freq,
 			CPUFREQ_RELATION_H);
-		return;
+		goto out;
 	}
 
 	/* if sampling_down_factor is active break out early */
 	if (++dbs_info->down_skip < dbs_data->sampling_down_factor)
-		return;
+		goto out;
 	dbs_info->down_skip = 0;
 
 	/* Check for frequency decrease */
@@ -89,7 +89,7 @@ static void cs_check_cpu(int cpu, unsign
 		 * if we cannot reduce the frequency anymore, break out early
 		 */
 		if (policy->cur == policy->min)
-			return;
+			goto out;
 
 		freq_target = get_freq_target(cs_tuners, policy);
 		if (dbs_info->requested_freq > freq_target)
@@ -99,16 +99,9 @@ static void cs_check_cpu(int cpu, unsign
 
 		__cpufreq_driver_target(policy, dbs_info->requested_freq,
 				CPUFREQ_RELATION_L);
-		return;
 	}
-}
-
-static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
-{
-	struct policy_dbs_info *policy_dbs = policy->governor_data;
-	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 
-	dbs_check_cpu(policy);
+ out:
 	return delay_for_sampling_rate(dbs_data->sampling_rate);
 }
 
@@ -300,7 +293,6 @@ static struct dbs_governor cs_dbs_gov =
 	.get_cpu_cdbs = get_cpu_cdbs,
 	.get_cpu_dbs_info_s = get_cpu_dbs_info_s,
 	.gov_dbs_timer = cs_dbs_timer,
-	.gov_check_cpu = cs_check_cpu,
 	.init = cs_init,
 	.exit = cs_exit,
 };
Index: linux-pm/drivers/cpufreq/cpufreq_ondemand.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_ondemand.c
+++ linux-pm/drivers/cpufreq/cpufreq_ondemand.c
@@ -150,13 +150,13 @@ static void dbs_freq_increase(struct cpu
  * (default), then we try to increase frequency. Else, we adjust the frequency
  * proportional to load.
  */
-static void od_check_cpu(int cpu, unsigned int load)
+static void od_update(struct cpufreq_policy *policy)
 {
-	struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
+	struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, policy->cpu);
 	struct policy_dbs_info *policy_dbs = dbs_info->cdbs.policy_dbs;
-	struct cpufreq_policy *policy = policy_dbs->policy;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
+	unsigned int load = dbs_update(policy);
 
 	dbs_info->freq_lo = 0;
 
@@ -198,12 +198,16 @@ static unsigned int od_dbs_timer(struct
 
 	/* Common NORMAL_SAMPLE setup */
 	dbs_info->sample_type = OD_NORMAL_SAMPLE;
-	if (sample_type == OD_SUB_SAMPLE) {
+	/*
+	 * OD_SUB_SAMPLE doesn't make sense if sample_delay_ns is 0, so ignore
+	 * it then.
+	 */
+	if (sample_type == OD_SUB_SAMPLE && policy_dbs->sample_delay_ns > 0) {
 		delay = dbs_info->freq_lo_jiffies;
 		__cpufreq_driver_target(policy, dbs_info->freq_lo,
 					CPUFREQ_RELATION_H);
 	} else {
-		dbs_check_cpu(policy);
+		od_update(policy);
 		if (dbs_info->freq_lo) {
 			/* Setup timer for SUB_SAMPLE */
 			dbs_info->sample_type = OD_SUB_SAMPLE;
@@ -428,7 +432,6 @@ static struct dbs_governor od_dbs_gov =
 	.get_cpu_cdbs = get_cpu_cdbs,
 	.get_cpu_dbs_info_s = get_cpu_dbs_info_s,
 	.gov_dbs_timer = od_dbs_timer,
-	.gov_check_cpu = od_check_cpu,
 	.gov_ops = &od_ops,
 	.init = od_init,
 	.exit = od_exit,


* [PATCH 6/9] cpufreq: governor: Reset sample delay in store_sampling_rate()
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
                   ` (4 preceding siblings ...)
  2016-02-15  1:19 ` [PATCH 5/9] cpufreq: governor: Get rid of the ->gov_check_cpu callback Rafael J. Wysocki
@ 2016-02-15  1:20 ` Rafael J. Wysocki
  2016-02-15  8:53   ` Viresh Kumar
  2016-02-15  1:20 ` [PATCH 7/9] cpufreq: governor: Move rate_mult to struct policy_dbs Rafael J. Wysocki
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:20 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

If store_sampling_rate() updates the sample delay when the ondemand
governor is in the middle of its high/low dance (OD_SUB_SAMPLE sample
type is set), the governor will still do the bottom half of the
previous sample which may take too much time.

To prevent that from happening, change store_sampling_rate() to always
reset the sample delay to 0 which also is consistent with the new
behavior of cpufreq_governor_limits().

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/cpufreq_governor.c |   16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
+++ linux-pm/drivers/cpufreq/cpufreq_governor.c
@@ -38,10 +38,6 @@ EXPORT_SYMBOL_GPL(dbs_data_mutex);
  * reducing the sampling rate, we need to make the new value effective
  * immediately.
  *
- * On the other hand, if new rate is larger than the old, then we may evaluate
- * the load too soon, and it might we worth updating sample_delay_ns then as
- * well.
- *
  * This must be called with dbs_data->mutex held, otherwise traversing
  * policy_dbs_list isn't safe.
  */
@@ -69,18 +65,14 @@ ssize_t store_sampling_rate(struct dbs_d
 		 * really doesn't matter.  If the read returns a value that's
 		 * too big, the sample will be skipped, but the next invocation
 		 * of dbs_update_util_handler() (when the update has been
-		 * completed) will take a sample.  If the returned value is too
-		 * small, the sample will be taken immediately, but that isn't a
-		 * problem, as we want the new rate to take effect immediately
-		 * anyway.
+		 * completed) will take a sample.
 		 *
 		 * If this runs in parallel with dbs_work_handler(), we may end
 		 * up overwriting the sample_delay_ns value that it has just
-		 * written, but the difference should not be too big and it will
-		 * be corrected next time a sample is taken, so it shouldn't be
-		 * significant.
+		 * written, but it will be corrected next time a sample is
+		 * taken, so it shouldn't be significant.
 		 */
-		gov_update_sample_delay(policy_dbs, dbs_data->sampling_rate);
+		gov_update_sample_delay(policy_dbs, 0);
 		mutex_unlock(&policy_dbs->timer_mutex);
 	}
 


* [PATCH 7/9] cpufreq: governor: Move rate_mult to struct policy_dbs
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
                   ` (5 preceding siblings ...)
  2016-02-15  1:20 ` [PATCH 6/9] cpufreq: governor: Reset sample delay in store_sampling_rate() Rafael J. Wysocki
@ 2016-02-15  1:20 ` Rafael J. Wysocki
  2016-02-15  8:56   ` Viresh Kumar
  2016-02-15  1:21 ` [PATCH 8/9] cpufreq: ondemand: Simplify conditionals in od_dbs_timer() Rafael J. Wysocki
  2016-02-15  1:22 ` [PATCH 9/9] cpufreq: governor: Use microseconds in sample delay computations Rafael J. Wysocki
  8 siblings, 1 reply; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:20 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

The rate_mult field in struct od_cpu_dbs_info_s is used by the code
shared with the conservative governor, and to access it that code
has to do an ugly governor type check.  However, first, it is only
ever used for policy->cpu, so it is per-policy rather than per-CPU,
and second, it is initialized to 1 by cpufreq_governor_start(),
so if the conservative governor never modifies it, it will have no
effect on the results of any computations.

For these reasons, move rate_mult to struct policy_dbs_info (as a
common field).

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/cpufreq_governor.c |   25 +++++++++----------------
 drivers/cpufreq/cpufreq_governor.h |    3 ++-
 drivers/cpufreq/cpufreq_ondemand.c |   23 +++++++++++++++--------
 3 files changed, 26 insertions(+), 25 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
+++ linux-pm/drivers/cpufreq/cpufreq_governor.c
@@ -138,24 +138,17 @@ unsigned int dbs_update(struct cpufreq_p
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
-	unsigned int sampling_rate = dbs_data->sampling_rate;
 	unsigned int ignore_nice = dbs_data->ignore_nice_load;
 	unsigned int max_load = 0;
-	unsigned int j;
+	unsigned int sampling_rate, j;
 
-	if (gov->governor == GOV_ONDEMAND) {
-		struct od_cpu_dbs_info_s *od_dbs_info =
-				gov->get_cpu_dbs_info_s(policy->cpu);
-
-		/*
-		 * Sometimes, the ondemand governor uses an additional
-		 * multiplier to give long delays. So apply this multiplier to
-		 * the 'sampling_rate', so as to keep the wake-up-from-idle
-		 * detection logic a bit conservative.
-		 */
-		sampling_rate *= od_dbs_info->rate_mult;
-
-	}
+	/*
+	 * Sometimes governors may use an additional multiplier to increase
+	 * sample delays temporarily.  Apply that multiplier to sampling_rate
+	 * so as to keep the wake-up-from-idle detection logic a bit
+	 * conservative.
+	 */
+	sampling_rate = dbs_data->sampling_rate * policy_dbs->rate_mult;
 
 	/* Get Absolute Load */
 	for_each_cpu(j, policy->cpus) {
@@ -546,6 +539,7 @@ static int cpufreq_governor_start(struct
 		return -EINVAL;
 
 	policy_dbs->is_shared = policy_is_shared(policy);
+	policy_dbs->rate_mult = 1;
 
 	sampling_rate = dbs_data->sampling_rate;
 	ignore_nice = dbs_data->ignore_nice_load;
@@ -579,7 +573,6 @@ static int cpufreq_governor_start(struct
 		struct od_ops *od_ops = gov->gov_ops;
 		struct od_cpu_dbs_info_s *od_dbs_info = gov->get_cpu_dbs_info_s(cpu);
 
-		od_dbs_info->rate_mult = 1;
 		od_dbs_info->sample_type = OD_NORMAL_SAMPLE;
 		od_ops->powersave_bias_init_cpu(cpu);
 	}
Index: linux-pm/drivers/cpufreq/cpufreq_governor.h
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.h
+++ linux-pm/drivers/cpufreq/cpufreq_governor.h
@@ -130,6 +130,8 @@ struct policy_dbs_info {
 	/* dbs_data may be shared between multiple policy objects */
 	struct dbs_data *dbs_data;
 	struct list_head list;
+	/* Multiplier for increasing sample delay temporarily. */
+	unsigned int rate_mult;
 	/* Status indicators */
 	bool is_shared;		/* This object is used by multiple CPUs */
 	bool work_in_progress;	/* Work is being queued up or in progress */
@@ -163,7 +165,6 @@ struct od_cpu_dbs_info_s {
 	unsigned int freq_lo;
 	unsigned int freq_lo_jiffies;
 	unsigned int freq_hi_jiffies;
-	unsigned int rate_mult;
 	unsigned int sample_type:1;
 };
 
Index: linux-pm/drivers/cpufreq/cpufreq_ondemand.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_ondemand.c
+++ linux-pm/drivers/cpufreq/cpufreq_ondemand.c
@@ -164,7 +164,7 @@ static void od_update(struct cpufreq_pol
 	if (load > dbs_data->up_threshold) {
 		/* If switching to max speed, apply sampling_down_factor */
 		if (policy->cur < policy->max)
-			dbs_info->rate_mult = dbs_data->sampling_down_factor;
+			policy_dbs->rate_mult = dbs_data->sampling_down_factor;
 		dbs_freq_increase(policy, policy->max);
 	} else {
 		/* Calculate the next frequency proportional to load */
@@ -175,7 +175,7 @@ static void od_update(struct cpufreq_pol
 		freq_next = min_f + load * (max_f - min_f) / 100;
 
 		/* No longer fully busy, reset rate_mult */
-		dbs_info->rate_mult = 1;
+		policy_dbs->rate_mult = 1;
 
 		if (!od_tuners->powersave_bias) {
 			__cpufreq_driver_target(policy, freq_next,
@@ -214,7 +214,7 @@ static unsigned int od_dbs_timer(struct
 			delay = dbs_info->freq_hi_jiffies;
 		} else {
 			delay = delay_for_sampling_rate(dbs_data->sampling_rate
-							* dbs_info->rate_mult);
+							* policy_dbs->rate_mult);
 		}
 	}
 
@@ -266,20 +266,27 @@ static ssize_t store_up_threshold(struct
 static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
 		const char *buf, size_t count)
 {
-	unsigned int input, j;
+	struct policy_dbs_info *policy_dbs;
+	unsigned int input;
 	int ret;
 	ret = sscanf(buf, "%u", &input);
 
 	if (ret != 1 || input > MAX_SAMPLING_DOWN_FACTOR || input < 1)
 		return -EINVAL;
+
 	dbs_data->sampling_down_factor = input;
 
 	/* Reset down sampling multiplier in case it was active */
-	for_each_online_cpu(j) {
-		struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info,
-				j);
-		dbs_info->rate_mult = 1;
+	list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) {
+		/*
+		 * Doing this without locking might lead to using different
+		 * rate_mult values in od_update() and od_dbs_timer().
+		 */
+		mutex_lock(&policy_dbs->timer_mutex);
+		policy_dbs->rate_mult = 1;
+		mutex_unlock(&policy_dbs->timer_mutex);
 	}
+
 	return count;
 }
 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 8/9] cpufreq: ondemand: Simplify conditionals in od_dbs_timer()
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
                   ` (6 preceding siblings ...)
  2016-02-15  1:20 ` [PATCH 7/9] cpufreq: governor: Move rate_mult to struct policy_dbs Rafael J. Wysocki
@ 2016-02-15  1:21 ` Rafael J. Wysocki
  2016-02-15  8:57   ` Viresh Kumar
  2016-02-15  1:22 ` [PATCH 9/9] cpufreq: governor: Use microseconds in sample delay computations Rafael J. Wysocki
  8 siblings, 1 reply; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:21 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Reduce the indentation level in the conditionals in od_dbs_timer()
and drop the delay variable from it.

No functional changes.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/cpufreq_ondemand.c |   24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_ondemand.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_ondemand.c
+++ linux-pm/drivers/cpufreq/cpufreq_ondemand.c
@@ -194,7 +194,7 @@ static unsigned int od_dbs_timer(struct
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, policy->cpu);
-	int delay, sample_type = dbs_info->sample_type;
+	int sample_type = dbs_info->sample_type;
 
 	/* Common NORMAL_SAMPLE setup */
 	dbs_info->sample_type = OD_NORMAL_SAMPLE;
@@ -203,22 +203,20 @@ static unsigned int od_dbs_timer(struct
 	 * it then.
 	 */
 	if (sample_type == OD_SUB_SAMPLE && policy_dbs->sample_delay_ns > 0) {
-		delay = dbs_info->freq_lo_jiffies;
 		__cpufreq_driver_target(policy, dbs_info->freq_lo,
 					CPUFREQ_RELATION_H);
-	} else {
-		od_update(policy);
-		if (dbs_info->freq_lo) {
-			/* Setup timer for SUB_SAMPLE */
-			dbs_info->sample_type = OD_SUB_SAMPLE;
-			delay = dbs_info->freq_hi_jiffies;
-		} else {
-			delay = delay_for_sampling_rate(dbs_data->sampling_rate
-							* policy_dbs->rate_mult);
-		}
+		return dbs_info->freq_lo_jiffies;
 	}
 
-	return delay;
+	od_update(policy);
+
+	if (dbs_info->freq_lo) {
+		/* Setup timer for SUB_SAMPLE */
+		dbs_info->sample_type = OD_SUB_SAMPLE;
+		return dbs_info->freq_hi_jiffies;
+	}
+
+	return delay_for_sampling_rate(dbs_data->sampling_rate * policy_dbs->rate_mult);
 }
 
 /************************** sysfs interface ************************/


* [PATCH 9/9] cpufreq: governor: Use microseconds in sample delay computations
  2016-02-15  1:08 [PATCH 0/9] cpufreq governor improvements Rafael J. Wysocki
                   ` (7 preceding siblings ...)
  2016-02-15  1:21 ` [PATCH 8/9] cpufreq: ondemand: Simplify conditionals in od_dbs_timer() Rafael J. Wysocki
@ 2016-02-15  1:22 ` Rafael J. Wysocki
  2016-02-15  8:58   ` Viresh Kumar
  8 siblings, 1 reply; 20+ messages in thread
From: Rafael J. Wysocki @ 2016-02-15  1:22 UTC (permalink / raw)
  To: Linux PM list; +Cc: Linux Kernel Mailing List, Viresh Kumar, Juri Lelli

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Do not convert between microseconds and jiffies in governor
computations related to the sampling rate and sample delay, and
drop delay_for_sampling_rate(), which has no remaining users.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/cpufreq_conservative.c |    2 +-
 drivers/cpufreq/cpufreq_governor.c     |    4 +---
 drivers/cpufreq/cpufreq_governor.h     |   15 ++-------------
 drivers/cpufreq/cpufreq_ondemand.c     |   28 +++++++++++++---------------
 4 files changed, 17 insertions(+), 32 deletions(-)

Index: linux-pm/drivers/cpufreq/cpufreq_governor.h
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.h
+++ linux-pm/drivers/cpufreq/cpufreq_governor.h
@@ -163,8 +163,8 @@ struct od_cpu_dbs_info_s {
 	struct cpu_dbs_info cdbs;
 	struct cpufreq_frequency_table *freq_table;
 	unsigned int freq_lo;
-	unsigned int freq_lo_jiffies;
-	unsigned int freq_hi_jiffies;
+	unsigned int freq_lo_delay_us;
+	unsigned int freq_hi_delay_us;
 	unsigned int sample_type:1;
 };
 
@@ -223,17 +223,6 @@ struct od_ops {
 	void (*freq_increase)(struct cpufreq_policy *policy, unsigned int freq);
 };
 
-static inline int delay_for_sampling_rate(unsigned int sampling_rate)
-{
-	int delay = usecs_to_jiffies(sampling_rate);
-
-	/* We want all CPUs to do sampling nearly on same jiffy */
-	if (num_online_cpus() > 1)
-		delay -= jiffies % delay;
-
-	return delay;
-}
-
 extern struct mutex dbs_data_mutex;
 unsigned int dbs_update(struct cpufreq_policy *policy);
 int cpufreq_governor_dbs(struct cpufreq_policy *policy, unsigned int event);
Index: linux-pm/drivers/cpufreq/cpufreq_conservative.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_conservative.c
+++ linux-pm/drivers/cpufreq/cpufreq_conservative.c
@@ -102,7 +102,7 @@ static unsigned int cs_dbs_timer(struct
 	}
 
  out:
-	return delay_for_sampling_rate(dbs_data->sampling_rate);
+	return dbs_data->sampling_rate;
 }
 
 static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
Index: linux-pm/drivers/cpufreq/cpufreq_ondemand.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_ondemand.c
+++ linux-pm/drivers/cpufreq/cpufreq_ondemand.c
@@ -66,8 +66,8 @@ static int should_io_be_busy(void)
 
 /*
  * Find right freq to be set now with powersave_bias on.
- * Returns the freq_hi to be used right now and will set freq_hi_jiffies,
- * freq_lo, and freq_lo_jiffies in percpu area for averaging freqs.
+ * Returns the freq_hi to be used right now and will set freq_hi_delay_us,
+ * freq_lo, and freq_lo_delay_us in percpu area for averaging freqs.
  */
 static unsigned int generic_powersave_bias_target(struct cpufreq_policy *policy,
 		unsigned int freq_next, unsigned int relation)
@@ -75,7 +75,7 @@ static unsigned int generic_powersave_bi
 	unsigned int freq_req, freq_reduc, freq_avg;
 	unsigned int freq_hi, freq_lo;
 	unsigned int index = 0;
-	unsigned int jiffies_total, jiffies_hi, jiffies_lo;
+	unsigned int delay_hi_us;
 	struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info,
 						   policy->cpu);
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
@@ -84,7 +84,7 @@ static unsigned int generic_powersave_bi
 
 	if (!dbs_info->freq_table) {
 		dbs_info->freq_lo = 0;
-		dbs_info->freq_lo_jiffies = 0;
+		dbs_info->freq_lo_delay_us = 0;
 		return freq_next;
 	}
 
@@ -107,17 +107,15 @@ static unsigned int generic_powersave_bi
 	/* Find out how long we have to be in hi and lo freqs */
 	if (freq_hi == freq_lo) {
 		dbs_info->freq_lo = 0;
-		dbs_info->freq_lo_jiffies = 0;
+		dbs_info->freq_lo_delay_us = 0;
 		return freq_lo;
 	}
-	jiffies_total = usecs_to_jiffies(dbs_data->sampling_rate);
-	jiffies_hi = (freq_avg - freq_lo) * jiffies_total;
-	jiffies_hi += ((freq_hi - freq_lo) / 2);
-	jiffies_hi /= (freq_hi - freq_lo);
-	jiffies_lo = jiffies_total - jiffies_hi;
+	delay_hi_us = (freq_avg - freq_lo) * dbs_data->sampling_rate;
+	delay_hi_us += (freq_hi - freq_lo) / 2;
+	delay_hi_us /= freq_hi - freq_lo;
+	dbs_info->freq_hi_delay_us = delay_hi_us;
 	dbs_info->freq_lo = freq_lo;
-	dbs_info->freq_lo_jiffies = jiffies_lo;
-	dbs_info->freq_hi_jiffies = jiffies_hi;
+	dbs_info->freq_lo_delay_us = dbs_data->sampling_rate - delay_hi_us;
 	return freq_hi;
 }
 
@@ -205,7 +203,7 @@ static unsigned int od_dbs_timer(struct
 	if (sample_type == OD_SUB_SAMPLE && policy_dbs->sample_delay_ns > 0) {
 		__cpufreq_driver_target(policy, dbs_info->freq_lo,
 					CPUFREQ_RELATION_H);
-		return dbs_info->freq_lo_jiffies;
+		return dbs_info->freq_lo_delay_us;
 	}
 
 	od_update(policy);
@@ -213,10 +211,10 @@ static unsigned int od_dbs_timer(struct
 	if (dbs_info->freq_lo) {
 		/* Setup timer for SUB_SAMPLE */
 		dbs_info->sample_type = OD_SUB_SAMPLE;
-		return dbs_info->freq_hi_jiffies;
+		return dbs_info->freq_hi_delay_us;
 	}
 
-	return delay_for_sampling_rate(dbs_data->sampling_rate * policy_dbs->rate_mult);
+	return dbs_data->sampling_rate * policy_dbs->rate_mult;
 }
 
 /************************** sysfs interface ************************/
Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
+++ linux-pm/drivers/cpufreq/cpufreq_governor.c
@@ -282,7 +282,6 @@ static void dbs_work_handler(struct work
 	struct policy_dbs_info *policy_dbs;
 	struct cpufreq_policy *policy;
 	struct dbs_governor *gov;
-	unsigned int delay;
 
 	policy_dbs = container_of(work, struct policy_dbs_info, work);
 	policy = policy_dbs->policy;
@@ -293,8 +292,7 @@ static void dbs_work_handler(struct work
 	 * ondemand governor isn't updating the sampling rate in parallel.
 	 */
 	mutex_lock(&policy_dbs->timer_mutex);
-	delay = gov->gov_dbs_timer(policy);
-	policy_dbs->sample_delay_ns = jiffies_to_nsecs(delay);
+	gov_update_sample_delay(policy_dbs, gov->gov_dbs_timer(policy));
 	mutex_unlock(&policy_dbs->timer_mutex);
 
 	/* Allow the utilization update handler to queue up more work. */
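For reference, the microsecond-based hi/lo split computed by generic_powersave_bias_target() in the hunk above can be reproduced with a small standalone version of the arithmetic (the input values in the example below are made up):

```c
/*
 * Standalone sketch of the delay split from the patch above: the
 * sampling period is divided between freq_hi and freq_lo in
 * proportion to where freq_avg falls between them, rounding the
 * freq_hi share to the nearest microsecond.
 */
unsigned int freq_hi_delay_us(unsigned int freq_lo, unsigned int freq_hi,
			      unsigned int freq_avg, unsigned int rate_us)
{
	unsigned int d = (freq_avg - freq_lo) * rate_us;

	d += (freq_hi - freq_lo) / 2;	/* round to nearest */
	return d / (freq_hi - freq_lo);
}
```

For example, with freq_lo = 1000, freq_hi = 2000, freq_avg = 1600 (MHz) and a 10000 us sampling rate, 6000 us go to freq_hi and the remaining 4000 us to freq_lo.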


* Re: [PATCH 1/9] cpufreq: governor: Simplify gov_cancel_work() slightly
  2016-02-15  1:12 ` [PATCH 1/9] cpufreq: governor: Simplify gov_cancel_work() slightly Rafael J. Wysocki
@ 2016-02-15  5:40   ` Viresh Kumar
  0 siblings, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  5:40 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:12, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> The atomic work counter increment in gov_cancel_work() is not
> necessary any more, because work items won't be queued up after
> gov_clear_update_util() anyway, so drop it along with the comment
> about how it may be missed by gov_clear_update_util().
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
> 
> This is a new version of https://patchwork.kernel.org/patch/8291021/ .
> 
> Changes from the previous version:
> - Rebase.
> 
> ---
>  drivers/cpufreq/cpufreq_governor.c |    8 --------
>  1 file changed, 8 deletions(-)
> 
> Index: linux-pm/drivers/cpufreq/cpufreq_governor.c
> ===================================================================
> --- linux-pm.orig/drivers/cpufreq/cpufreq_governor.c
> +++ linux-pm/drivers/cpufreq/cpufreq_governor.c
> @@ -300,13 +300,6 @@ static void gov_cancel_work(struct cpufr
>  {
>  	struct policy_dbs_info *policy_dbs = policy->governor_data;
>  
> -	/* Tell dbs_update_util_handler() to skip queuing up work items. */
> -	atomic_inc(&policy_dbs->work_count);
> -	/*
> -	 * If dbs_update_util_handler() is already running, it may not notice
> -	 * the incremented work_count, so wait for it to complete to prevent its
> -	 * work item from being queued up after the cancel_work_sync() below.
> -	 */
>  	gov_clear_update_util(policy_dbs->policy);
>  	irq_work_sync(&policy_dbs->irq_work);
>  	cancel_work_sync(&policy_dbs->work);
> @@ -369,7 +362,6 @@ static void dbs_update_util_handler(stru
>  	 * The work may not be allowed to be queued up right now.
>  	 * Possible reasons:
>  	 * - Work has already been queued up or is in progress.
> -	 * - The governor is being stopped.
>  	 * - It is too early (too little time from the previous sample).
>  	 */
>  	if (atomic_inc_return(&policy_dbs->work_count) == 1) {


Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
-- 
viresh


* Re: [PATCH 2/9] cpufreq: governor: Avoid atomic operations in hot paths
  2016-02-15  1:13 ` [PATCH 2/9] cpufreq: governor: Avoid atomic operations in hot paths Rafael J. Wysocki
@ 2016-02-15  6:17   ` Viresh Kumar
  2016-02-15  8:20   ` Viresh Kumar
  1 sibling, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  6:17 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:13, Rafael J. Wysocki wrote:
>  static void dbs_irq_work(struct irq_work *irq_work)
> @@ -357,6 +360,7 @@ static void dbs_update_util_handler(stru
>  {
>  	struct cpu_dbs_info *cdbs = container_of(data, struct cpu_dbs_info, update_util);
>  	struct policy_dbs_info *policy_dbs = cdbs->policy_dbs;
> +	u64 delta_ns;
>  
>  	/*
>  	 * The work may not be allowed to be queued up right now.
> @@ -364,17 +368,30 @@ static void dbs_update_util_handler(stru
>  	 * - Work has already been queued up or is in progress.
>  	 * - It is too early (too little time from the previous sample).
>  	 */
> -	if (atomic_inc_return(&policy_dbs->work_count) == 1) {
> -		u64 delta_ns;
> +	if (policy_dbs->work_in_progress)
> +		return;
>  
> -		delta_ns = time - policy_dbs->last_sample_time;
> -		if ((s64)delta_ns >= policy_dbs->sample_delay_ns) {
> -			policy_dbs->last_sample_time = time;
> -			gov_queue_irq_work(policy_dbs);
> -			return;
> -		}
> -	}
> -	atomic_dec(&policy_dbs->work_count);
> +	/*
> +	 * If the reads below are reordered before the check above, the value
> +	 * of sample_delay_ns used in the computation may be stale.
> +	 */
> +	smp_rmb();
> +	delta_ns = time - policy_dbs->last_sample_time;
> +	if ((s64)delta_ns < policy_dbs->sample_delay_ns)
> +		return;
> +
> +	/*
> +	 * If the policy is not shared, the irq_work may be queued up right away
> +	 * at this point.  Otherwise, we need to ensure that only one of the
> +	 * CPUs sharing the policy will do that.
> +	 */
> +	if (policy_dbs->is_shared &&
> +	    !atomic_add_unless(&policy_dbs->work_count, 1, 1))
> +		return;
> +
> +	policy_dbs->last_sample_time = time;
> +	policy_dbs->work_in_progress = true;
> +	gov_queue_irq_work(policy_dbs);
>  }
>  
>  static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *policy,
> @@ -551,6 +568,8 @@ static int cpufreq_governor_start(struct
>  	if (!policy->cur)
>  		return -EINVAL;
>  
> +	policy_dbs->is_shared = policy_is_shared(policy);
> +
>  	sampling_rate = dbs_data->sampling_rate;
>  	ignore_nice = dbs_data->ignore_nice_load;
>  
> Index: linux-pm/drivers/cpufreq/cpufreq_governor.h
> ===================================================================
> --- linux-pm.orig/drivers/cpufreq/cpufreq_governor.h
> +++ linux-pm/drivers/cpufreq/cpufreq_governor.h
> @@ -130,6 +130,9 @@ struct policy_dbs_info {
>  	/* dbs_data may be shared between multiple policy objects */
>  	struct dbs_data *dbs_data;
>  	struct list_head list;
> +	/* Status indicators */
> +	bool is_shared;		/* This object is used by multiple CPUs */
> +	bool work_in_progress;	/* Work is being queued up or in progress */
>  };
>  
>  static inline void gov_update_sample_delay(struct policy_dbs_info *policy_dbs,

-- 
viresh


* Re: [PATCH 2/9] cpufreq: governor: Avoid atomic operations in hot paths
  2016-02-15  1:13 ` [PATCH 2/9] cpufreq: governor: Avoid atomic operations in hot paths Rafael J. Wysocki
  2016-02-15  6:17   ` Viresh Kumar
@ 2016-02-15  8:20   ` Viresh Kumar
  1 sibling, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  8:20 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:13, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> Rework the handling of work items by dbs_update_util_handler() and
> dbs_work_handler() so the former (which is executed in scheduler
> paths) only uses atomic operations when absolutely necessary.  That
> is, when the policy is shared and dbs_update_util_handler() has
> already decided that this is the time to queue up a work item.
> 
> In particular, this avoids the atomic ops entirely on platforms where
> policy objects are never shared.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
> 
> This is a new version of https://patchwork.kernel.org/patch/8291051/ .
> 
> Changes from the previous version:
> - Added a new "is_shared" field to struct policy_dbs_info to be set for
>   shared policies to avoid evaluating cpumask_weight() every time
>   dbs_update_util_handler() decides to take a sample.
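The scheme the changelog describes can be sketched in plain C11. This is a simplified, illustrative model; the structure and function names below are made up, and the memory-ordering details of the real code are omitted:

```c
#include <stdatomic.h>
#include <stdbool.h>

struct pdbs {
	bool is_shared;		/* policy used by more than one CPU */
	bool work_in_progress;	/* plain flag checked in the hot path */
	atomic_int work_count;	/* only touched for shared policies */
};

/* Returns true when the caller wins the right to queue up the work item. */
bool try_queue_work(struct pdbs *p)
{
	if (p->work_in_progress)	/* plain load: no atomics in the common case */
		return false;

	if (p->is_shared) {
		/*
		 * Equivalent of atomic_add_unless(&work_count, 1, 1):
		 * exactly one of the racing CPUs moves 0 -> 1.
		 */
		int expected = 0;

		if (!atomic_compare_exchange_strong(&p->work_count, &expected, 1))
			return false;
	}

	p->work_in_progress = true;
	return true;
}
```

On a non-shared policy the function never executes an atomic operation at all, which is the point of the patch.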

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


* Re: [PATCH 3/9] cpufreq: governor: Fix nice contribution computation in dbs_check_cpu()
  2016-02-15  1:15 ` [PATCH 3/9] cpufreq: governor: Fix nice contribution computation in dbs_check_cpu() Rafael J. Wysocki
@ 2016-02-15  8:29   ` Viresh Kumar
  0 siblings, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  8:29 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:15, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> The contribution of the CPU nice time to the idle time in dbs_check_cpu()
> is computed in a bogus way, as the code may subtract current and previous
> nice values for different CPUs.
> 
> That doesn't matter for cases when cpufreq policies are not shared,
> but may lead to problems otherwise.
> 
> Fix the computation and simplify it to avoid taking unnecessary steps.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpufreq/cpufreq_governor.c |   18 +++---------------
>  1 file changed, 3 insertions(+), 15 deletions(-)
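The bug class being fixed can be illustrated with a tiny standalone sketch (the counters and values below are illustrative, not taken from the kernel source):

```c
/* Hypothetical per-CPU nice-time counters, in arbitrary time units. */
#define NR_CPUS 2

static const unsigned long long prev_nice[NR_CPUS] = { 100, 500 };
static const unsigned long long cur_nice[NR_CPUS]  = { 150, 530 };

/*
 * Buggy pattern: the current value of CPU j is combined with the
 * previous value of the policy CPU (CPU 0 here), so the delta is
 * wrong whenever j != 0, i.e. whenever the policy is shared.
 */
unsigned long long nice_delta_buggy(unsigned int j)
{
	return cur_nice[j] - prev_nice[0];
}

/* Fixed pattern: both samples come from the same CPU. */
unsigned long long nice_delta_fixed(unsigned int j)
{
	return cur_nice[j] - prev_nice[j];
}
```

For the policy CPU itself both versions agree, which is why the bug only shows up with shared cpufreq policies.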

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


* Re: [PATCH 4/9] cpufreq: governor: Clean up load-related computations
  2016-02-15  1:18 ` [PATCH 4/9] cpufreq: governor: Clean up load-related computations Rafael J. Wysocki
@ 2016-02-15  8:33   ` Viresh Kumar
  0 siblings, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  8:33 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:18, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> Clean up some load-related computations in dbs_check_cpu() and
> cpufreq_governor_start() to get rid of unnecessary operations and
> type casts and make the code easier to read.
> 
> No functional changes.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpufreq/cpufreq_governor.c |   24 ++++++++++--------------
>  1 file changed, 10 insertions(+), 14 deletions(-)

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


* Re: [PATCH 5/9] cpufreq: governor: Get rid of the ->gov_check_cpu callback
  2016-02-15  1:19 ` [PATCH 5/9] cpufreq: governor: Get rid of the ->gov_check_cpu callback Rafael J. Wysocki
@ 2016-02-15  8:52   ` Viresh Kumar
  0 siblings, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  8:52 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:19, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> The way the ->gov_check_cpu governor callback is used by the ondemand
> and conservative governors is not really straightforward.  Namely, the
> governor calls dbs_check_cpu(), which updates the load information
> for the policy and then invokes ->gov_check_cpu() for the governor.
> 
> To get rid of that entanglement, notice that cpufreq_governor_limits()
> doesn't need to call dbs_check_cpu() directly.  Instead, it can simply
> reset the sample delay to 0 which will cause a sample to be taken
> immediately.  The result of that is practically equivalent to calling
> dbs_check_cpu() except that it will trigger a full update of governor
> internal state and not just the ->gov_check_cpu() part.
> 
> Following that observation, make cpufreq_governor_limits() reset
> the sample delay and turn dbs_check_cpu() into a function, called
> dbs_update(), that simply evaluates the load and returns the result.
> 
> That function can now be called by governors from the routines that
> previously were pointed to by ->gov_check_cpu and those routines
> can be called directly by each governor instead of dbs_check_cpu().
> This way ->gov_check_cpu becomes unnecessary, so drop it.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpufreq/cpufreq_conservative.c |   26 +++++++++-----------------
>  drivers/cpufreq/cpufreq_governor.c     |   15 ++++++++-------
>  drivers/cpufreq/cpufreq_governor.h     |    3 +--
>  drivers/cpufreq/cpufreq_ondemand.c     |   15 +++++++++------
>  4 files changed, 27 insertions(+), 32 deletions(-)
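The key observation here (resetting the sample delay to 0 makes the very next utilization update take a sample immediately) can be modeled with a short sketch; this is a simplified illustration, not the actual kernel code:

```c
struct pd {
	unsigned long long last_sample_time;	/* ns */
	unsigned long long sample_delay_ns;
};

/* Called from the scheduler path on every utilization update. */
int sample_due(const struct pd *p, unsigned long long now_ns)
{
	return now_ns - p->last_sample_time >= p->sample_delay_ns;
}

/*
 * What cpufreq_governor_limits() now does instead of calling the old
 * dbs_check_cpu() directly: force the very next update to sample.
 */
void limits_reset(struct pd *p)
{
	p->sample_delay_ns = 0;
}
```

An update arriving well before the normal delay expires is skipped, but after limits_reset() the same update would trigger a full sample, which is why the explicit dbs_check_cpu() call becomes unnecessary.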

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


* Re: [PATCH 6/9] cpufreq: governor: Reset sample delay in store_sampling_rate()
  2016-02-15  1:20 ` [PATCH 6/9] cpufreq: governor: Reset sample delay in store_sampling_rate() Rafael J. Wysocki
@ 2016-02-15  8:53   ` Viresh Kumar
  0 siblings, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  8:53 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:20, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> If store_sampling_rate() updates the sample delay when the ondemand
> governor is in the middle of its high/low dance (OD_SUB_SAMPLE sample
> type is set), the governor will still do the bottom half of the
> previous sample which may take too much time.
> 
> To prevent that from happening, change store_sampling_rate() to always
> reset the sample delay to 0 which also is consistent with the new
> behavior of cpufreq_governor_limits().
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpufreq/cpufreq_governor.c |   16 ++++------------
>  1 file changed, 4 insertions(+), 12 deletions(-)

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


* Re: [PATCH 7/9] cpufreq: governor: Move rate_mult to struct policy_dbs
  2016-02-15  1:20 ` [PATCH 7/9] cpufreq: governor: Move rate_mult to struct policy_dbs Rafael J. Wysocki
@ 2016-02-15  8:56   ` Viresh Kumar
  0 siblings, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  8:56 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:20, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> The rate_mult field in struct od_cpu_dbs_info_s is used by the code
> shared with the conservative governor, and to access it that code
> has to do an ugly governor type check.  However, first of all it
> is only ever used for policy->cpu, so it is per-policy rather than
> per-CPU and second, it is initialized to 1 by cpufreq_governor_start(),
> so if the conservative governor never modifies it, it will have no
> effect on the results of any computations.
> 
> For these reasons, move rate_mult to struct policy_dbs_info (as a
> common field).
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpufreq/cpufreq_governor.c |   25 +++++++++----------------
>  drivers/cpufreq/cpufreq_governor.h |    3 ++-
>  drivers/cpufreq/cpufreq_ondemand.c |   23 +++++++++++++++--------
>  3 files changed, 26 insertions(+), 25 deletions(-)

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


* Re: [PATCH 8/9] cpufreq: ondemand: Simplify conditionals in od_dbs_timer()
  2016-02-15  1:21 ` [PATCH 8/9] cpufreq: ondemand: Simplify conditionals in od_dbs_timer() Rafael J. Wysocki
@ 2016-02-15  8:57   ` Viresh Kumar
  0 siblings, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  8:57 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:21, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> Reduce the indentation level in the conditionals in od_dbs_timer()
> and drop the delay variable from it.
> 
> No functional changes.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpufreq/cpufreq_ondemand.c |   24 +++++++++++-------------
>  1 file changed, 11 insertions(+), 13 deletions(-)

Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


* Re: [PATCH 9/9] cpufreq: governor: Use microseconds in sample delay computations
  2016-02-15  1:22 ` [PATCH 9/9] cpufreq: governor: Use microseconds in sample delay computations Rafael J. Wysocki
@ 2016-02-15  8:58   ` Viresh Kumar
  0 siblings, 0 replies; 20+ messages in thread
From: Viresh Kumar @ 2016-02-15  8:58 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: Linux PM list, Linux Kernel Mailing List, Juri Lelli

On 15-02-16, 02:22, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> Do not convert microseconds to jiffies and the other way around
> in governor computations related to the sampling rate and sample
> delay and drop delay_for_sampling_rate() which isn't of any use
> then.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/cpufreq/cpufreq_conservative.c |    2 +-
>  drivers/cpufreq/cpufreq_governor.c     |    4 +---
>  drivers/cpufreq/cpufreq_governor.h     |   15 ++-------------
>  drivers/cpufreq/cpufreq_ondemand.c     |   28 +++++++++++++---------------
>  4 files changed, 17 insertions(+), 32 deletions(-)

Acked-by: Viresh Kumar <viresh.kumar@linaro.org>

-- 
viresh


