* [PATCH 1/2] cpufreq: governors: Get rid of dbs_data->enable field
@ 2013-01-31 17:28 Viresh Kumar
2013-01-31 17:28 ` Viresh Kumar
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: Viresh Kumar @ 2013-01-31 17:28 UTC (permalink / raw)
To: rjw, fabio.baltieri
Cc: cpufreq, linux-pm, linux-kernel, linaro-dev, robin.randhawa,
Steve.Bannister, Liviu.Dudau, Viresh Kumar
CPUFREQ_GOV_START/STOP are called only once for all policy->cpus, and hence we
don't need to adapt the cpufreq_governor_dbs() routine for multiple calls.
So, this patch removes the dbs_data->enable field entirely and rearranges the
code a bit.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
Hi Fabio,
I have fixed all the pending issues, but haven't tested these patches. Can you
please add your Tested-by (obviously after testing them)?
Compile tested only.
drivers/cpufreq/cpufreq_governor.c | 58 +++++++++++++-------------------------
drivers/cpufreq/cpufreq_governor.h | 1 -
2 files changed, 20 insertions(+), 39 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 46f1c78..29d6a59 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -182,6 +182,8 @@ int cpufreq_governor_dbs(struct dbs_data *dbs_data,
{
struct od_cpu_dbs_info_s *od_dbs_info = NULL;
struct cs_cpu_dbs_info_s *cs_dbs_info = NULL;
+ struct cs_ops *cs_ops = NULL;
+ struct od_ops *od_ops = NULL;
struct od_dbs_tuners *od_tuners = dbs_data->tuners;
struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
struct cpu_dbs_common_info *cpu_cdbs;
@@ -194,10 +196,12 @@ int cpufreq_governor_dbs(struct dbs_data *dbs_data,
cs_dbs_info = dbs_data->get_cpu_dbs_info_s(cpu);
sampling_rate = &cs_tuners->sampling_rate;
ignore_nice = cs_tuners->ignore_nice;
+ cs_ops = dbs_data->gov_ops;
} else {
od_dbs_info = dbs_data->get_cpu_dbs_info_s(cpu);
sampling_rate = &od_tuners->sampling_rate;
ignore_nice = od_tuners->ignore_nice;
+ od_ops = dbs_data->gov_ops;
}
switch (event) {
@@ -207,10 +211,9 @@ int cpufreq_governor_dbs(struct dbs_data *dbs_data,
mutex_lock(&dbs_data->mutex);
- dbs_data->enable++;
for_each_cpu(j, policy->cpus) {
- struct cpu_dbs_common_info *j_cdbs;
- j_cdbs = dbs_data->get_cpu_cdbs(j);
+ struct cpu_dbs_common_info *j_cdbs =
+ dbs_data->get_cpu_cdbs(j);
j_cdbs->cpu = j;
j_cdbs->cur_policy = policy;
@@ -225,13 +228,6 @@ int cpufreq_governor_dbs(struct dbs_data *dbs_data,
dbs_data->gov_dbs_timer);
}
- /*
- * Start the timerschedule work, when this governor is used for
- * first time
- */
- if (dbs_data->enable != 1)
- goto second_time;
-
rc = sysfs_create_group(cpufreq_global_kobject,
dbs_data->attr_group);
if (rc) {
@@ -249,17 +245,19 @@ int cpufreq_governor_dbs(struct dbs_data *dbs_data,
* governor, thus we are bound to jiffes/HZ
*/
if (dbs_data->governor == GOV_CONSERVATIVE) {
- struct cs_ops *ops = dbs_data->gov_ops;
-
- cpufreq_register_notifier(ops->notifier_block,
+ cs_dbs_info->down_skip = 0;
+ cs_dbs_info->enable = 1;
+ cs_dbs_info->requested_freq = policy->cur;
+ cpufreq_register_notifier(cs_ops->notifier_block,
CPUFREQ_TRANSITION_NOTIFIER);
dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO *
jiffies_to_usecs(10);
} else {
- struct od_ops *ops = dbs_data->gov_ops;
-
- od_tuners->io_is_busy = ops->io_busy();
+ od_dbs_info->rate_mult = 1;
+ od_dbs_info->sample_type = OD_NORMAL_SAMPLE;
+ od_ops->powersave_bias_init_cpu(cpu);
+ od_tuners->io_is_busy = od_ops->io_busy();
}
/* Bring kernel and HW constraints together */
@@ -267,18 +265,6 @@ int cpufreq_governor_dbs(struct dbs_data *dbs_data,
MIN_LATENCY_MULTIPLIER * latency);
*sampling_rate = max(dbs_data->min_sampling_rate, latency *
LATENCY_MULTIPLIER);
-
-second_time:
- if (dbs_data->governor == GOV_CONSERVATIVE) {
- cs_dbs_info->down_skip = 0;
- cs_dbs_info->enable = 1;
- cs_dbs_info->requested_freq = policy->cur;
- } else {
- struct od_ops *ops = dbs_data->gov_ops;
- od_dbs_info->rate_mult = 1;
- od_dbs_info->sample_type = OD_NORMAL_SAMPLE;
- ops->powersave_bias_init_cpu(cpu);
- }
mutex_unlock(&dbs_data->mutex);
/* Initiate timer time stamp */
@@ -297,16 +283,12 @@ second_time:
mutex_lock(&dbs_data->mutex);
mutex_destroy(&cpu_cdbs->timer_mutex);
- dbs_data->enable--;
- if (!dbs_data->enable) {
- struct cs_ops *ops = dbs_data->gov_ops;
-
- sysfs_remove_group(cpufreq_global_kobject,
- dbs_data->attr_group);
- if (dbs_data->governor == GOV_CONSERVATIVE)
- cpufreq_unregister_notifier(ops->notifier_block,
- CPUFREQ_TRANSITION_NOTIFIER);
- }
+
+ sysfs_remove_group(cpufreq_global_kobject,
+ dbs_data->attr_group);
+ if (dbs_data->governor == GOV_CONSERVATIVE)
+ cpufreq_unregister_notifier(cs_ops->notifier_block,
+ CPUFREQ_TRANSITION_NOTIFIER);
mutex_unlock(&dbs_data->mutex);
break;
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index b72e628..c19a16c 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -130,7 +130,6 @@ struct dbs_data {
#define GOV_CONSERVATIVE 1
int governor;
unsigned int min_sampling_rate;
- unsigned int enable; /* number of CPUs using this policy */
struct attribute_group *attr_group;
void *tuners;
--
1.7.12.rc2.18.g61b472e
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 2/2] cpufreq: governors: Remove code redundancy between governors
@ 2013-01-31 17:28 ` Viresh Kumar
0 siblings, 0 replies; 15+ messages in thread
From: Viresh Kumar @ 2013-01-31 17:28 UTC (permalink / raw)
To: rjw, fabio.baltieri
Cc: cpufreq, linux-pm, linux-kernel, linaro-dev, robin.randhawa,
Steve.Bannister, Liviu.Dudau, Viresh Kumar
With the inclusion of the following patches:
9f4eb10 cpufreq: conservative: call dbs_check_cpu only when necessary
772b4b1 cpufreq: ondemand: call dbs_check_cpu only when necessary
code redundancy is introduced again. Get rid of it.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
drivers/cpufreq/cpufreq_conservative.c | 52 ++++-------------------
drivers/cpufreq/cpufreq_governor.c | 18 ++++++++
drivers/cpufreq/cpufreq_governor.h | 2 +
drivers/cpufreq/cpufreq_ondemand.c | 77 ++++++++++------------------------
4 files changed, 52 insertions(+), 97 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index c18a304..e8bb915 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -111,58 +111,24 @@ static void cs_check_cpu(int cpu, unsigned int load)
}
}
-static void cs_timer_update(struct cs_cpu_dbs_info_s *dbs_info, bool sample,
- struct delayed_work *dw)
+static void cs_dbs_timer(struct work_struct *work)
{
+ struct delayed_work *dw = to_delayed_work(work);
+ struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
+ struct cs_cpu_dbs_info_s, cdbs.work.work);
unsigned int cpu = dbs_info->cdbs.cur_policy->cpu;
+ struct cs_cpu_dbs_info_s *core_dbs_info = &per_cpu(cs_cpu_dbs_info,
+ cpu);
int delay = delay_for_sampling_rate(cs_tuners.sampling_rate);
- if (sample)
+ mutex_lock(&core_dbs_info->cdbs.timer_mutex);
+ if (need_load_eval(&core_dbs_info->cdbs, cs_tuners.sampling_rate))
dbs_check_cpu(&cs_dbs_data, cpu);
schedule_delayed_work_on(smp_processor_id(), dw, delay);
+ mutex_unlock(&core_dbs_info->cdbs.timer_mutex);
}
-static void cs_timer_coordinated(struct cs_cpu_dbs_info_s *dbs_info_local,
- struct delayed_work *dw)
-{
- struct cs_cpu_dbs_info_s *dbs_info;
- ktime_t time_now;
- s64 delta_us;
- bool sample = true;
-
- /* use leader CPU's dbs_info */
- dbs_info = &per_cpu(cs_cpu_dbs_info,
- dbs_info_local->cdbs.cur_policy->cpu);
- mutex_lock(&dbs_info->cdbs.timer_mutex);
-
- time_now = ktime_get();
- delta_us = ktime_us_delta(time_now, dbs_info->cdbs.time_stamp);
-
- /* Do nothing if we recently have sampled */
- if (delta_us < (s64)(cs_tuners.sampling_rate / 2))
- sample = false;
- else
- dbs_info->cdbs.time_stamp = time_now;
-
- cs_timer_update(dbs_info, sample, dw);
- mutex_unlock(&dbs_info->cdbs.timer_mutex);
-}
-
-static void cs_dbs_timer(struct work_struct *work)
-{
- struct delayed_work *dw = to_delayed_work(work);
- struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
- struct cs_cpu_dbs_info_s, cdbs.work.work);
-
- if (policy_is_shared(dbs_info->cdbs.cur_policy)) {
- cs_timer_coordinated(dbs_info, dw);
- } else {
- mutex_lock(&dbs_info->cdbs.timer_mutex);
- cs_timer_update(dbs_info, true, dw);
- mutex_unlock(&dbs_info->cdbs.timer_mutex);
- }
-}
static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
void *data)
{
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 29d6a59..dc99472 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -177,6 +177,24 @@ static inline void dbs_timer_exit(struct dbs_data *dbs_data, int cpu)
cancel_delayed_work_sync(&cdbs->work);
}
+/* Will return if we need to evaluate cpu load again or not */
+bool need_load_eval(struct cpu_dbs_common_info *cdbs,
+ unsigned int sampling_rate)
+{
+ if (policy_is_shared(cdbs->cur_policy)) {
+ ktime_t time_now = ktime_get();
+ s64 delta_us = ktime_us_delta(time_now, cdbs->time_stamp);
+
+ /* Do nothing if we recently have sampled */
+ if (delta_us < (s64)(sampling_rate / 2))
+ return false;
+ else
+ cdbs->time_stamp = time_now;
+ }
+
+ return true;
+}
+
int cpufreq_governor_dbs(struct dbs_data *dbs_data,
struct cpufreq_policy *policy, unsigned int event)
{
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index c19a16c..16314b6 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -171,6 +171,8 @@ static inline int delay_for_sampling_rate(unsigned int sampling_rate)
u64 get_cpu_idle_time(unsigned int cpu, u64 *wall);
void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
+bool need_load_eval(struct cpu_dbs_common_info *cdbs,
+ unsigned int sampling_rate);
int cpufreq_governor_dbs(struct dbs_data *dbs_data,
struct cpufreq_policy *policy, unsigned int event);
#endif /* _CPUFREQ_GOVERNER_H */
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 75efd5e..f38b8da 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -216,75 +216,44 @@ static void od_check_cpu(int cpu, unsigned int load_freq)
}
}
-static void od_timer_update(struct od_cpu_dbs_info_s *dbs_info, bool sample,
- struct delayed_work *dw)
+static void od_dbs_timer(struct work_struct *work)
{
+ struct delayed_work *dw = to_delayed_work(work);
+ struct od_cpu_dbs_info_s *dbs_info =
+ container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work);
unsigned int cpu = dbs_info->cdbs.cur_policy->cpu;
- int delay, sample_type = dbs_info->sample_type;
+ struct od_cpu_dbs_info_s *core_dbs_info = &per_cpu(od_cpu_dbs_info,
+ cpu);
+ int delay, sample_type = core_dbs_info->sample_type;
+ bool eval_load;
+
+ mutex_lock(&core_dbs_info->cdbs.timer_mutex);
+ eval_load = need_load_eval(&core_dbs_info->cdbs,
+ od_tuners.sampling_rate);
/* Common NORMAL_SAMPLE setup */
- dbs_info->sample_type = OD_NORMAL_SAMPLE;
+ core_dbs_info->sample_type = OD_NORMAL_SAMPLE;
if (sample_type == OD_SUB_SAMPLE) {
- delay = dbs_info->freq_lo_jiffies;
- if (sample)
- __cpufreq_driver_target(dbs_info->cdbs.cur_policy,
- dbs_info->freq_lo,
+ delay = core_dbs_info->freq_lo_jiffies;
+ if (eval_load)
+ __cpufreq_driver_target(core_dbs_info->cdbs.cur_policy,
+ core_dbs_info->freq_lo,
CPUFREQ_RELATION_H);
} else {
- if (sample)
+ if (eval_load)
dbs_check_cpu(&od_dbs_data, cpu);
- if (dbs_info->freq_lo) {
+ if (core_dbs_info->freq_lo) {
/* Setup timer for SUB_SAMPLE */
- dbs_info->sample_type = OD_SUB_SAMPLE;
- delay = dbs_info->freq_hi_jiffies;
+ core_dbs_info->sample_type = OD_SUB_SAMPLE;
+ delay = core_dbs_info->freq_hi_jiffies;
} else {
delay = delay_for_sampling_rate(od_tuners.sampling_rate
- * dbs_info->rate_mult);
+ * core_dbs_info->rate_mult);
}
}
schedule_delayed_work_on(smp_processor_id(), dw, delay);
-}
-
-static void od_timer_coordinated(struct od_cpu_dbs_info_s *dbs_info_local,
- struct delayed_work *dw)
-{
- struct od_cpu_dbs_info_s *dbs_info;
- ktime_t time_now;
- s64 delta_us;
- bool sample = true;
-
- /* use leader CPU's dbs_info */
- dbs_info = &per_cpu(od_cpu_dbs_info,
- dbs_info_local->cdbs.cur_policy->cpu);
- mutex_lock(&dbs_info->cdbs.timer_mutex);
-
- time_now = ktime_get();
- delta_us = ktime_us_delta(time_now, dbs_info->cdbs.time_stamp);
-
- /* Do nothing if we recently have sampled */
- if (delta_us < (s64)(od_tuners.sampling_rate / 2))
- sample = false;
- else
- dbs_info->cdbs.time_stamp = time_now;
-
- od_timer_update(dbs_info, sample, dw);
- mutex_unlock(&dbs_info->cdbs.timer_mutex);
-}
-
-static void od_dbs_timer(struct work_struct *work)
-{
- struct delayed_work *dw = to_delayed_work(work);
- struct od_cpu_dbs_info_s *dbs_info =
- container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work);
-
- if (policy_is_shared(dbs_info->cdbs.cur_policy)) {
- od_timer_coordinated(dbs_info, dw);
- } else {
- mutex_lock(&dbs_info->cdbs.timer_mutex);
- od_timer_update(dbs_info, true, dw);
- mutex_unlock(&dbs_info->cdbs.timer_mutex);
- }
+ mutex_unlock(&core_dbs_info->cdbs.timer_mutex);
}
/************************** sysfs interface ************************/
--
1.7.12.rc2.18.g61b472e
* Re: [PATCH 1/2] cpufreq: governors: Get rid of dbs_data->enable field
@ 2013-01-31 17:29 ` Viresh Kumar
0 siblings, 0 replies; 15+ messages in thread
From: Viresh Kumar @ 2013-01-31 17:29 UTC (permalink / raw)
To: rjw, fabio.baltieri
Cc: cpufreq, linux-pm, linux-kernel, linaro-dev, robin.randhawa,
Steve.Bannister, Liviu.Dudau, Viresh Kumar
On 31 January 2013 22:58, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> CPUFREQ_GOV_START/STOP are called only once for all policy->cpus and hence we
> don't need to adapt cpufreq_governor_dbs() routine for multiple calls.
>
> So, this patch removes dbs_data->enable field entirely. And rearrange code a
> bit.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> Hi Fabio,
>
> I have fixed all the pending issues, but haven't checked these patches. Can you
> please add your tested-by (obviously after testing them) ?
BTW, these are rebased over your patches and git log shows:
af8a2e1 cpufreq: governors: Remove code redundancy between governors
09a94ff cpufreq: governors: Get rid of dbs_data->enable field
faa8107 cpufreq: governors: implement generic policy_is_shared
88fe0a9 cpufreq: governors: clean timer init and exit code
13c18e8 cpufreq: governors: fix misuse of cdbs.cpu
* Re: [PATCH 1/2] cpufreq: governors: Get rid of dbs_data->enable field
@ 2013-01-31 18:44 ` Fabio Baltieri
0 siblings, 0 replies; 15+ messages in thread
From: Fabio Baltieri @ 2013-01-31 18:44 UTC (permalink / raw)
To: Viresh Kumar
Cc: rjw, cpufreq, linux-pm, linux-kernel, linaro-dev, robin.randhawa,
Steve.Bannister, Liviu.Dudau
On Thu, Jan 31, 2013 at 10:58:01PM +0530, Viresh Kumar wrote:
> CPUFREQ_GOV_START/STOP are called only once for all policy->cpus and hence we
> don't need to adapt cpufreq_governor_dbs() routine for multiple calls.
>
> So, this patch removes dbs_data->enable field entirely. And rearrange code a
> bit.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
> Hi Fabio,
>
> I have fixed all the pending issues, but haven't checked these patches. Can you
> please add your tested-by (obviously after testing them) ?
>
> Compile tested only.
Hello Viresh, thanks for getting this done... looks much cleaner now!
I tested both patches on my ux500 setup (dual Cortex-A9) and they seem to
behave correctly across both CPU load changes and CPU hotplug, so:
Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
As a side note, I noticed just now that since:
bc92bea cpufreq: Notify governors when cpus are hot-[un]plugged
the governor's sampling_rate gets reset to its default every time you hotplug a
CPU (the one you read/write at
/sys/devices/system/cpu/cpufreq/ondemand/sampling_rate).
If you need further tests, I'll be back on Monday.
Thanks,
Fabio
--
Fabio Baltieri
* Re: [PATCH 2/2] cpufreq: governors: Remove code redundancy between governors
2013-01-31 17:28 ` Viresh Kumar
(?)
@ 2013-01-31 18:50 ` Fabio Baltieri
2013-01-31 22:23 ` Rafael J. Wysocki
-1 siblings, 1 reply; 15+ messages in thread
From: Fabio Baltieri @ 2013-01-31 18:50 UTC (permalink / raw)
To: Viresh Kumar
Cc: rjw, cpufreq, linux-pm, linux-kernel, linaro-dev, robin.randhawa,
Steve.Bannister, Liviu.Dudau
On Thu, Jan 31, 2013 at 10:58:02PM +0530, Viresh Kumar wrote:
> With the inclusion of following patches:
>
> 9f4eb10 cpufreq: conservative: call dbs_check_cpu only when necessary
> 772b4b1 cpufreq: ondemand: call dbs_check_cpu only when necessary
>
> code redundancy is introduced again. Get rid of it.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
Hi,
Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Thanks,
Fabio
--
Fabio Baltieri
* Re: [PATCH 2/2] cpufreq: governors: Remove code redundancy between governors
2013-01-31 18:50 ` Fabio Baltieri
@ 2013-01-31 22:23 ` Rafael J. Wysocki
2013-01-31 22:51 ` Fabio Baltieri
0 siblings, 1 reply; 15+ messages in thread
From: Rafael J. Wysocki @ 2013-01-31 22:23 UTC (permalink / raw)
To: Fabio Baltieri, Viresh Kumar, Shawn Guo
Cc: cpufreq, linux-pm, linux-kernel, linaro-dev, robin.randhawa,
Steve.Bannister, Liviu.Dudau
On Thursday, January 31, 2013 07:50:04 PM Fabio Baltieri wrote:
> On Thu, Jan 31, 2013 at 10:58:02PM +0530, Viresh Kumar wrote:
> > With the inclusion of following patches:
> >
> > 9f4eb10 cpufreq: conservative: call dbs_check_cpu only when necessary
> > 772b4b1 cpufreq: ondemand: call dbs_check_cpu only when necessary
> >
> > code redundancy is introduced again. Get rid of it.
> >
> > Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> > ---
>
> Hi,
>
> Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
OK
Fabio, Viresh, Shawn,
This time I was *really* confused as to what patches I was supposed to take,
from whom and in what order, so I applied a number of them in the order given
by patchwork. That worked well enough, because (almost) all of them applied
for me without conflicts. That said I would appreciate it if you could look
into the bleeding-edge branch of my tree and see if there's anything missing
or something that shouldn't be there (cpufreq-wise).
Thanks,
Rafael
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
* Re: [PATCH 2/2] cpufreq: governors: Remove code redundancy between governors
2013-01-31 22:23 ` Rafael J. Wysocki
@ 2013-01-31 22:51 ` Fabio Baltieri
2013-02-01 2:31 ` Viresh Kumar
0 siblings, 1 reply; 15+ messages in thread
From: Fabio Baltieri @ 2013-01-31 22:51 UTC (permalink / raw)
To: Rafael J. Wysocki
Cc: Viresh Kumar, Shawn Guo, cpufreq, linux-pm, linux-kernel,
linaro-dev, robin.randhawa, Steve.Bannister, Liviu.Dudau
Hello Rafael,
On Thu, Jan 31, 2013 at 11:23:54PM +0100, Rafael J. Wysocki wrote:
> On Thursday, January 31, 2013 07:50:04 PM Fabio Baltieri wrote:
> > On Thu, Jan 31, 2013 at 10:58:02PM +0530, Viresh Kumar wrote:
> > > With the inclusion of following patches:
> > >
> > > 9f4eb10 cpufreq: conservative: call dbs_check_cpu only when necessary
> > > 772b4b1 cpufreq: ondemand: call dbs_check_cpu only when necessary
> > >
> > > code redundancy is introduced again. Get rid of it.
> > >
> > > Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> > > ---
> >
> > Hi,
> >
> > Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
>
> OK
>
> Fabio, Viresh, Shawn,
>
> This time I was *really* confused as to what patches I was supposed to take,
> from whom and in what order, so I applied a number of them in the order given
> by patchwork. That worked well enough, because (almost) all of them applied
> for me without conflicts. That said I would appreciate it if you could look
> into the bleeding-edge branch of my tree and see if there's anything missing
> or something that shouldn't be there (cpufreq-wise).
Sorry for the confusion, your current bleeding-edge branch (eed52da)
looks good to me. I also did a quick build and run and it works fine on
my setup.
Many thanks,
Fabio
--
Fabio Baltieri
* Re: [PATCH 2/2] cpufreq: governors: Remove code redundancy between governors
2013-01-31 22:51 ` Fabio Baltieri
@ 2013-02-01 2:31 ` Viresh Kumar
2013-02-01 2:38 ` Viresh Kumar
0 siblings, 1 reply; 15+ messages in thread
From: Viresh Kumar @ 2013-02-01 2:31 UTC (permalink / raw)
To: Fabio Baltieri
Cc: Rafael J. Wysocki, Shawn Guo, cpufreq, linux-pm, linux-kernel,
linaro-dev, robin.randhawa, Steve.Bannister, Liviu.Dudau
On 1 February 2013 04:21, Fabio Baltieri <fabio.baltieri@linaro.org> wrote:
> On Thu, Jan 31, 2013 at 11:23:54PM +0100, Rafael J. Wysocki wrote:
>> This time I was *really* confused as to what patches I was supposed to take,
>> from whom and in what order, so I applied a number of them in the order given
>> by patchwork. That worked well enough, because (almost) all of them applied
>> for me without conflicts. That said I would appreciate it if you could look
>> into the bleeding-edge branch of my tree and see if there's anything missing
>> or something that shouldn't be there (cpufreq-wise).
>
> Sorry for the confusion, your current bleeding-edge branch (eed52da)
> looks good to me. I also did a quick build and run and it works fine on
> my setup.
Really!! I see bleeding-edge as df0e3f4 and I don't see the $(subject) patch
in it :)
* Re: [PATCH 2/2] cpufreq: governors: Remove code redundancy between governors
2013-02-01 2:31 ` Viresh Kumar
@ 2013-02-01 2:38 ` Viresh Kumar
2013-02-01 22:32 ` Rafael J. Wysocki
0 siblings, 1 reply; 15+ messages in thread
From: Viresh Kumar @ 2013-02-01 2:38 UTC (permalink / raw)
To: Fabio Baltieri
Cc: Rafael J. Wysocki, Shawn Guo, cpufreq, linux-pm, linux-kernel,
linaro-dev, robin.randhawa, Steve.Bannister, Liviu.Dudau
[-- Attachment #1: Type: text/plain, Size: 748 bytes --]
On 1 February 2013 08:01, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> Really!! I see bleeding edge as df0e3f4 and i don't see the $(subject) patch
> in it :)
Well, it might have been dropped by Rafael due to a build error, which would
be fixed by:
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index dc99472..7aaa9b1 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -194,6 +194,7 @@ bool need_load_eval(struct cpu_dbs_common_info *cdbs,
return true;
}
+EXPORT_SYMBOL_GPL(need_load_eval);
int cpufreq_governor_dbs(struct dbs_data *dbs_data,
struct cpufreq_policy *policy, unsigned int event)
Original patch attached with this change.
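For readers following the thread, the rate-limiting check that need_load_eval() centralizes (quoted in the hunk above) can be sketched as a userspace simulation. This is only an illustration under assumptions: the struct, the plain microsecond integers, and the function shape here are hypothetical stand-ins for the kernel's ktime_t-based per-CPU bookkeeping, not the actual kernel API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's cpu_dbs_common_info fields. */
struct fake_cdbs {
    bool policy_is_shared;  /* more than one CPU governed by this policy */
    int64_t time_stamp_us;  /* last time the load was actually evaluated */
};

/*
 * Sketch of need_load_eval(): on a shared policy, skip the load
 * evaluation if less than half a sampling period has elapsed since
 * the last one; otherwise record the new timestamp and proceed.
 * Unshared policies always evaluate.
 */
static bool need_load_eval(struct fake_cdbs *cdbs,
                           unsigned int sampling_rate_us, int64_t now_us)
{
    if (cdbs->policy_is_shared) {
        int64_t delta_us = now_us - cdbs->time_stamp_us;

        if (delta_us < (int64_t)(sampling_rate_us / 2))
            return false;
        cdbs->time_stamp_us = now_us;
    }
    return true;
}
```

The half-period threshold means that when several CPUs of one policy fire their timers close together, only the first tick in each window does the (relatively expensive) load evaluation.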
[-- Attachment #2: 0001-cpufreq-governors-Remove-code-redundancy-between-gov.patch --]
[-- Type: application/octet-stream, Size: 8255 bytes --]
From 5ea3e5b0ba17b9b241582ad9ec7aaf9ed76108af Mon Sep 17 00:00:00 2001
Message-Id: <5ea3e5b0ba17b9b241582ad9ec7aaf9ed76108af.1359686252.git.viresh.kumar@linaro.org>
From: Viresh Kumar <viresh.kumar@linaro.org>
Date: Thu, 31 Jan 2013 22:52:06 +0530
Subject: [PATCH] cpufreq: governors: Remove code redundancy between governors
With the inclusion of following patches:
9f4eb10 cpufreq: conservative: call dbs_check_cpu only when necessary
772b4b1 cpufreq: ondemand: call dbs_check_cpu only when necessary
code redundancy is introduced again. Get rid of it.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
drivers/cpufreq/cpufreq_conservative.c | 52 ++++-------------------
drivers/cpufreq/cpufreq_governor.c | 19 +++++++++
drivers/cpufreq/cpufreq_governor.h | 2 +
drivers/cpufreq/cpufreq_ondemand.c | 77 ++++++++++------------------------
4 files changed, 53 insertions(+), 97 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index c18a304..e8bb915 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -111,58 +111,24 @@ static void cs_check_cpu(int cpu, unsigned int load)
}
}
-static void cs_timer_update(struct cs_cpu_dbs_info_s *dbs_info, bool sample,
- struct delayed_work *dw)
+static void cs_dbs_timer(struct work_struct *work)
{
+ struct delayed_work *dw = to_delayed_work(work);
+ struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
+ struct cs_cpu_dbs_info_s, cdbs.work.work);
unsigned int cpu = dbs_info->cdbs.cur_policy->cpu;
+ struct cs_cpu_dbs_info_s *core_dbs_info = &per_cpu(cs_cpu_dbs_info,
+ cpu);
int delay = delay_for_sampling_rate(cs_tuners.sampling_rate);
- if (sample)
+ mutex_lock(&core_dbs_info->cdbs.timer_mutex);
+ if (need_load_eval(&core_dbs_info->cdbs, cs_tuners.sampling_rate))
dbs_check_cpu(&cs_dbs_data, cpu);
schedule_delayed_work_on(smp_processor_id(), dw, delay);
+ mutex_unlock(&core_dbs_info->cdbs.timer_mutex);
}
-static void cs_timer_coordinated(struct cs_cpu_dbs_info_s *dbs_info_local,
- struct delayed_work *dw)
-{
- struct cs_cpu_dbs_info_s *dbs_info;
- ktime_t time_now;
- s64 delta_us;
- bool sample = true;
-
- /* use leader CPU's dbs_info */
- dbs_info = &per_cpu(cs_cpu_dbs_info,
- dbs_info_local->cdbs.cur_policy->cpu);
- mutex_lock(&dbs_info->cdbs.timer_mutex);
-
- time_now = ktime_get();
- delta_us = ktime_us_delta(time_now, dbs_info->cdbs.time_stamp);
-
- /* Do nothing if we recently have sampled */
- if (delta_us < (s64)(cs_tuners.sampling_rate / 2))
- sample = false;
- else
- dbs_info->cdbs.time_stamp = time_now;
-
- cs_timer_update(dbs_info, sample, dw);
- mutex_unlock(&dbs_info->cdbs.timer_mutex);
-}
-
-static void cs_dbs_timer(struct work_struct *work)
-{
- struct delayed_work *dw = to_delayed_work(work);
- struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
- struct cs_cpu_dbs_info_s, cdbs.work.work);
-
- if (policy_is_shared(dbs_info->cdbs.cur_policy)) {
- cs_timer_coordinated(dbs_info, dw);
- } else {
- mutex_lock(&dbs_info->cdbs.timer_mutex);
- cs_timer_update(dbs_info, true, dw);
- mutex_unlock(&dbs_info->cdbs.timer_mutex);
- }
-}
static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
void *data)
{
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 29d6a59..7aaa9b1 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -177,6 +177,25 @@ static inline void dbs_timer_exit(struct dbs_data *dbs_data, int cpu)
cancel_delayed_work_sync(&cdbs->work);
}
+/* Will return if we need to evaluate cpu load again or not */
+bool need_load_eval(struct cpu_dbs_common_info *cdbs,
+ unsigned int sampling_rate)
+{
+ if (policy_is_shared(cdbs->cur_policy)) {
+ ktime_t time_now = ktime_get();
+ s64 delta_us = ktime_us_delta(time_now, cdbs->time_stamp);
+
+ /* Do nothing if we recently have sampled */
+ if (delta_us < (s64)(sampling_rate / 2))
+ return false;
+ else
+ cdbs->time_stamp = time_now;
+ }
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(need_load_eval);
+
int cpufreq_governor_dbs(struct dbs_data *dbs_data,
struct cpufreq_policy *policy, unsigned int event)
{
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index c19a16c..16314b6 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -171,6 +171,8 @@ static inline int delay_for_sampling_rate(unsigned int sampling_rate)
u64 get_cpu_idle_time(unsigned int cpu, u64 *wall);
void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
+bool need_load_eval(struct cpu_dbs_common_info *cdbs,
+ unsigned int sampling_rate);
int cpufreq_governor_dbs(struct dbs_data *dbs_data,
struct cpufreq_policy *policy, unsigned int event);
#endif /* _CPUFREQ_GOVERNER_H */
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 75efd5e..f38b8da 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -216,75 +216,44 @@ static void od_check_cpu(int cpu, unsigned int load_freq)
}
}
-static void od_timer_update(struct od_cpu_dbs_info_s *dbs_info, bool sample,
- struct delayed_work *dw)
+static void od_dbs_timer(struct work_struct *work)
{
+ struct delayed_work *dw = to_delayed_work(work);
+ struct od_cpu_dbs_info_s *dbs_info =
+ container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work);
unsigned int cpu = dbs_info->cdbs.cur_policy->cpu;
- int delay, sample_type = dbs_info->sample_type;
+ struct od_cpu_dbs_info_s *core_dbs_info = &per_cpu(od_cpu_dbs_info,
+ cpu);
+ int delay, sample_type = core_dbs_info->sample_type;
+ bool eval_load;
+
+ mutex_lock(&core_dbs_info->cdbs.timer_mutex);
+ eval_load = need_load_eval(&core_dbs_info->cdbs,
+ od_tuners.sampling_rate);
/* Common NORMAL_SAMPLE setup */
- dbs_info->sample_type = OD_NORMAL_SAMPLE;
+ core_dbs_info->sample_type = OD_NORMAL_SAMPLE;
if (sample_type == OD_SUB_SAMPLE) {
- delay = dbs_info->freq_lo_jiffies;
- if (sample)
- __cpufreq_driver_target(dbs_info->cdbs.cur_policy,
- dbs_info->freq_lo,
+ delay = core_dbs_info->freq_lo_jiffies;
+ if (eval_load)
+ __cpufreq_driver_target(core_dbs_info->cdbs.cur_policy,
+ core_dbs_info->freq_lo,
CPUFREQ_RELATION_H);
} else {
- if (sample)
+ if (eval_load)
dbs_check_cpu(&od_dbs_data, cpu);
- if (dbs_info->freq_lo) {
+ if (core_dbs_info->freq_lo) {
/* Setup timer for SUB_SAMPLE */
- dbs_info->sample_type = OD_SUB_SAMPLE;
- delay = dbs_info->freq_hi_jiffies;
+ core_dbs_info->sample_type = OD_SUB_SAMPLE;
+ delay = core_dbs_info->freq_hi_jiffies;
} else {
delay = delay_for_sampling_rate(od_tuners.sampling_rate
- * dbs_info->rate_mult);
+ * core_dbs_info->rate_mult);
}
}
schedule_delayed_work_on(smp_processor_id(), dw, delay);
-}
-
-static void od_timer_coordinated(struct od_cpu_dbs_info_s *dbs_info_local,
- struct delayed_work *dw)
-{
- struct od_cpu_dbs_info_s *dbs_info;
- ktime_t time_now;
- s64 delta_us;
- bool sample = true;
-
- /* use leader CPU's dbs_info */
- dbs_info = &per_cpu(od_cpu_dbs_info,
- dbs_info_local->cdbs.cur_policy->cpu);
- mutex_lock(&dbs_info->cdbs.timer_mutex);
-
- time_now = ktime_get();
- delta_us = ktime_us_delta(time_now, dbs_info->cdbs.time_stamp);
-
- /* Do nothing if we recently have sampled */
- if (delta_us < (s64)(od_tuners.sampling_rate / 2))
- sample = false;
- else
- dbs_info->cdbs.time_stamp = time_now;
-
- od_timer_update(dbs_info, sample, dw);
- mutex_unlock(&dbs_info->cdbs.timer_mutex);
-}
-
-static void od_dbs_timer(struct work_struct *work)
-{
- struct delayed_work *dw = to_delayed_work(work);
- struct od_cpu_dbs_info_s *dbs_info =
- container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work);
-
- if (policy_is_shared(dbs_info->cdbs.cur_policy)) {
- od_timer_coordinated(dbs_info, dw);
- } else {
- mutex_lock(&dbs_info->cdbs.timer_mutex);
- od_timer_update(dbs_info, true, dw);
- mutex_unlock(&dbs_info->cdbs.timer_mutex);
- }
+ mutex_unlock(&core_dbs_info->cdbs.timer_mutex);
}
/************************** sysfs interface ************************/
--
1.7.12.rc2.18.g61b472e
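The restructured od_dbs_timer() in the patch above alternates between a NORMAL sample (evaluate load, possibly arm a sub-sample) and a SUB sample (run at freq_lo for the remainder of the period). The delay selection can be sketched in plain C; the struct and the pick_delay() helper are hypothetical simplifications for illustration, not functions from the kernel source.

```c
#include <assert.h>

/* Hypothetical simplification of od_cpu_dbs_info_s state. */
enum sample_type { OD_NORMAL_SAMPLE, OD_SUB_SAMPLE };

struct fake_od_info {
    enum sample_type sample_type;
    int freq_lo;          /* nonzero => a lo/hi frequency pair is active */
    int freq_lo_jiffies;  /* time to spend at freq_lo */
    int freq_hi_jiffies;  /* time to spend at freq_hi before the sub-sample */
    int rate_mult;        /* sampling-rate boost multiplier */
};

/*
 * Sketch of the delay selection: a SUB_SAMPLE tick waits freq_lo_jiffies;
 * a NORMAL tick either arms a SUB_SAMPLE follow-up (when freq_lo is set)
 * and waits freq_hi_jiffies, or waits a full, possibly boosted, period.
 */
static int pick_delay(struct fake_od_info *info, int sampling_rate_jiffies)
{
    enum sample_type prev = info->sample_type;

    info->sample_type = OD_NORMAL_SAMPLE;  /* common NORMAL_SAMPLE setup */
    if (prev == OD_SUB_SAMPLE)
        return info->freq_lo_jiffies;

    if (info->freq_lo) {
        info->sample_type = OD_SUB_SAMPLE;
        return info->freq_hi_jiffies;
    }
    return sampling_rate_jiffies * info->rate_mult;
}
```

This two-phase scheme is how ondemand approximates an intermediate frequency: it splits one sampling period between freq_hi and freq_lo instead of picking a single supported frequency.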
* Re: [PATCH 1/2] cpufreq: governors: Get rid of dbs_data->enable field
2013-01-31 18:44 ` Fabio Baltieri
@ 2013-02-01 3:52 ` Viresh Kumar
2013-02-01 6:44 ` Viresh Kumar
0 siblings, 1 reply; 15+ messages in thread
From: Viresh Kumar @ 2013-02-01 3:52 UTC (permalink / raw)
To: Fabio Baltieri
Cc: rjw, cpufreq, linux-pm, linux-kernel, linaro-dev, robin.randhawa,
Steve.Bannister, Liviu.Dudau
On 1 February 2013 00:14, Fabio Baltieri <fabio.baltieri@linaro.org> wrote:
> Hello Viresh, thanks for getting this done... looks much cleaner now!
>
> I tested both patches on my ux500 setup (dual Cortex-A9) and it seems to
> run correctly on both CPU load changes and CPU hotplug, so:
>
> Tested-by: Fabio Baltieri <fabio.baltieri@linaro.org>
Thanks.
> As a sidenote, I noticed just now that since:
>
> bc92bea cpufreq: Notify governors when cpus are hot-[un]plugged
>
> governor's sampling_rate gets reset to default every time you hotplug a
> CPU (the one you read/write on
> /sys/devices/system/cpu/cpufreq/ondemand/sampling_rate).
>
> If you need further tests, I'll be back on Monday.
I will have a look.
* Re: [PATCH 1/2] cpufreq: governors: Get rid of dbs_data->enable field
2013-02-01 3:52 ` Viresh Kumar
@ 2013-02-01 6:44 ` Viresh Kumar
0 siblings, 0 replies; 15+ messages in thread
From: Viresh Kumar @ 2013-02-01 6:44 UTC (permalink / raw)
To: Fabio Baltieri
Cc: rjw, cpufreq, linux-pm, linux-kernel, linaro-dev, robin.randhawa,
Steve.Bannister, Liviu.Dudau
On 1 February 2013 09:22, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> On 1 February 2013 00:14, Fabio Baltieri <fabio.baltieri@linaro.org> wrote:
>> As a sidenote, I noticed just now that since:
>>
>> bc92bea cpufreq: Notify governors when cpus are hot-[un]plugged
>>
>> governor's sampling_rate gets reset to default every time you hotplug a
>> CPU (the one you read/write on
>> /sys/devices/system/cpu/cpufreq/ondemand/sampling_rate).
>>
>> If you need further tests, I'll be back on Monday.
>
> I will have a look.
Fixed with:
https://lkml.org/lkml/2013/2/1/6
* Re: [PATCH 2/2] cpufreq: governors: Remove code redundancy between governors
2013-02-01 2:38 ` Viresh Kumar
@ 2013-02-01 22:32 ` Rafael J. Wysocki
0 siblings, 0 replies; 15+ messages in thread
From: Rafael J. Wysocki @ 2013-02-01 22:32 UTC (permalink / raw)
To: Viresh Kumar
Cc: Fabio Baltieri, Shawn Guo, cpufreq, linux-pm, linux-kernel,
linaro-dev, robin.randhawa, Steve.Bannister, Liviu.Dudau
On Friday, February 01, 2013 08:08:42 AM Viresh Kumar wrote:
> On 1 February 2013 08:01, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> > Really!! I see bleeding edge as df0e3f4 and i don't see the $(subject) patch
> > in it :)
>
> Well it might have been dropped by Rafael due to build error,
Precisely.
> which would be fixed by:
>
> diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
> index dc99472..7aaa9b1 100644
> --- a/drivers/cpufreq/cpufreq_governor.c
> +++ b/drivers/cpufreq/cpufreq_governor.c
> @@ -194,6 +194,7 @@ bool need_load_eval(struct cpu_dbs_common_info *cdbs,
>
> return true;
> }
> +EXPORT_SYMBOL_GPL(need_load_eval);
>
> int cpufreq_governor_dbs(struct dbs_data *dbs_data,
> struct cpufreq_policy *policy, unsigned int event)
>
>
> Original patch attached with this change.
OK, thanks!
Rafael
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
end of thread, other threads:[~2013-02-01 22:26 UTC | newest]
Thread overview: 15+ messages
2013-01-31 17:28 [PATCH 1/2] cpufreq: governors: Get rid of dbs_data->enable field Viresh Kumar
2013-01-31 17:28 ` [PATCH 2/2] cpufreq: governors: Remove code redundancy between governors Viresh Kumar
2013-01-31 17:28 ` Viresh Kumar
2013-01-31 18:50 ` Fabio Baltieri
2013-01-31 22:23 ` Rafael J. Wysocki
2013-01-31 22:51 ` Fabio Baltieri
2013-02-01 2:31 ` Viresh Kumar
2013-02-01 2:38 ` Viresh Kumar
2013-02-01 22:32 ` Rafael J. Wysocki
2013-01-31 17:29 ` [PATCH 1/2] cpufreq: governors: Get rid of dbs_data->enable field Viresh Kumar
2013-01-31 17:29 ` Viresh Kumar
2013-01-31 18:44 ` Fabio Baltieri
2013-01-31 18:44 ` Fabio Baltieri
2013-02-01 3:52 ` Viresh Kumar
2013-02-01 6:44 ` Viresh Kumar