* [PATCH 0/4] cputime: Cleanups on adjusted cputime code v2
@ 2012-11-28 17:52 Frederic Weisbecker
  2012-11-28 17:52 ` [PATCH 1/4] cputime: Move thread_group_cputime() to sched code Frederic Weisbecker
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2012-11-28 17:52 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Paul Gortmaker

Hi,

Changes since v1 address Steven Rostedt's and Paul Gortmaker's reviews:

- More comments to distinguish struct cputime / struct task_cputime [3/4]
- Comment the reasons and the details for cputime adjustment [4/4]

Thanks.

Frederic Weisbecker (4):
  cputime: Move thread_group_cputime() to sched code
  cputime: Rename thread_group_times to thread_group_cputime_adjusted
  cputime: Consolidate cputime adjustment code
  cputime: Comment cputime's adjusting code

 fs/proc/array.c           |    4 +-
 include/linux/sched.h     |   27 ++++++++++---
 kernel/exit.c             |    4 +-
 kernel/fork.c             |    2 +-
 kernel/posix-cpu-timers.c |   24 -----------
 kernel/sched/cputime.c    |   98 ++++++++++++++++++++++++++++++++-------------
 kernel/sys.c              |    6 +-
 7 files changed, 99 insertions(+), 66 deletions(-)

-- 
1.7.5.4


* [PATCH 1/4] cputime: Move thread_group_cputime() to sched code
  2012-11-28 17:52 [PATCH 0/4] cputime: Cleanups on adjusted cputime code v2 Frederic Weisbecker
@ 2012-11-28 17:52 ` Frederic Weisbecker
  2012-11-28 17:52 ` [PATCH 2/4] cputime: Rename thread_group_times to thread_group_cputime_adjusted Frederic Weisbecker
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2012-11-28 17:52 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Paul Gortmaker

thread_group_cputime() is a general cputime API that is not only
used by the posix cpu timers. Let's move this helper to the sched code.
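
For reference, a minimal usage sketch of the helper once it lives in
the sched code (illustrative only, not part of this patch; the
pr_info() reporting and the unsigned long long casts are just for the
example, cputime_t being an arch/config dependent type):

	struct task_cputime times;

	thread_group_cputime(tsk, &times);
	pr_info("group utime=%llu stime=%llu rtime(ns)=%llu\n",
		(unsigned long long)times.utime,
		(unsigned long long)times.stime,
		(unsigned long long)times.sum_exec_runtime);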

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
---
 kernel/posix-cpu-timers.c |   24 ------------------------
 kernel/sched/cputime.c    |   28 ++++++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 24 deletions(-)

diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
index 125cb67..d738402 100644
--- a/kernel/posix-cpu-timers.c
+++ b/kernel/posix-cpu-timers.c
@@ -217,30 +217,6 @@ static int cpu_clock_sample(const clockid_t which_clock, struct task_struct *p,
 	return 0;
 }
 
-void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
-{
-	struct signal_struct *sig = tsk->signal;
-	struct task_struct *t;
-
-	times->utime = sig->utime;
-	times->stime = sig->stime;
-	times->sum_exec_runtime = sig->sum_sched_runtime;
-
-	rcu_read_lock();
-	/* make sure we can trust tsk->thread_group list */
-	if (!likely(pid_alive(tsk)))
-		goto out;
-
-	t = tsk;
-	do {
-		times->utime += t->utime;
-		times->stime += t->stime;
-		times->sum_exec_runtime += task_sched_runtime(t);
-	} while_each_thread(tsk, t);
-out:
-	rcu_read_unlock();
-}
-
 static void update_gt_cputime(struct task_cputime *a, struct task_cputime *b)
 {
 	if (b->utime > a->utime)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 8d859da..e56f138 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -288,6 +288,34 @@ static __always_inline bool steal_account_process_tick(void)
 	return false;
 }
 
+/*
+ * Accumulate raw cputime values of dead tasks (sig->[us]time) and live
+ * tasks (sum on group iteration) belonging to @tsk's group.
+ */
+void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
+{
+	struct signal_struct *sig = tsk->signal;
+	struct task_struct *t;
+
+	times->utime = sig->utime;
+	times->stime = sig->stime;
+	times->sum_exec_runtime = sig->sum_sched_runtime;
+
+	rcu_read_lock();
+	/* make sure we can trust tsk->thread_group list */
+	if (!likely(pid_alive(tsk)))
+		goto out;
+
+	t = tsk;
+	do {
+		times->utime += t->utime;
+		times->stime += t->stime;
+		times->sum_exec_runtime += task_sched_runtime(t);
+	} while_each_thread(tsk, t);
+out:
+	rcu_read_unlock();
+}
+
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING
 
 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
-- 
1.7.5.4


* [PATCH 2/4] cputime: Rename thread_group_times to thread_group_cputime_adjusted
  2012-11-28 17:52 [PATCH 0/4] cputime: Cleanups on adjusted cputime code v2 Frederic Weisbecker
  2012-11-28 17:52 ` [PATCH 1/4] cputime: Move thread_group_cputime() to sched code Frederic Weisbecker
@ 2012-11-28 17:52 ` Frederic Weisbecker
  2012-11-28 17:52 ` [PATCH 3/4] cputime: Consolidate cputime adjustment code Frederic Weisbecker
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2012-11-28 17:52 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Paul Gortmaker

We have thread_group_cputime() and thread_group_times(). The naming
doesn't provide enough information about the difference between
these two APIs.

To reduce the confusion, rename thread_group_times() to
thread_group_cputime_adjusted(). This name better suggests that
it is a version of thread_group_cputime() that performs some
stabilization on the raw cputime values, i.e. here: scaling on top
of the CFS runtime stats and bounding against the previous values
to keep monotonicity.
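
To make this concrete, here is a simplified sketch of what "adjusted"
means (plain u64 arithmetic instead of cputime_t, and the function
name is made up for the example; the real code is only touched by the
follow-up patches):

	static void sketch_cputime_adjust(u64 utime, u64 stime, u64 rtime,
					  u64 *prev_utime, u64 *prev_stime,
					  u64 *ut, u64 *st)
	{
		u64 total = utime + stime;

		/* Scale the tick based user/system split against the
		 * precise CFS runtime */
		if (total)
			utime = div64_u64(utime * rtime, total);
		else
			utime = rtime;

		/* Never let the reported values go backward */
		*prev_utime = max(*prev_utime, utime);
		*prev_stime = max(*prev_stime, rtime - *prev_utime);

		*ut = *prev_utime;
		*st = *prev_stime;
	}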

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
---
 fs/proc/array.c        |    4 ++--
 include/linux/sched.h  |    4 ++--
 kernel/exit.c          |    4 ++--
 kernel/sched/cputime.c |    8 ++++----
 kernel/sys.c           |    6 +++---
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/proc/array.c b/fs/proc/array.c
index c1c207c..d369670 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -438,7 +438,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
 
 			min_flt += sig->min_flt;
 			maj_flt += sig->maj_flt;
-			thread_group_times(task, &utime, &stime);
+			thread_group_cputime_adjusted(task, &utime, &stime);
 			gtime += sig->gtime;
 		}
 
@@ -454,7 +454,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
 	if (!whole) {
 		min_flt = task->min_flt;
 		maj_flt = task->maj_flt;
-		task_times(task, &utime, &stime);
+		task_cputime_adjusted(task, &utime, &stime);
 		gtime = task->gtime;
 	}
 
diff --git a/include/linux/sched.h b/include/linux/sched.h
index e1581a0..e75cab5 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1751,8 +1751,8 @@ static inline void put_task_struct(struct task_struct *t)
 		__put_task_struct(t);
 }
 
-extern void task_times(struct task_struct *p, cputime_t *ut, cputime_t *st);
-extern void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *st);
+extern void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st);
+extern void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st);
 
 /*
  * Per process flags
diff --git a/kernel/exit.c b/kernel/exit.c
index 346616c..618f7ee 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -1186,11 +1186,11 @@ static int wait_task_zombie(struct wait_opts *wo, struct task_struct *p)
 		 * as other threads in the parent group can be right
 		 * here reaping other children at the same time.
 		 *
-		 * We use thread_group_times() to get times for the thread
+		 * We use thread_group_cputime_adjusted() to get times for the thread
 		 * group, which consolidates times for all threads in the
 		 * group including the group leader.
 		 */
-		thread_group_times(p, &tgutime, &tgstime);
+		thread_group_cputime_adjusted(p, &tgutime, &tgstime);
 		spin_lock_irq(&p->real_parent->sighand->siglock);
 		psig = p->real_parent->signal;
 		sig = p->signal;
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index e56f138..7dc1553 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -445,13 +445,13 @@ void account_idle_ticks(unsigned long ticks)
  * Use precise platform statistics if available:
  */
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
-void task_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
+void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
 {
 	*ut = p->utime;
 	*st = p->stime;
 }
 
-void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
+void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
 {
 	struct task_cputime cputime;
 
@@ -516,7 +516,7 @@ static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
 	return (__force cputime_t) temp;
 }
 
-void task_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
+void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
 {
 	cputime_t rtime, utime = p->utime, total = utime + p->stime;
 
@@ -543,7 +543,7 @@ void task_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
 /*
  * Must be called with siglock held.
  */
-void thread_group_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
+void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
 {
 	struct signal_struct *sig = p->signal;
 	struct task_cputime cputime;
diff --git a/kernel/sys.c b/kernel/sys.c
index e6e0ece..265b376 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1046,7 +1046,7 @@ void do_sys_times(struct tms *tms)
 	cputime_t tgutime, tgstime, cutime, cstime;
 
 	spin_lock_irq(&current->sighand->siglock);
-	thread_group_times(current, &tgutime, &tgstime);
+	thread_group_cputime_adjusted(current, &tgutime, &tgstime);
 	cutime = current->signal->cutime;
 	cstime = current->signal->cstime;
 	spin_unlock_irq(&current->sighand->siglock);
@@ -1704,7 +1704,7 @@ static void k_getrusage(struct task_struct *p, int who, struct rusage *r)
 	utime = stime = 0;
 
 	if (who == RUSAGE_THREAD) {
-		task_times(current, &utime, &stime);
+		task_cputime_adjusted(current, &utime, &stime);
 		accumulate_thread_rusage(p, r);
 		maxrss = p->signal->maxrss;
 		goto out;
@@ -1730,7 +1730,7 @@ static void k_getrusage(struct task_struct *p, int who, struct rusage *r)
 				break;
 
 		case RUSAGE_SELF:
-			thread_group_times(p, &tgutime, &tgstime);
+			thread_group_cputime_adjusted(p, &tgutime, &tgstime);
 			utime += tgutime;
 			stime += tgstime;
 			r->ru_nvcsw += p->signal->nvcsw;
-- 
1.7.5.4


* [PATCH 3/4] cputime: Consolidate cputime adjustment code
  2012-11-28 17:52 [PATCH 0/4] cputime: Cleanups on adjusted cputime code v2 Frederic Weisbecker
  2012-11-28 17:52 ` [PATCH 1/4] cputime: Move thread_group_cputime() to sched code Frederic Weisbecker
  2012-11-28 17:52 ` [PATCH 2/4] cputime: Rename thread_group_times to thread_group_cputime_adjusted Frederic Weisbecker
@ 2012-11-28 17:52 ` Frederic Weisbecker
  2012-11-28 17:52 ` [PATCH 4/4] cputime: Comment cputime's adjusting code Frederic Weisbecker
  2012-11-29 17:45 ` [GIT PULL] cputime: Cleanups on adjusted cputime code Frederic Weisbecker
  4 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2012-11-28 17:52 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Paul Gortmaker

task_cputime_adjusted() and thread_group_cputime_adjusted()
essentially share the same code. They just don't use the same
source:

* The first function uses the cputime in the task struct and the
previous adjusted snapshot that ensures monotonicity.

* The second adds the cputime of all tasks in the group and the
previous adjusted snapshot of the whole group from the signal
structure.

Consolidate the common code that does the adjustment. These
functions then only need to fetch the raw values from the
appropriate source, as sketched below.
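
From a caller's point of view the two entry points become thin
wrappers (illustrative sketch only, see the diff below for the actual
code):

	cputime_t ut, st;

	/* per task: raw counts and prev snapshot live in the task struct */
	task_cputime_adjusted(p, &ut, &st);

	/* per group: raw counts come from thread_group_cputime(),
	 * the prev snapshot lives in p->signal->prev_cputime */
	thread_group_cputime_adjusted(p, &ut, &st);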

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
---
 include/linux/sched.h  |   23 +++++++++++++++++++----
 kernel/fork.c          |    2 +-
 kernel/sched/cputime.c |   46 +++++++++++++++++++++++-----------------------
 3 files changed, 43 insertions(+), 28 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e75cab5..5dafac36 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -434,13 +434,28 @@ struct cpu_itimer {
 };
 
 /**
+ * struct cputime - snapshot of system and user cputime
+ * @utime: time spent in user mode
+ * @stime: time spent in system mode
+ *
+ * Gathers a generic snapshot of user and system time.
+ */
+struct cputime {
+	cputime_t utime;
+	cputime_t stime;
+};
+
+/**
  * struct task_cputime - collected CPU time counts
  * @utime:		time spent in user mode, in &cputime_t units
  * @stime:		time spent in kernel mode, in &cputime_t units
  * @sum_exec_runtime:	total time spent on the CPU, in nanoseconds
  *
- * This structure groups together three kinds of CPU time that are
- * tracked for threads and thread groups.  Most things considering
+ * This is an extension of struct cputime that includes the total runtime
+ * spent by the task from the scheduler point of view.
+ *
+ * As a result, this structure groups together three kinds of CPU time
+ * that are tracked for threads and thread groups.  Most things considering
  * CPU time want to group these counts together and treat all three
  * of them in parallel.
  */
@@ -581,7 +596,7 @@ struct signal_struct {
 	cputime_t gtime;
 	cputime_t cgtime;
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING
-	cputime_t prev_utime, prev_stime;
+	struct cputime prev_cputime;
 #endif
 	unsigned long nvcsw, nivcsw, cnvcsw, cnivcsw;
 	unsigned long min_flt, maj_flt, cmin_flt, cmaj_flt;
@@ -1340,7 +1355,7 @@ struct task_struct {
 	cputime_t utime, stime, utimescaled, stimescaled;
 	cputime_t gtime;
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING
-	cputime_t prev_utime, prev_stime;
+	struct cputime prev_cputime;
 #endif
 	unsigned long nvcsw, nivcsw; /* context switch counts */
 	struct timespec start_time; 		/* monotonic time */
diff --git a/kernel/fork.c b/kernel/fork.c
index 8b20ab7..0e7cdb9 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1222,7 +1222,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 	p->utime = p->stime = p->gtime = 0;
 	p->utimescaled = p->stimescaled = 0;
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING
-	p->prev_utime = p->prev_stime = 0;
+	p->prev_cputime.utime = p->prev_cputime.stime = 0;
 #endif
 #if defined(SPLIT_RSS_COUNTING)
 	memset(&p->rss_stat, 0, sizeof(p->rss_stat));
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 7dc1553..220fdc4 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -516,14 +516,18 @@ static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
 	return (__force cputime_t) temp;
 }
 
-void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
+static void cputime_adjust(struct task_cputime *curr,
+			   struct cputime *prev,
+			   cputime_t *ut, cputime_t *st)
 {
-	cputime_t rtime, utime = p->utime, total = utime + p->stime;
+	cputime_t rtime, utime, total;
 
+	utime = curr->utime;
+	total = utime + curr->stime;
 	/*
 	 * Use CFS's precise accounting:
 	 */
-	rtime = nsecs_to_cputime(p->se.sum_exec_runtime);
+	rtime = nsecs_to_cputime(curr->sum_exec_runtime);
 
 	if (total)
 		utime = scale_utime(utime, rtime, total);
@@ -533,11 +537,22 @@ void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
 	/*
 	 * Compare with previous values, to keep monotonicity:
 	 */
-	p->prev_utime = max(p->prev_utime, utime);
-	p->prev_stime = max(p->prev_stime, rtime - p->prev_utime);
+	prev->utime = max(prev->utime, utime);
+	prev->stime = max(prev->stime, rtime - prev->utime);
+
+	*ut = prev->utime;
+	*st = prev->stime;
+}
 
-	*ut = p->prev_utime;
-	*st = p->prev_stime;
+void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
+{
+	struct task_cputime cputime = {
+		.utime = p->utime,
+		.stime = p->stime,
+		.sum_exec_runtime = p->se.sum_exec_runtime,
+	};
+
+	cputime_adjust(&cputime, &p->prev_cputime, ut, st);
 }
 
 /*
@@ -545,24 +560,9 @@ void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
  */
 void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)
 {
-	struct signal_struct *sig = p->signal;
 	struct task_cputime cputime;
-	cputime_t rtime, utime, total;
 
 	thread_group_cputime(p, &cputime);
-
-	total = cputime.utime + cputime.stime;
-	rtime = nsecs_to_cputime(cputime.sum_exec_runtime);
-
-	if (total)
-		utime = scale_utime(cputime.utime, rtime, total);
-	else
-		utime = rtime;
-
-	sig->prev_utime = max(sig->prev_utime, utime);
-	sig->prev_stime = max(sig->prev_stime, rtime - sig->prev_utime);
-
-	*ut = sig->prev_utime;
-	*st = sig->prev_stime;
+	cputime_adjust(&cputime, &p->signal->prev_cputime, ut, st);
 }
 #endif
-- 
1.7.5.4


* [PATCH 4/4] cputime: Comment cputime's adjusting code
  2012-11-28 17:52 [PATCH 0/4] cputime: Cleanups on adjusted cputime code v2 Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2012-11-28 17:52 ` [PATCH 3/4] cputime: Consolidate cputime adjustment code Frederic Weisbecker
@ 2012-11-28 17:52 ` Frederic Weisbecker
  2012-11-29 17:45 ` [GIT PULL] cputime: Cleanups on adjusted cputime code Frederic Weisbecker
  4 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2012-11-28 17:52 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Ingo Molnar, Peter Zijlstra,
	Thomas Gleixner, Steven Rostedt, Paul Gortmaker

The reason for the scaling and monotonicity correction performed
by cputime_adjust() may not be immediately clear to the reviewer.

Add some comments to explain what happens there.
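
A quick worked example with made-up numbers: say the tick based
counters report utime = 6 and stime = 4 jiffies (total = 10), while
the scheduler's precise accounting says the task actually ran for
rtime = 9 jiffies worth of time. The scaling then yields:

	utime = 6 * 9 / 10 = 5   (rounded down)
	stime = 9 - 5      = 4

and the max() against the previously reported snapshot guarantees
that the values exported to userspace never go backward between two
reads.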

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
---
 kernel/sched/cputime.c |   18 ++++++++++++++++--
 1 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 220fdc4..b7f7317 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -516,6 +516,10 @@ static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
 	return (__force cputime_t) temp;
 }
 
+/*
+ * Adjust the tick based cputime's inherent imprecision against the
+ * precise scheduler runtime accounting.
+ */
 static void cputime_adjust(struct task_cputime *curr,
 			   struct cputime *prev,
 			   cputime_t *ut, cputime_t *st)
@@ -524,8 +528,16 @@ static void cputime_adjust(struct task_cputime *curr,
 
 	utime = curr->utime;
 	total = utime + curr->stime;
+
 	/*
-	 * Use CFS's precise accounting:
+	 * Tick based cputime accounting depends on whether the random
+	 * scheduling timeslices of a task get interrupted or not by the
+	 * timer. Depending on these circumstances, the number of these
+	 * interrupts may over- or under-estimate the real user and system
+	 * cputime, matching them only with a variable precision.
+	 *
+	 * Fix this by scaling these tick based values against the total
+	 * runtime accounted by the CFS scheduler.
 	 */
 	rtime = nsecs_to_cputime(curr->sum_exec_runtime);
 
@@ -535,7 +547,9 @@ static void cputime_adjust(struct task_cputime *curr,
 		utime = rtime;
 
 	/*
-	 * Compare with previous values, to keep monotonicity:
+	 * If the tick based count grows faster than the scheduler one,
+	 * the result of the scaling may go backward.
+	 * Let's enforce monotonicity.
 	 */
 	prev->utime = max(prev->utime, utime);
 	prev->stime = max(prev->stime, rtime - prev->utime);
-- 
1.7.5.4


* [GIT PULL] cputime: Cleanups on adjusted cputime code
  2012-11-28 17:52 [PATCH 0/4] cputime: Cleanups on adjusted cputime code v2 Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2012-11-28 17:52 ` [PATCH 4/4] cputime: Comment cputime's adjusting code Frederic Weisbecker
@ 2012-11-29 17:45 ` Frederic Weisbecker
  2012-12-08 14:33   ` Ingo Molnar
  4 siblings, 1 reply; 7+ messages in thread
From: Frederic Weisbecker @ 2012-11-29 17:45 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Peter Zijlstra, Thomas Gleixner,
	Steven Rostedt, Paul Gortmaker


Ingo,

Please pull the latest cputime adjustment cleanups that can be found at:

  git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git cputime/adjustment-v2

for you to fetch changes up to fa09205783d11cc05122ad6e4ce06074624b2c0c:

  cputime: Comment cputime's adjusting code (2012-11-28 17:08:20 +0100)

Note it's not related to my previous cputime pull request. Both are independent.

Thanks.

----------------------------------------------------------------
Cputime cleanups on reader side:

* Improve naming and code location

* Consolidate adjustment code

* Comment the adjustment code

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>

----------------------------------------------------------------
Frederic Weisbecker (4):
      cputime: Move thread_group_cputime() to sched code
      cputime: Rename thread_group_times to thread_group_cputime_adjusted
      cputime: Consolidate cputime adjustment code
      cputime: Comment cputime's adjusting code

 fs/proc/array.c           |    4 +-
 include/linux/sched.h     |   27 ++++++++++---
 kernel/exit.c             |    4 +-
 kernel/fork.c             |    2 +-
 kernel/posix-cpu-timers.c |   24 -----------
 kernel/sched/cputime.c    |   98 ++++++++++++++++++++++++++++++++-------------
 kernel/sys.c              |    6 +--
 7 files changed, 99 insertions(+), 66 deletions(-)

* Re: [GIT PULL] cputime: Cleanups on adjusted cputime code
  2012-11-29 17:45 ` [GIT PULL] cputime: Cleanups on adjusted cputime code Frederic Weisbecker
@ 2012-12-08 14:33   ` Ingo Molnar
  0 siblings, 0 replies; 7+ messages in thread
From: Ingo Molnar @ 2012-12-08 14:33 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Peter Zijlstra, Thomas Gleixner, Steven Rostedt, Paul Gortmaker


* Frederic Weisbecker <fweisbec@gmail.com> wrote:

> 
> Ingo,
> 
> Please pull the latest cputime adjustment cleanups that can be found at:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git cputime/adjustment-v2
> 
> for you to fetch changes up to fa09205783d11cc05122ad6e4ce06074624b2c0c:
> 
>   cputime: Comment cputime's adjusting code (2012-11-28 17:08:20 +0100)
> 
> Note it's not related to my previous cputime pull request. Both are independent.
> 
> Thanks.
> 
> ----------------------------------------------------------------
> Cputime cleanups on reader side:
> 
> * Improve naming and code location
> 
> * Consolidate adjustment code
> 
> * Comment the adjustment code
> 
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> 
> ----------------------------------------------------------------
> Frederic Weisbecker (4):
>       cputime: Move thread_group_cputime() to sched code
>       cputime: Rename thread_group_times to thread_group_cputime_adjusted
>       cputime: Consolidate cputime adjustment code
>       cputime: Comment cputime's adjusting code
> 
>  fs/proc/array.c           |    4 +-
>  include/linux/sched.h     |   27 ++++++++++---
>  kernel/exit.c             |    4 +-
>  kernel/fork.c             |    2 +-
>  kernel/posix-cpu-timers.c |   24 -----------
>  kernel/sched/cputime.c    |   98 ++++++++++++++++++++++++++++++++-------------
>  kernel/sys.c              |    6 +--
>  7 files changed, 99 insertions(+), 66 deletions(-)

Pulled into sched/core, thanks a lot Frederic!

	Ingo
