linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases)
@ 2013-01-07 11:31 Stanislaw Gruszka
  2013-01-09 18:33 ` Frederic Weisbecker
  0 siblings, 1 reply; 7+ messages in thread
From: Stanislaw Gruszka @ 2013-01-07 11:31 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, linux-kernel, Oleg Nesterov, Frederic Weisbecker, akpm

We scale stime and utime values based on rtime (sum_exec_runtime converted
to jiffies). During scaling we multiply rtime * utime, which seems to be
fine since both values are converted to u64, but it is not.

Let's assume HZ is 1000, i.e. a 1ms tick. A process consists of 64 threads,
runs for 1 day, and its threads utilize 100% CPU in user space. The machine
has 64 CPUs.

The process rtime = utime will be 64 * 24 * 60 * 60 * 1000 jiffies, which is
0x149970000. The multiplication rtime * utime results in 0x1a855771100000000,
which cannot be represented in 64 bits.

The result of the overflow is a stall of the utime values visible in user
space (prev_utime in the kernel), even if the application still consumes a
lot of CPU time.
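
For illustration, a minimal standalone C sketch (userspace code, not part of
the patch; the layout is just for demonstration) that reproduces the
wraparound with the numbers above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* 64 threads * 24h * 3600s * HZ(1000) jiffies, as in the example
	 * above: rtime == utime == 0x149970000, stime == 0. */
	uint64_t rtime = 64ULL * 24 * 60 * 60 * 1000;
	uint64_t utime = rtime;
	uint64_t total = utime;		/* utime + stime */
	uint64_t temp  = rtime * utime;	/* true product needs ~65 bits: wraps */

	printf("rtime * utime = %#llx (wrapped)\n", (unsigned long long)temp);
	printf("scaled utime  = %llu, expected %llu\n",
	       (unsigned long long)(temp / total),
	       (unsigned long long)rtime);
	return 0;
}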

Probably a good fix for the problem would be using a 128-bit variable and
proper mul128 and div_u128_u64 primitives. While mul128 is on its way into
the kernel, there is no 128-bit division yet, and I'm not sure we want to
add one. Perhaps we could also change the way stime and utime are
calculated, but I don't know how, so I came up with the solution below.
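
For reference, a hypothetical sketch of what such a 128-bit variant could
look like, in standalone C with u64 standing in for cputime_t. It assumes
compiler __int128 support (gcc/clang on 64-bit targets); in-kernel, the
division below would need the missing div_u128_u64 primitive, since it
compiles to libgcc's __udivti3, which the kernel does not link against:

#include <stdio.h>
#include <stdint.h>

static uint64_t scale_utime_128(uint64_t utime, uint64_t rtime, uint64_t total)
{
	unsigned __int128 temp = rtime;

	temp *= utime;				/* full 128-bit product, no wrap */
	return (uint64_t)(temp / total);	/* fits in 64 bits: utime <= total */
}

int main(void)
{
	uint64_t rtime = 64ULL * 24 * 60 * 60 * 1000;

	/* Same scenario as above, now giving the correct 5529600000. */
	printf("scaled utime = %llu\n",
	       (unsigned long long)scale_utime_128(rtime, rtime, rtime));
	return 0;
}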

To avoid the overflow, the patch changes the value we scale to
min(stime, utime). This is more of a workaround, but it will work for
processes which run mostly in user space or mostly in kernel space.
Unfortunately, processes which split their time equally between kernel and
user space, and additionally utilize a lot of CPU time, will still hit this
overflow pretty quickly: with utime ~= stime the multiplied value is about
rtime / 2, so the product exceeds 2^64 once rtime passes roughly 6e9
jiffies, i.e. about 70 days of CPU time (just over a day of wall clock on
the 64-CPU example above). However, such processes seem to be uncommon.
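
A userspace sketch of the min(stime, utime) scaling, again with u64 standing
in for cputime_t (scale_min is a hypothetical stand-in, not the patch's
function), showing the common case working and the 50/50 case still wrapping:

#include <stdio.h>
#include <stdint.h>

/* Scaled value for min(utime, stime); the caller derives the other side
 * as rtime - result, as the patch below does. */
static uint64_t scale_min(uint64_t utime, uint64_t stime, uint64_t rtime)
{
	uint64_t min = utime < stime ? utime : stime;

	if (utime == stime)
		return rtime / 2;
	return rtime * min / (utime + stime);	/* wraps if rtime * min >= 2^64 */
}

int main(void)
{
	/* Mostly-userspace process from the example: tiny stime, no wrap. */
	uint64_t rtime = 64ULL * 24 * 60 * 60 * 1000;

	printf("scaled stime = %llu\n",
	       (unsigned long long)scale_min(rtime - 1000, 1000, rtime));

	/* Near 50/50 split after ~27h at 100% on 64 CPUs:
	 * rtime * (rtime / 2) > 2^64, so the product wraps again. */
	rtime = 64ULL * 27 * 60 * 60 * 1000;
	printf("scaled min = %llu (expected ~%llu)\n",
	       (unsigned long long)scale_min(rtime / 2 + 1, rtime / 2 - 1, rtime),
	       (unsigned long long)(rtime / 2));
	return 0;
}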

Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
---
v1 -> v2: rebase to the current Linus tree

 kernel/sched/cputime.c |   61 +++++++++++++++++++++++++++++-------------------
 1 files changed, 37 insertions(+), 24 deletions(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 293b202..5e2309a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -509,20 +509,6 @@ EXPORT_SYMBOL_GPL(vtime_account);
 # define nsecs_to_cputime(__nsecs)	nsecs_to_jiffies(__nsecs)
 #endif
 
-static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
-{
-	u64 temp = (__force u64) rtime;
-
-	temp *= (__force u64) utime;
-
-	if (sizeof(cputime_t) == 4)
-		temp = div_u64(temp, (__force u32) total);
-	else
-		temp = div64_u64(temp, (__force u64) total);
-
-	return (__force cputime_t) temp;
-}
-
 /*
  * Adjust tick based cputime random precision against scheduler
  * runtime accounting.
@@ -531,10 +517,11 @@ static void cputime_adjust(struct task_cputime *curr,
 			   struct cputime *prev,
 			   cputime_t *ut, cputime_t *st)
 {
-	cputime_t rtime, utime, total;
-
-	utime = curr->utime;
-	total = utime + curr->stime;
+	cputime_t utime = curr->utime;
+	cputime_t stime = curr->stime;
+	cputime_t rtime, total, scaled_time;
+	bool utime_scale = false;
+	u64 tmp;
 
 	/*
 	 * Tick based cputime accounting depend on random scheduling
@@ -548,18 +535,44 @@ static void cputime_adjust(struct task_cputime *curr,
 	 */
 	rtime = nsecs_to_cputime(curr->sum_exec_runtime);
 
-	if (total)
-		utime = scale_utime(utime, rtime, total);
-	else
-		utime = rtime;
+	if (utime == stime) {
+		scaled_time = rtime / 2;
+	} else {
+		tmp = (__force u64) rtime;
+
+		/*
+		 * Choose smaller value to avoid possible overflow during
+		 * multiplication.
+		 */
+		if (utime < stime) {
+			tmp *= utime;
+			utime_scale = true;
+		} else {
+			tmp *= stime;
+		}
+
+		total = utime + stime;
+
+		if (sizeof(cputime_t) == 4)
+			tmp = div_u64(tmp, (__force u32) total);
+		else
+			tmp = div64_u64(tmp, (__force u64) total);
+
+		scaled_time = (__force cputime_t) tmp;
+	}
 
 	/*
 	 * If the tick based count grows faster than the scheduler one,
 	 * the result of the scaling may go backward.
 	 * Let's enforce monotonicity.
 	 */
-	prev->utime = max(prev->utime, utime);
-	prev->stime = max(prev->stime, rtime - prev->utime);
+	if (utime_scale) {
+		prev->utime = max(prev->utime, scaled_time);
+		prev->stime = max(prev->stime, rtime - prev->utime);
+	} else {
+		prev->stime = max(prev->stime, scaled_time);
+		prev->utime = max(prev->utime, rtime - prev->stime);
+	}
 
 	*ut = prev->utime;
 	*st = prev->stime;
-- 
1.7.1




* Re: [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases)
  2013-01-07 11:31 [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases) Stanislaw Gruszka
@ 2013-01-09 18:33 ` Frederic Weisbecker
  2013-01-10 12:26   ` Stanislaw Gruszka
  2013-01-24  1:46   ` Xiaotian Feng
  0 siblings, 2 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2013-01-09 18:33 UTC (permalink / raw)
  To: Stanislaw Gruszka
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Oleg Nesterov, akpm,
	Steven Rostedt

On Mon, Jan 07, 2013 at 12:31:45PM +0100, Stanislaw Gruszka wrote:
> We scale stime and utime values based on rtime (sum_exec_runtime converted
> to jiffies). During scaling we multiply rtime * utime, which seems to be
> fine since both values are converted to u64, but it is not.
> 
> Let's assume HZ is 1000, i.e. a 1ms tick. A process consists of 64 threads,
> runs for 1 day, and its threads utilize 100% CPU in user space. The machine
> has 64 CPUs.
> 
> The process rtime = utime will be 64 * 24 * 60 * 60 * 1000 jiffies, which is
> 0x149970000. The multiplication rtime * utime results in 0x1a855771100000000,
> which cannot be represented in 64 bits.
> 
> The result of the overflow is a stall of the utime values visible in user
> space (prev_utime in the kernel), even if the application still consumes a
> lot of CPU time.
> 
> Probably a good fix for the problem would be using a 128-bit variable and
> proper mul128 and div_u128_u64 primitives. While mul128 is on its way into
> the kernel, there is no 128-bit division yet, and I'm not sure we want to
> add one. Perhaps we could also change the way stime and utime are
> calculated, but I don't know how, so I came up with the solution below.
> 
> To avoid the overflow, the patch changes the value we scale to
> min(stime, utime). This is more of a workaround, but it will work for
> processes which run mostly in user space or mostly in kernel space.
> Unfortunately, processes which split their time equally between kernel and
> user space, and additionally utilize a lot of CPU time, will still hit this
> overflow pretty quickly: with utime ~= stime the multiplied value is about
> rtime / 2, so the product exceeds 2^64 once rtime passes roughly 6e9
> jiffies, i.e. about 70 days of CPU time (just over a day of wall clock on
> the 64-CPU example above). However, such processes seem to be uncommon.
> 
> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>

I can easily imagine that overflow happening with user time on intensive
CPU-bound loads, or maybe with guests.

But can we easily reach the same for system time? Even on intensive I/O-bound
loads we shouldn't spend that much time in the kernel. Most of it probably
goes to idle.

What do you think?

If that assumption is right in most cases, the following patch should solve the
issue:

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 293b202..0650dd4 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -509,11 +509,11 @@ EXPORT_SYMBOL_GPL(vtime_account);
 # define nsecs_to_cputime(__nsecs)	nsecs_to_jiffies(__nsecs)
 #endif
 
-static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
+static cputime_t scale_utime(cputime_t stime, cputime_t rtime, cputime_t total)
 {
 	u64 temp = (__force u64) rtime;
 
-	temp *= (__force u64) utime;
+	temp *= (__force u64) stime;
 
 	if (sizeof(cputime_t) == 4)
 		temp = div_u64(temp, (__force u32) total);
@@ -531,10 +531,10 @@ static void cputime_adjust(struct task_cputime *curr,
 			   struct cputime *prev,
 			   cputime_t *ut, cputime_t *st)
 {
-	cputime_t rtime, utime, total;
+	cputime_t rtime, stime, total;
 
-	utime = curr->utime;
-	total = utime + curr->stime;
+	stime = curr->stime;
+	total = stime + curr->utime;
 
 	/*
 	 * Tick based cputime accounting depend on random scheduling
@@ -549,17 +549,17 @@ static void cputime_adjust(struct task_cputime *curr,
 	rtime = nsecs_to_cputime(curr->sum_exec_runtime);
 
 	if (total)
-		utime = scale_utime(utime, rtime, total);
+		stime = scale_stime(stime, rtime, total);
 	else
-		utime = rtime;
+		stime = rtime;
 
 	/*
 	 * If the tick based count grows faster than the scheduler one,
 	 * the result of the scaling may go backward.
 	 * Let's enforce monotonicity.
 	 */
-	prev->utime = max(prev->utime, utime);
-	prev->stime = max(prev->stime, rtime - prev->utime);
+	prev->stime = max(prev->stime, stime);
+	prev->utime = max(prev->utime, rtime - prev->stime);
 
 	*ut = prev->utime;
 	*st = prev->stime;


* Re: [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases)
  2013-01-09 18:33 ` Frederic Weisbecker
@ 2013-01-10 12:26   ` Stanislaw Gruszka
  2013-01-23 15:58     ` Frederic Weisbecker
  2013-01-24  1:46   ` Xiaotian Feng
  1 sibling, 1 reply; 7+ messages in thread
From: Stanislaw Gruszka @ 2013-01-10 12:26 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Oleg Nesterov, akpm,
	Steven Rostedt

On Wed, Jan 09, 2013 at 07:33:03PM +0100, Frederic Weisbecker wrote:
> On Mon, Jan 07, 2013 at 12:31:45PM +0100, Stanislaw Gruszka wrote:
> > We scale stime and utime values based on rtime (sum_exec_runtime converted
> > to jiffies). During scaling we multiply rtime * utime, which seems to be
> > fine since both values are converted to u64, but it is not.
> > 
> > Let's assume HZ is 1000, i.e. a 1ms tick. A process consists of 64 threads,
> > runs for 1 day, and its threads utilize 100% CPU in user space. The machine
> > has 64 CPUs.
> > 
> > The process rtime = utime will be 64 * 24 * 60 * 60 * 1000 jiffies, which is
> > 0x149970000. The multiplication rtime * utime results in 0x1a855771100000000,
> > which cannot be represented in 64 bits.
> > 
> > The result of the overflow is a stall of the utime values visible in user
> > space (prev_utime in the kernel), even if the application still consumes a
> > lot of CPU time.
> > 
> > Probably a good fix for the problem would be using a 128-bit variable and
> > proper mul128 and div_u128_u64 primitives. While mul128 is on its way into
> > the kernel, there is no 128-bit division yet, and I'm not sure we want to
> > add one. Perhaps we could also change the way stime and utime are
> > calculated, but I don't know how, so I came up with the solution below.
> > 
> > To avoid the overflow, the patch changes the value we scale to
> > min(stime, utime). This is more of a workaround, but it will work for
> > processes which run mostly in user space or mostly in kernel space.
> > Unfortunately, processes which split their time equally between kernel and
> > user space, and additionally utilize a lot of CPU time, will still hit this
> > overflow pretty quickly: with utime ~= stime the multiplied value is about
> > rtime / 2, so the product exceeds 2^64 once rtime passes roughly 6e9
> > jiffies, i.e. about 70 days of CPU time (just over a day of wall clock on
> > the 64-CPU example above). However, such processes seem to be uncommon.
> > 
> > Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
> 
> I can easily imagine that overflow happening with user time on intensive
> CPU-bound loads, or maybe with guests.
> 
> But can we easily reach the same for system time? Even on intensive
> I/O-bound loads we shouldn't spend that much time in the kernel. Most of it
> probably goes to idle.
> 
> What do you think?

I think you are right :-)
 
> If that assumption is right in most cases, the following patch should solve the
> issue:

I'm fine with this patch; it achieves the same effect as my patch, but is simpler.

Thanks
Stanislaw

> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index 293b202..0650dd4 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -509,11 +509,11 @@ EXPORT_SYMBOL_GPL(vtime_account);
>  # define nsecs_to_cputime(__nsecs)	nsecs_to_jiffies(__nsecs)
>  #endif
>  
> -static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
> +static cputime_t scale_utime(cputime_t stime, cputime_t rtime, cputime_t total)
>  {
>  	u64 temp = (__force u64) rtime;
>  
> -	temp *= (__force u64) utime;
> +	temp *= (__force u64) stime;
>  
>  	if (sizeof(cputime_t) == 4)
>  		temp = div_u64(temp, (__force u32) total);
> @@ -531,10 +531,10 @@ static void cputime_adjust(struct task_cputime *curr,
>  			   struct cputime *prev,
>  			   cputime_t *ut, cputime_t *st)
>  {
> -	cputime_t rtime, utime, total;
> +	cputime_t rtime, stime, total;
>  
> -	utime = curr->utime;
> -	total = utime + curr->stime;
> +	stime = curr->stime;
> +	total = stime + curr->utime;
>  
>  	/*
>  	 * Tick based cputime accounting depend on random scheduling
> @@ -549,17 +549,17 @@ static void cputime_adjust(struct task_cputime *curr,
>  	rtime = nsecs_to_cputime(curr->sum_exec_runtime);
>  
>  	if (total)
> -		utime = scale_utime(utime, rtime, total);
> +		stime = scale_stime(stime, rtime, total);
>  	else
> -		utime = rtime;
> +		stime = rtime;
>  
>  	/*
>  	 * If the tick based count grows faster than the scheduler one,
>  	 * the result of the scaling may go backward.
>  	 * Let's enforce monotonicity.
>  	 */
> -	prev->utime = max(prev->utime, utime);
> -	prev->stime = max(prev->stime, rtime - prev->utime);
> +	prev->stime = max(prev->stime, stime);
> +	prev->utime = max(prev->utime, rtime - prev->stime);
>  
>  	*ut = prev->utime;
>  	*st = prev->stime;


* Re: [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases)
  2013-01-10 12:26   ` Stanislaw Gruszka
@ 2013-01-23 15:58     ` Frederic Weisbecker
  2013-01-24 11:05       ` Stanislaw Gruszka
  0 siblings, 1 reply; 7+ messages in thread
From: Frederic Weisbecker @ 2013-01-23 15:58 UTC (permalink / raw)
  To: Stanislaw Gruszka
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Oleg Nesterov, akpm,
	Steven Rostedt

2013/1/10 Stanislaw Gruszka <sgruszka@redhat.com>:
> On Wed, Jan 09, 2013 at 07:33:03PM +0100, Frederic Weisbecker wrote:
>> On Mon, Jan 07, 2013 at 12:31:45PM +0100, Stanislaw Gruszka wrote:
>> > We scale stime and utime values based on rtime (sum_exec_runtime converted
>> > to jiffies). During scaling we multiply rtime * utime, which seems to be
>> > fine since both values are converted to u64, but it is not.
>> >
>> > Let's assume HZ is 1000, i.e. a 1ms tick. A process consists of 64 threads,
>> > runs for 1 day, and its threads utilize 100% CPU in user space. The machine
>> > has 64 CPUs.
>> >
>> > The process rtime = utime will be 64 * 24 * 60 * 60 * 1000 jiffies, which is
>> > 0x149970000. The multiplication rtime * utime results in 0x1a855771100000000,
>> > which cannot be represented in 64 bits.
>> >
>> > The result of the overflow is a stall of the utime values visible in user
>> > space (prev_utime in the kernel), even if the application still consumes a
>> > lot of CPU time.
>> >
>> > Probably a good fix for the problem would be using a 128-bit variable and
>> > proper mul128 and div_u128_u64 primitives. While mul128 is on its way into
>> > the kernel, there is no 128-bit division yet, and I'm not sure we want to
>> > add one. Perhaps we could also change the way stime and utime are
>> > calculated, but I don't know how, so I came up with the solution below.
>> >
>> > To avoid the overflow, the patch changes the value we scale to
>> > min(stime, utime). This is more of a workaround, but it will work for
>> > processes which run mostly in user space or mostly in kernel space.
>> > Unfortunately, processes which split their time equally between kernel and
>> > user space, and additionally utilize a lot of CPU time, will still hit this
>> > overflow pretty quickly: with utime ~= stime the multiplied value is about
>> > rtime / 2, so the product exceeds 2^64 once rtime passes roughly 6e9
>> > jiffies, i.e. about 70 days of CPU time (just over a day of wall clock on
>> > the 64-CPU example above). However, such processes seem to be uncommon.
>> >
>> > Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
>>
>> I can easily imagine that overflow happening with user time on intensive
>> CPU-bound loads, or maybe with guests.
>>
>> But can we easily reach the same for system time? Even on intensive
>> I/O-bound loads we shouldn't spend that much time in the kernel. Most of
>> it probably goes to idle.
>>
>> What do you think?
>
> I think you are right :-)
>
>> If that assumption is right in most cases, the following patch should solve the
>> issue:
>
> I'm fine with this patch; it achieves the same effect as my patch, but is simpler.

Cool! So I can add your acked-by, right? I'll send this patch to Ingo soon.

Thanks!


* Re: [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases)
  2013-01-09 18:33 ` Frederic Weisbecker
  2013-01-10 12:26   ` Stanislaw Gruszka
@ 2013-01-24  1:46   ` Xiaotian Feng
  2013-01-24 16:58     ` Frederic Weisbecker
  1 sibling, 1 reply; 7+ messages in thread
From: Xiaotian Feng @ 2013-01-24  1:46 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Stanislaw Gruszka, Peter Zijlstra, Ingo Molnar, linux-kernel,
	Oleg Nesterov, akpm, Steven Rostedt

On Thu, Jan 10, 2013 at 2:33 AM, Frederic Weisbecker <fweisbec@gmail.com> wrote:
> On Mon, Jan 07, 2013 at 12:31:45PM +0100, Stanislaw Gruszka wrote:
>> We scale stime and utime values based on rtime (sum_exec_runtime converted
>> to jiffies). During scaling we multiply rtime * utime, which seems to be
>> fine since both values are converted to u64, but it is not.
>>
>> Let's assume HZ is 1000, i.e. a 1ms tick. A process consists of 64 threads,
>> runs for 1 day, and its threads utilize 100% CPU in user space. The machine
>> has 64 CPUs.
>>
>> The process rtime = utime will be 64 * 24 * 60 * 60 * 1000 jiffies, which is
>> 0x149970000. The multiplication rtime * utime results in 0x1a855771100000000,
>> which cannot be represented in 64 bits.
>>
>> The result of the overflow is a stall of the utime values visible in user
>> space (prev_utime in the kernel), even if the application still consumes a
>> lot of CPU time.
>>
>> Probably a good fix for the problem would be using a 128-bit variable and
>> proper mul128 and div_u128_u64 primitives. While mul128 is on its way into
>> the kernel, there is no 128-bit division yet, and I'm not sure we want to
>> add one. Perhaps we could also change the way stime and utime are
>> calculated, but I don't know how, so I came up with the solution below.
>>
>> To avoid the overflow, the patch changes the value we scale to
>> min(stime, utime). This is more of a workaround, but it will work for
>> processes which run mostly in user space or mostly in kernel space.
>> Unfortunately, processes which split their time equally between kernel and
>> user space, and additionally utilize a lot of CPU time, will still hit this
>> overflow pretty quickly: with utime ~= stime the multiplied value is about
>> rtime / 2, so the product exceeds 2^64 once rtime passes roughly 6e9
>> jiffies, i.e. about 70 days of CPU time (just over a day of wall clock on
>> the 64-CPU example above). However, such processes seem to be uncommon.
>>
>> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
>
> I can easily imagine that overflow happening with user time on intensive
> CPU-bound loads, or maybe with guests.
>
> But can we easily reach the same for system time? Even on intensive
> I/O-bound loads we shouldn't spend that much time in the kernel. Most of it
> probably goes to idle.
>
> What do you think?
>
> If that assumption is right in most cases, the following patch should solve the
> issue:
>
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index 293b202..0650dd4 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -509,11 +509,11 @@ EXPORT_SYMBOL_GPL(vtime_account);
>  # define nsecs_to_cputime(__nsecs)     nsecs_to_jiffies(__nsecs)
>  #endif
>
> -static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
> +static cputime_t scale_utime(cputime_t stime, cputime_t rtime, cputime_t total)

s/scale_utime/scale_stime.

>  {
>         u64 temp = (__force u64) rtime;
>
> -       temp *= (__force u64) utime;
> +       temp *= (__force u64) stime;
>
>         if (sizeof(cputime_t) == 4)
>                 temp = div_u64(temp, (__force u32) total);
> @@ -531,10 +531,10 @@ static void cputime_adjust(struct task_cputime *curr,
>                            struct cputime *prev,
>                            cputime_t *ut, cputime_t *st)
>  {
> -       cputime_t rtime, utime, total;
> +       cputime_t rtime, stime, total;
>
> -       utime = curr->utime;
> -       total = utime + curr->stime;
> +       stime = curr->stime;
> +       total = stime + curr->utime;
>
>         /*
>          * Tick based cputime accounting depend on random scheduling
> @@ -549,17 +549,17 @@ static void cputime_adjust(struct task_cputime *curr,
>         rtime = nsecs_to_cputime(curr->sum_exec_runtime);
>
>         if (total)
> -               utime = scale_utime(utime, rtime, total);
> +               stime = scale_stime(stime, rtime, total);
>         else
> -               utime = rtime;
> +               stime = rtime;
>
>         /*
>          * If the tick based count grows faster than the scheduler one,
>          * the result of the scaling may go backward.
>          * Let's enforce monotonicity.
>          */
> -       prev->utime = max(prev->utime, utime);
> -       prev->stime = max(prev->stime, rtime - prev->utime);
> +       prev->stime = max(prev->stime, stime);
> +       prev->utime = max(prev->utime, rtime - prev->stime);
>
>         *ut = prev->utime;
>         *st = prev->stime;


* Re: [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases)
  2013-01-23 15:58     ` Frederic Weisbecker
@ 2013-01-24 11:05       ` Stanislaw Gruszka
  0 siblings, 0 replies; 7+ messages in thread
From: Stanislaw Gruszka @ 2013-01-24 11:05 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Oleg Nesterov, akpm,
	Steven Rostedt

On Wed, Jan 23, 2013 at 04:58:14PM +0100, Frederic Weisbecker wrote:
> Cool! So I can add your acked-by, right? I'll send this patch to Ingo soon.

Sure.

Acked-by: Stanislaw Gruszka <sgruszka@redhat.com>

Thanks
Stanislaw


* Re: [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases)
  2013-01-24  1:46   ` Xiaotian Feng
@ 2013-01-24 16:58     ` Frederic Weisbecker
  0 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2013-01-24 16:58 UTC (permalink / raw)
  To: Xiaotian Feng
  Cc: Stanislaw Gruszka, Peter Zijlstra, Ingo Molnar, linux-kernel,
	Oleg Nesterov, akpm, Steven Rostedt

2013/1/24 Xiaotian Feng <xtfeng@gmail.com>:
> On Thu, Jan 10, 2013 at 2:33 AM, Frederic Weisbecker <fweisbec@gmail.com> wrote:
>> --- a/kernel/sched/cputime.c
>> +++ b/kernel/sched/cputime.c
>> @@ -509,11 +509,11 @@ EXPORT_SYMBOL_GPL(vtime_account);
>>  # define nsecs_to_cputime(__nsecs)     nsecs_to_jiffies(__nsecs)
>>  #endif
>>
>> -static cputime_t scale_utime(cputime_t utime, cputime_t rtime, cputime_t total)
>> +static cputime_t scale_utime(cputime_t stime, cputime_t rtime, cputime_t total)
>
> s/scale_utime/scale_stime.

Whoops! :-)

Thanks, I'm fixing this.


Thread overview: 7+ messages
2013-01-07 11:31 [PATCH v2 repost] sched: cputime: avoid multiplication overflow (in common cases) Stanislaw Gruszka
2013-01-09 18:33 ` Frederic Weisbecker
2013-01-10 12:26   ` Stanislaw Gruszka
2013-01-23 15:58     ` Frederic Weisbecker
2013-01-24 11:05       ` Stanislaw Gruszka
2013-01-24  1:46   ` Xiaotian Feng
2013-01-24 16:58     ` Frederic Weisbecker
