linux-kernel.vger.kernel.org archive mirror
* sched/cputime: sig->prev_stime underflow
@ 2013-04-04 17:40 Dave Hansen
  2013-04-04 23:41 ` Frederic Weisbecker
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Hansen @ 2013-04-04 17:40 UTC (permalink / raw)
  To: linux-kernel, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto

With the 3.9-rcs (and probably much earlier) I'm seeing some weird top
output where the cpu time "spent" is millions of hours:

445 root      20   0     0    0    0 S    0  0.0  5124095h kworker/45:1
404 root      20   0     0    0    0 S    0  0.0  5124095h kworker/4:1

I see it mostly with kernel threads, but it doesn't seem to happen on my
distro kernel (3.5 era).  The suspect code is in thread_group_times():

	sig->prev_stime = max(sig->prev_stime, rtime - sig->prev_utime);

In my case, I caught it with rtime=34 and sig->prev_utime=35.  This code
_looks_ to be pretty mature, coming in at commit 0cf55e1e in 2009.  The
system I'm running on _does_ have some non-sync'd TSCs, but they are at
least being detected, so I expect the fallout to be minimal:

	tsc: Marking TSC unstable due to check_tsc_sync_source failed

config:

	http://sr71.net/~dave/linux/config-bigbox-04042013.txt

The dumb fix here would seem to be to just check "rtime <
sig->prev_utime".  Any thoughts?


* Re: sched/cputime: sig->prev_stime underflow
  2013-04-04 17:40 sched/cputime: sig->prev_stime underflow Dave Hansen
@ 2013-04-04 23:41 ` Frederic Weisbecker
  2013-04-08 15:57   ` Dave Hansen
  0 siblings, 1 reply; 10+ messages in thread
From: Frederic Weisbecker @ 2013-04-04 23:41 UTC (permalink / raw)
  To: Dave Hansen
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto, Stanislaw Gruszka

2013/4/4 Dave Hansen <dave@sr71.net>:
> With the 3.9-rcs (and probably much earlier) I'm seeing some weird top
> output where the cpu time "spent" is millions of hours:
>
> 445 root      20   0     0    0    0 S    0  0.0  5124095h kworker/45:1
> 404 root      20   0     0    0    0 S    0  0.0  5124095h kworker/4:1
>
> I see it mostly with kernel threads, but it doesn't seem to happen on my
> distro kernel (3.5 era).  The suspect code is in thread_group_times():
>
>         sig->prev_stime = max(sig->prev_stime, rtime - sig->prev_utime);
>
> In my case, I caught it with rtime=34 and sig->prev_utime=35.  This code
> _looks_ to be pretty mature, coming in at commit 0cf55e1e in 2009.  The
> system I'm running on _does_ have some non-sync'd TSCs, but they are at
> least being detected, so I expect the fallout to be minimal:
>
>         tsc: Marking TSC unstable due to check_tsc_sync_source failed
>
> config:
>
>         http://sr71.net/~dave/linux/config-bigbox-04042013.txt
>
> The dumb fix here would seem to be to just check "rtime <
> sig->prev_utime".  Any thoughts?

Hi Dave,

Does this patch fix the issue for you?
https://lkml.org/lkml/2013/4/4/112


* Re: sched/cputime: sig->prev_stime underflow
  2013-04-04 23:41 ` Frederic Weisbecker
@ 2013-04-08 15:57   ` Dave Hansen
  2013-04-11  7:45     ` Stanislaw Gruszka
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Hansen @ 2013-04-08 15:57 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto, Stanislaw Gruszka

On 04/04/2013 04:41 PM, Frederic Weisbecker wrote:
> Does this patch fix the issue for you?
> https://lkml.org/lkml/2013/4/4/112

Nope, that doesn't seem to make a difference.  I'm still seeing the
underflow.  I'm pretty sure it's already gone to hell by the time it
gets into the loop that's patched there.




* Re: sched/cputime: sig->prev_stime underflow
  2013-04-08 15:57   ` Dave Hansen
@ 2013-04-11  7:45     ` Stanislaw Gruszka
  2013-04-11 18:47       ` Dave Hansen
  0 siblings, 1 reply; 10+ messages in thread
From: Stanislaw Gruszka @ 2013-04-11  7:45 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Frederic Weisbecker, LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto

On Mon, Apr 08, 2013 at 08:57:16AM -0700, Dave Hansen wrote:
> On 04/04/2013 04:41 PM, Frederic Weisbecker wrote:
> > Does this patch fix the issue for you?
> > https://lkml.org/lkml/2013/4/4/112
> 
> Nope, that doesn't seem to make a difference.  I'm still seeing the
> underflow.  I'm pretty sure it's already gone to hell by the time it
> gets into the loop that's patched there.

Perhaps this is a glitch introduced by commit
62188451f0d63add7ad0cd2a1ae269d600c1663d ("cputime: Avoid multiplication
overflow on utime scaling"). Could you try reverting it and see if that
helps? If it does not, can you check whether the problem happens on 3.8?

Stanislaw


* Re: sched/cputime: sig->prev_stime underflow
  2013-04-11  7:45     ` Stanislaw Gruszka
@ 2013-04-11 18:47       ` Dave Hansen
  2013-04-16 11:06         ` Stanislaw Gruszka
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Hansen @ 2013-04-11 18:47 UTC (permalink / raw)
  To: Stanislaw Gruszka
  Cc: Frederic Weisbecker, LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto

On 04/11/2013 12:45 AM, Stanislaw Gruszka wrote:
> On Mon, Apr 08, 2013 at 08:57:16AM -0700, Dave Hansen wrote:
>> On 04/04/2013 04:41 PM, Frederic Weisbecker wrote:
>>> Does this patch fix the issue for you?
>>> https://lkml.org/lkml/2013/4/4/112
>>
>> Nope, that doesn't seem to make a difference.  I'm still seeing the
>> underflow.  I'm pretty sure it's already gone to hell by the time it
>> gets into the loop that's patched there.
> 
> Perhaps this is a glitch introduced by commit
> 62188451f0d63add7ad0cd2a1ae269d600c1663d ("cputime: Avoid multiplication
> overflow on utime scaling"). Could you try reverting it and see if that
> helps? If it does not, can you check whether the problem happens on 3.8?

I'll run a bit longer, but reverting
62188451f0d63add7ad0cd2a1ae269d600c1663d _does_ seem to make it happier.



* Re: sched/cputime: sig->prev_stime underflow
  2013-04-11 18:47       ` Dave Hansen
@ 2013-04-16 11:06         ` Stanislaw Gruszka
  2013-04-16 21:05           ` Dave Hansen
  0 siblings, 1 reply; 10+ messages in thread
From: Stanislaw Gruszka @ 2013-04-16 11:06 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Frederic Weisbecker, LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto

[-- Attachment #1: Type: text/plain, Size: 1727 bytes --]

On Thu, Apr 11, 2013 at 11:47:37AM -0700, Dave Hansen wrote:
> On 04/11/2013 12:45 AM, Stanislaw Gruszka wrote:
> > On Mon, Apr 08, 2013 at 08:57:16AM -0700, Dave Hansen wrote:
> >> On 04/04/2013 04:41 PM, Frederic Weisbecker wrote:
> >>> Does this patch fix the issue for you?
> >>> https://lkml.org/lkml/2013/4/4/112
> >>
> >> Nope, that doesn't seem to make a difference.  I'm still seeing the
> >> underflow.  I'm pretty sure it's already gone to hell by the time it
> >> gets into the loop that's patched there.
> > 
> > Perhaps this is a glitch introduced by commit
> > 62188451f0d63add7ad0cd2a1ae269d600c1663d ("cputime: Avoid multiplication
> > overflow on utime scaling"). Could you try reverting it and see if that
> > helps? If it does not, can you check whether the problem happens on 3.8?
> 
> I'll run a bit longer, but reverting
> 62188451f0d63add7ad0cd2a1ae269d600c1663d _does_ seem to make it happier.

Hmm, I suspected a glitch in this commit because it is something that
was changed recently, and previously we did not have such bug reports.
But I do not see a mistake there. Basically prev->stime should never be
bigger than rtime, since at any moment stime <= rtime and the previous
rtime <= the current rtime. So this looks like rtime decreasing, and
for some reason before 62188451f0d that did not manifest as a bug.

It can be fixed by comparing prev->stime against rtime, or by something
like the first attached patch. I'm not sure which is better.

But since this is (most likely) an rtime monotonicity problem, it is a
bug by itself and should probably be fixed. Can you try the second
attached patch and see if it triggers the warning?

Note the patches are not tested; I expect you'll fix them if they do
not do what's expected :-)

Thanks
Stanislaw

[-- Attachment #2: cputime_adjust_modify.patch --]
[-- Type: text/plain, Size: 1077 bytes --]

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index ed12cbb..63f2358 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -543,7 +543,7 @@ static void cputime_adjust(struct task_cputime *curr,
 			   struct cputime *prev,
 			   cputime_t *ut, cputime_t *st)
 {
-	cputime_t rtime, stime, total;
+	cputime_t rtime, stime, utime, total;
 
 	stime = curr->stime;
 	total = stime + curr->utime;
@@ -560,10 +560,13 @@ static void cputime_adjust(struct task_cputime *curr,
 	 */
 	rtime = nsecs_to_cputime(curr->sum_exec_runtime);
 
-	if (total)
+	if (total) {
 		stime = scale_stime(stime, rtime, total);
-	else
+		utime = rtime - stime;
+	} else {
 		stime = rtime;
+		utime = 0;
+	}
 
 	/*
 	 * If the tick based count grows faster than the scheduler one,
@@ -571,7 +574,7 @@ static void cputime_adjust(struct task_cputime *curr,
 	 * Let's enforce monotonicity.
 	 */
 	prev->stime = max(prev->stime, stime);
-	prev->utime = max(prev->utime, rtime - prev->stime);
+	prev->utime = max(prev->utime, utime);
 
 	*ut = prev->utime;
 	*st = prev->stime;
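
To illustrate what the change buys, a walk-through with made-up numbers
(assuming rtime really does go backwards between two calls, which the
second attached patch tries to catch):

	/*
	 * Call 1: rtime = 100, scaled stime = 60
	 *         prev->stime = max(0, 60)       = 60
	 *         prev->utime = max(0, 100 - 60) = 40   (both versions agree)
	 *
	 * Call 2: rtime = 50 (decreased!), scaled stime = 30
	 *   old:  prev->stime = max(60, 30) = 60
	 *         prev->utime = max(40, 50 - 60)  <- unsigned 50 - 60 wraps
	 *                                            to a huge value
	 *   new:  utime = 50 - 30 = 20             <- cannot wrap: the freshly
	 *                                            scaled stime is <= rtime
	 *                                            by construction
	 *         prev->utime = max(40, 20) = 40
	 */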

[-- Attachment #3: check_rtime.patch --]
[-- Type: text/plain, Size: 1117 bytes --]

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d35d2b6..3f1c088 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -418,6 +418,7 @@ struct cpu_itimer {
 struct cputime {
 	cputime_t utime;
 	cputime_t stime;
+	cputime_t rtime;
 };
 
 /**
diff --git a/kernel/fork.c b/kernel/fork.c
index 1766d32..d67c219 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1235,6 +1235,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
 	p->utimescaled = p->stimescaled = 0;
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING
 	p->prev_cputime.utime = p->prev_cputime.stime = 0;
+	p->prev_cputime.rtime = 0;
 #endif
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN
 	seqlock_init(&p->vtime_seqlock);
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index ed12cbb..146e5f1 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -560,6 +560,9 @@ static void cputime_adjust(struct task_cputime *curr,
 	 */
 	rtime = nsecs_to_cputime(curr->sum_exec_runtime);
 
+	WARN_ON(prev->rtime > rtime);
+	prev->rtime = rtime;
+
 	if (total)
 		stime = scale_stime(stime, rtime, total);
 	else


* Re: sched/cputime: sig->prev_stime underflow
  2013-04-16 11:06         ` Stanislaw Gruszka
@ 2013-04-16 21:05           ` Dave Hansen
  2013-04-30 12:36             ` Stanislaw Gruszka
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Hansen @ 2013-04-16 21:05 UTC (permalink / raw)
  To: Stanislaw Gruszka
  Cc: Frederic Weisbecker, LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto

On 04/16/2013 04:06 AM, Stanislaw Gruszka wrote:
> On Thu, Apr 11, 2013 at 11:47:37AM -0700, Dave Hansen wrote:
>> On 04/11/2013 12:45 AM, Stanislaw Gruszka wrote:
>>> On Mon, Apr 08, 2013 at 08:57:16AM -0700, Dave Hansen wrote:
>>>> On 04/04/2013 04:41 PM, Frederic Weisbecker wrote:
>>>>> Does this patch fix the issue for you?
>>>>> https://lkml.org/lkml/2013/4/4/112
>>>>
>>>> Nope, that doesn't seem to make a difference.  I'm still seeing the
>>>> underflow.  I'm pretty sure it's already gone to hell by the time it
>>>> gets into the loop that's patched there.
>>>
> >>> Perhaps this is a glitch introduced by commit
> >>> 62188451f0d63add7ad0cd2a1ae269d600c1663d ("cputime: Avoid multiplication
> >>> overflow on utime scaling"). Could you try reverting it and see if that
> >>> helps? If it does not, can you check whether the problem happens on 3.8?
>>
>> I'll run a bit longer, but reverting
>> 62188451f0d63add7ad0cd2a1ae269d600c1663d _does_ seem to make it happier.
> 
> Hmm, I suspected a glitch in this commit because it is something that
> was changed recently, and previously we did not have such bug reports.
> But I do not see a mistake there. Basically prev->stime should never be
> bigger than rtime, since at any moment stime <= rtime and the previous
> rtime <= the current rtime. So this looks like rtime decreasing, and
> for some reason before 62188451f0d that did not manifest as a bug.
> 
> It can be fixed by comparing prev->stime against rtime, or by something
> like the first attached patch. I'm not sure which is better.

The first patch does appear to fix the issue for me.

> But since this is (most likely) an rtime monotonicity problem, it is a
> bug by itself and should probably be fixed. Can you try the second
> attached patch and see if it triggers the warning?

Yup, it triggers lots of warnings.  Here's one.

> [  460.789710] ------------[ cut here ]------------
> [  460.789730] WARNING: at /home/davehans/linux.git/kernel/sched/cputime.c:563 cputime_adjust+0xe6/0xf0()
> [  460.789736] Hardware name: PRIMEQUEST 1800E2
> [  460.789739] Modules linked in:
> [  460.789745] Pid: 4245, comm: top Tainted: G        W    3.9.0-rc7-00004-gbb33db7-dirty #19
> [  460.789748] Call Trace:
> [  460.789777]  [<ffffffff810c6dc7>] warn_slowpath_common+0xb7/0x120
> [  460.789783]  [<ffffffff810c6e5a>] warn_slowpath_null+0x2a/0x40
> [  460.789789]  [<ffffffff8111a8d6>] cputime_adjust+0xe6/0xf0
> [  460.789795]  [<ffffffff8111b755>] ? thread_group_cputime+0x5/0x140
> [  460.789800]  [<ffffffff8111bbb9>] thread_group_cputime_adjusted+0x59/0x70
> [  460.789810]  [<ffffffff813179b5>] do_task_stat+0xc35/0xef0
> [  460.789821]  [<ffffffff8114306f>] ? __lock_acquire+0x3ff/0x1010
> [  460.789827]  [<ffffffff8114048d>] ? lock_release_holdtime+0x4d/0x270
> [  460.789832]  [<ffffffff81318a0c>] proc_tgid_stat+0x1c/0x30
> [  460.789841]  [<ffffffff813117db>] proc_single_show+0x7b/0xf0
> [  460.789848]  [<ffffffff812ace19>] seq_read+0x139/0x620
> [  460.789857]  [<ffffffff812894ae>] ? final_putname+0x2e/0x80
> [  460.789866]  [<ffffffff81278e4b>] vfs_read+0xdb/0x230
> [  460.789871]  [<ffffffff81279221>] sys_read+0x81/0xf0
> [  460.789881]  [<ffffffff819b3542>] system_call_fastpath+0x16/0x1b
> [  460.789883] ---[ end trace fbba451b40d2c6c7 ]---



* Re: sched/cputime: sig->prev_stime underflow
  2013-04-16 21:05           ` Dave Hansen
@ 2013-04-30 12:36             ` Stanislaw Gruszka
  2013-05-02 16:05               ` Dave Hansen
  0 siblings, 1 reply; 10+ messages in thread
From: Stanislaw Gruszka @ 2013-04-30 12:36 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Frederic Weisbecker, LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto

[-- Attachment #1: Type: text/plain, Size: 1247 bytes --]

On Tue, Apr 16, 2013 at 02:05:22PM -0700, Dave Hansen wrote:
> > But since this is (most likely) an rtime monotonicity problem, it is a
> > bug by itself and should probably be fixed. Can you try the second
> > attached patch and see if it triggers the warning?
> 
> Yup, it triggers lots of warnings.  Here's one.
> 
> > [  460.789710] ------------[ cut here ]------------
> > [  460.789730] WARNING: at /home/davehans/linux.git/kernel/sched/cputime.c:563 cputime_adjust+0xe6/0xf0()
> > [  460.789736] Hardware name: PRIMEQUEST 1800E2
> > [  460.789739] Modules linked in:
> > [  460.789745] Pid: 4245, comm: top Tainted: G        W    3.9.0-rc7-00004-gbb33db7-dirty #19
> > [  460.789748] Call Trace:
> > [  460.789777]  [<ffffffff810c6dc7>] warn_slowpath_common+0xb7/0x120
> > [  460.789783]  [<ffffffff810c6e5a>] warn_slowpath_null+0x2a/0x40
> > [  460.789789]  [<ffffffff8111a8d6>] cputime_adjust+0xe6/0xf0
> > [  460.789795]  [<ffffffff8111b755>] ? thread_group_cputime+0x5/0x140
> > [  460.789800]  [<ffffffff8111bbb9>] thread_group_cputime_adjusted+0x59/0x70

Dave, could you test the attached patch and check whether it makes the
warnings go away? It is not the right fix, but if it prevents the
warnings, that will give a clue as to where the problem is.

Stanislaw

[-- Attachment #2: stop_use_task_sched_runtime.patch --]
[-- Type: text/plain, Size: 501 bytes --]

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index e93cca9..057faa8 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -313,7 +313,7 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
 		task_cputime(t, &utime, &stime);
 		times->utime += utime;
 		times->stime += stime;
-		times->sum_exec_runtime += task_sched_runtime(t);
+		times->sum_exec_runtime += t->se.sum_exec_runtime;
 	} while_each_thread(tsk, t);
 out:
 	rcu_read_unlock();
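
For reference, a rough model of the difference (my reading of the
3.9-era code, much simplified): task_sched_runtime() returns the stored
accumulator plus the delta the thread has run since its last accounting
update, sampled from the current runqueue clock, while
t->se.sum_exec_runtime is the bare accumulator:

	/* Toy model, not the kernel code; names mirror the kernel's but
	 * the logic is reduced to the point being made. */
	struct thread_acct {
		unsigned long long sum_exec_runtime; /* bumped at accounting points */
		unsigned long long exec_start;       /* rq clock when it last ran */
	};

	/* task_sched_runtime()-style read: accumulator plus in-flight
	 * delta.  The delta is measured against the clock of whichever
	 * CPU the thread runs on, so with unsynchronized TSCs two
	 * successive whole-group sums can disagree and even go backwards. */
	static unsigned long long runtime_precise(struct thread_acct *t,
						  unsigned long long rq_clock_now)
	{
		return t->sum_exec_runtime + (rq_clock_now - t->exec_start);
	}

	/* Patched read: just the accumulator.  It may lag slightly, but
	 * it only ever grows, so the summed rtime stays monotonic. */
	static unsigned long long runtime_raw(struct thread_acct *t)
	{
		return t->sum_exec_runtime;
	}

If the warnings disappear with this applied, that would point at the
in-flight delta (and the unsynchronized TSCs on your box) as the
source of the non-monotonic rtime.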


* Re: sched/cputime: sig->prev_stime underflow
  2013-04-30 12:36             ` Stanislaw Gruszka
@ 2013-05-02 16:05               ` Dave Hansen
  2013-05-03 11:00                 ` Stanislaw Gruszka
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Hansen @ 2013-05-02 16:05 UTC (permalink / raw)
  To: Stanislaw Gruszka
  Cc: Frederic Weisbecker, LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto

On 04/30/2013 05:36 AM, Stanislaw Gruszka wrote:
> On Tue, Apr 16, 2013 at 02:05:22PM -0700, Dave Hansen wrote:
>>> But since this is (most likely) an rtime monotonicity problem, it is a
>>> bug by itself and should probably be fixed. Can you try the second
>>> attached patch and see if it triggers the warning?
>>
>> Yup, it triggers lots of warnings.  Here's one.
>>
>>> [  460.789710] ------------[ cut here ]------------
>>> [  460.789730] WARNING: at /home/davehans/linux.git/kernel/sched/cputime.c:563 cputime_adjust+0xe6/0xf0()
>>> [  460.789736] Hardware name: PRIMEQUEST 1800E2
>>> [  460.789739] Modules linked in:
>>> [  460.789745] Pid: 4245, comm: top Tainted: G        W    3.9.0-rc7-00004-gbb33db7-dirty #19
>>> [  460.789748] Call Trace:
>>> [  460.789777]  [<ffffffff810c6dc7>] warn_slowpath_common+0xb7/0x120
>>> [  460.789783]  [<ffffffff810c6e5a>] warn_slowpath_null+0x2a/0x40
>>> [  460.789789]  [<ffffffff8111a8d6>] cputime_adjust+0xe6/0xf0
>>> [  460.789795]  [<ffffffff8111b755>] ? thread_group_cputime+0x5/0x140
>>> [  460.789800]  [<ffffffff8111bbb9>] thread_group_cputime_adjusted+0x59/0x70
> 
> Dave, could you test the attached patch and check whether it makes the
> warnings go away? It is not the right fix, but if it prevents the
> warnings, that will give a clue as to where the problem is.

Does this have any relation to the other four -tip patches you also
posted?  What do you want this applied on top of?



* Re: sched/cputime: sig->prev_stime underflow
  2013-05-02 16:05               ` Dave Hansen
@ 2013-05-03 11:00                 ` Stanislaw Gruszka
  0 siblings, 0 replies; 10+ messages in thread
From: Stanislaw Gruszka @ 2013-05-03 11:00 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Frederic Weisbecker, LKML, Ingo Molnar, Peter Zijlstra, Hidetoshi Seto

On Thu, May 02, 2013 at 09:05:28AM -0700, Dave Hansen wrote:
> On 04/30/2013 05:36 AM, Stanislaw Gruszka wrote:
> > On Tue, Apr 16, 2013 at 02:05:22PM -0700, Dave Hansen wrote:
> >>> But since this is (most likely) an rtime monotonicity problem, it is a
> >>> bug by itself and should probably be fixed. Can you try the second
> >>> attached patch and see if it triggers the warning?
> >>
> >> Yup, it triggers lots of warnings.  Here's one.
> >>
> >>> [  460.789710] ------------[ cut here ]------------
> >>> [  460.789730] WARNING: at /home/davehans/linux.git/kernel/sched/cputime.c:563 cputime_adjust+0xe6/0xf0()
> >>> [  460.789736] Hardware name: PRIMEQUEST 1800E2
> >>> [  460.789739] Modules linked in:
> >>> [  460.789745] Pid: 4245, comm: top Tainted: G        W    3.9.0-rc7-00004-gbb33db7-dirty #19
> >>> [  460.789748] Call Trace:
> >>> [  460.789777]  [<ffffffff810c6dc7>] warn_slowpath_common+0xb7/0x120
> >>> [  460.789783]  [<ffffffff810c6e5a>] warn_slowpath_null+0x2a/0x40
> >>> [  460.789789]  [<ffffffff8111a8d6>] cputime_adjust+0xe6/0xf0
> >>> [  460.789795]  [<ffffffff8111b755>] ? thread_group_cputime+0x5/0x140
> >>> [  460.789800]  [<ffffffff8111bbb9>] thread_group_cputime_adjusted+0x59/0x70
> > 
> > Dave, could you test the attached patch and check whether it makes the
> > warnings go away? It is not the right fix, but if it prevents the
> > warnings, that will give a clue as to where the problem is.
> 
> Does this have any relation to the other four -tip patches you also
> posted?  What do you want this applied on top of?

No, they are not related. But if you apply the patch "sched: Avoid
prev->stime underflow", you will also need the previous additional
patch, which WARNs if rtime goes backwards, so you can detect that
condition and see whether it is fixed by the times->sum_exec_runtime
calculation change.

Stanislaw



Thread overview: 10+ messages
2013-04-04 17:40 sched/cputime: sig->prev_stime underflow Dave Hansen
2013-04-04 23:41 ` Frederic Weisbecker
2013-04-08 15:57   ` Dave Hansen
2013-04-11  7:45     ` Stanislaw Gruszka
2013-04-11 18:47       ` Dave Hansen
2013-04-16 11:06         ` Stanislaw Gruszka
2013-04-16 21:05           ` Dave Hansen
2013-04-30 12:36             ` Stanislaw Gruszka
2013-05-02 16:05               ` Dave Hansen
2013-05-03 11:00                 ` Stanislaw Gruszka
