* [RFC] iowait/idle time accounting hiccups in NOHZ kernels
       [not found]   ` <201301180857.r0I8vK7c052791@www262.sakura.ne.jp>
@ 2013-03-19  2:38     ` Fernando Luis Vázquez Cao
  2013-04-01 13:05       ` Tetsuo Handa
  0 siblings, 1 reply; 9+ messages in thread
From: Fernando Luis Vázquez Cao @ 2013-03-19  2:38 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Tetsuo Handa, linux-kernel, Frederic Weisbecker

(Moving discussion to LKML)

Hi Thomas, Frederic,

Tetsuo Handa reported that the iowait time obtained through /proc/stat
is not monotonic.

The reason is that get_cpu_iowait_time_us() is inherently racy;
->idle_entrytime and ->iowait_sleeptime can be updated from another
CPU (via update_ts_time_stats()) during the delta and iowait time
calculations and the "now" values used by the racing CPUs are not
necessarily ordered.

The patch below fixes the problem that the delta becomes negative, but
this is not enough. Fixing the whole problem properly may require some
major plumbing so I would like to know your take on this before going
ahead.

Thanks,
Fernando

---

diff -urNp linux-3.9-rc3-orig/kernel/time/tick-sched.c linux-3.9-rc3/kernel/time/tick-sched.c
--- linux-3.9-rc3-orig/kernel/time/tick-sched.c	2013-03-18 16:58:36.076335000 +0900
+++ linux-3.9-rc3/kernel/time/tick-sched.c	2013-03-19 10:57:32.729247000 +0900
@@ -292,18 +292,20 @@ EXPORT_SYMBOL_GPL(get_cpu_idle_time_us);
 u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time)
 {
 	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
-	ktime_t now, iowait;
+	ktime_t now, iowait, idle_entrytime;
 
 	if (!tick_nohz_enabled)
 		return -1;
 
+	idle_entrytime = ts->idle_entrytime;
+	smp_mb();
 	now = ktime_get();
 	if (last_update_time) {
 		update_ts_time_stats(cpu, ts, now, last_update_time);
 		iowait = ts->iowait_sleeptime;
 	} else {
 		if (ts->idle_active && nr_iowait_cpu(cpu) > 0) {
-			ktime_t delta = ktime_sub(now, ts->idle_entrytime);
+			ktime_t delta = ktime_sub(now, idle_entrytime);
 
 			iowait = ktime_add(ts->iowait_sleeptime, delta);
 		} else {


On Fri, 2013-01-18 at 17:57 +0900, Tetsuo Handa wrote:
> I forwarded this problem to Fernando.
> I think he will start discussion on how to fix this problem at the LKML.
>
> On Tue, 15 Jan 2013 13:14:38 +0100 (CET)
> Thomas Gleixner <tglx@linutronix.de> wrote:
>
> > On Tue, 15 Jan 2013, Tetsuo Handa wrote:
> >
> > > Hello.
> > >
> > > I can observe that get_cpu_iowait_time_us(cpu, NULL) sometime decreases,
> > > resulting in iowait field of cpu lines in /proc/stat decreasing.
> > > Is this a feature of tick_nohz_enabled == 1 ?
> >
> > It's definitely not a feature. Is that simple to observe or does it
> > require any special setup/workload ?
> >
> > Thanks,
> >
> > 	Thomas



^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC] iowait/idle time accounting hiccups in NOHZ kernels
  2013-03-19  2:38     ` [RFC] iowait/idle time accounting hiccups in NOHZ kernels Fernando Luis Vázquez Cao
@ 2013-04-01 13:05       ` Tetsuo Handa
  2013-04-23 12:45         ` [PATCH] proc: Add workaround for idle/iowait decreasing problem Tetsuo Handa
  0 siblings, 1 reply; 9+ messages in thread
From: Tetsuo Handa @ 2013-04-01 13:05 UTC (permalink / raw)
  To: tglx, fweisbec; +Cc: linux-kernel, fernando_b1

Fernando Luis Vázquez Cao wrote:
> (Moving discussion to LKML)
> 
> Hi Thomas, Frederic,
> 
> Tetsuo Handa reported that the iowait time obtained through /proc/stat
> is not monotonic.
> 

Hello.

The following are steps for observing this problem.
A machine with 4 CPUs is recommended, since it is difficult to
observe this problem with only 2 CPUs.



[Approach 1] Steps for observing this problem from user space.

Compile readstat.c, which produces output only when this problem occurs:

# cc -Wall -O2 -o readstat readstat.c

---------- readstat.c start ----------
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>

#define MAX_CPUS 17

int main(int argc, char *argv[])
{
	const int fd = open("/proc/stat", O_RDONLY);
	static unsigned long long prev[MAX_CPUS][10];
	static unsigned long long cur[10];
	static const char * const array[10] = {
		"user", "nice", "system", "idle", "iowait", "irq", "softirq",
		"steal", "guest", "guest_nice"
	};
	memset(prev, 0, sizeof(prev));
	while (1) {
		static char buffer[1048576];
		char *buf = buffer;
		int i = pread(fd, buffer, sizeof(buffer) - 1, 0);
		if (i <= 0)
			break;
		buffer[i] = '\0';
		for (i = 0; i < MAX_CPUS; i++) {
			char *eol = strchr(buf, '\n');
			int j;
			if (!eol)
				break;
			*eol++ = '\0';
			if (strncmp(buf, "cpu", 3))
				break;
			while (*buf && *buf != ' ')
				buf++;
			memset(cur, 0, sizeof(cur));
			sscanf(buf, "%llu %llu %llu %llu %llu %llu %llu %llu "
			       "%llu %llu", &cur[0], &cur[1], &cur[2], &cur[3],
			       &cur[4], &cur[5], &cur[6], &cur[7], &cur[8],
			       &cur[9]);
			for (j = 0; j < 10; j++) {
				if (prev[i][j] > cur[j])
					printf("cpu[%d].%-10s : "
					       "%llu -> %llu\n", i - 1,
					       array[j], prev[i][j], cur[j]);
				prev[i][j] = cur[j];
			}
			buf = eol;
		}
	}
	return 0;
}
---------- readstat.c end ----------

and run command 1 and command 2 in parallel using two consoles.
We assume that the file written by command 2 is located on a filesystem
backed by a disk device (e.g. /dev/sda ).

(Command 1) # ./readstat
(Command 2) # dd if=/dev/zero of=/tmp/file bs=10485760 count=1024

You will see output like the following.
Depending on the situation, cpu[x].idle lines may also be printed.

---------- readstat output start ----------
cpu[-1].iowait     : 72373 -> 72370
cpu[1].iowait     : 21146 -> 21143
cpu[-1].iowait     : 72402 -> 72399
cpu[3].iowait     : 21033 -> 21030
cpu[-1].iowait     : 72502 -> 72499
cpu[1].iowait     : 21196 -> 21193
cpu[-1].iowait     : 72532 -> 72529
cpu[3].iowait     : 21047 -> 21044
cpu[-1].iowait     : 72623 -> 72612
cpu[2].iowait     : 4220 -> 4209
cpu[-1].iowait     : 72696 -> 72680
cpu[1].iowait     : 21269 -> 21253
cpu[2].iowait     : 4227 -> 4226
cpu[1].iowait     : 21290 -> 21289
cpu[-1].iowait     : 72930 -> 72924
cpu[1].iowait     : 21325 -> 21320
---------- readstat output end ----------

If you don't have a C compiler installed, you may instead try

# watch -n 0.1 cat /proc/stat

for command 1, though it may be difficult to
confirm this problem since the output of watch is overwritten immediately.

If you don't get any output, you can try a different workload such as

# dd if=/dev/zero of=dump1 bs=1M count=1000 conv=sync & dd if=/dev/zero of=dump2 bs=1M count=1000 conv=sync &

for command 2.



[Approach 2] Steps for observing this problem from kernel space.

Place iowait/iowait.c and iowait/Makefile in the root directory of the
kernel source tree

---------- iowait/iowait.c start ----------
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/tick.h>

static int __init check_init(void)
{
	static u64 last_iowait[NR_CPUS];
	static u64 last_idle[NR_CPUS];
	while (!signal_pending(current)) {
		int i;
		for_each_online_cpu(i) {
			u64 now = get_cpu_iowait_time_us(i, NULL);
			if (last_iowait[i] > now)
				printk(KERN_WARNING "iowait(%d): %llu -> %llu\n", i, last_iowait[i], now);
			last_iowait[i] = now;
			now = get_cpu_idle_time_us(i, NULL);
			if (last_idle[i] > now)
				printk(KERN_WARNING "idle(%d): %llu -> %llu\n", i, last_idle[i], now);
			last_idle[i] = now;
		}
		cond_resched();
	}
	return -EINVAL;
}

module_init(check_init);
MODULE_LICENSE("GPL");
---------- iowait/iowait.c end ----------

---------- iowait/Makefile start ----------
obj-m += iowait.o
---------- iowait/Makefile end ----------

and build the iowait/iowait.ko module with

# make SUBDIRS=iowait modules

and run command 3 and command 4 in parallel using two consoles.

(Command 3) # insmod iowait/iowait.ko
(Command 4) # dd if=/dev/zero of=/tmp/file bs=10485760 count=1024

After the dd process started by command 4 terminates, kill the insmod process started
by command 3 with Ctrl-C and run the dmesg command. You will see output like the following.

---------- dmesg output start ----------
iowait(1): 356464315 -> 356464314
idle(3): 482443634 -> 482443633
idle(0): 396026944 -> 396026943
idle(0): 396028152 -> 396028151
idle(0): 396029290 -> 396029289
idle(1): 280770309 -> 280770308
iowait(3): 237239882 -> 237239881
iowait(3): 237240830 -> 237240829
idle(1): 280770416 -> 280770415
iowait(3): 237244241 -> 237244240
idle(1): 280770746 -> 280770745
iowait(0): 397554152 -> 397554151
iowait(0): 397556611 -> 397556610
iowait(0): 397558355 -> 397558354
idle(1): 280771891 -> 280771890
idle(0): 396039655 -> 396039654
---------- dmesg output end ----------



> The patch below fixes the problem that the delta becomes negative, but
> this is not enough. Fixing the whole problem properly may require some
> major plumbing so I would like to know your take on this before going
> ahead.

Any ideas on how to handle this problem?

Regards.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH] proc: Add workaround for idle/iowait decreasing problem.
  2013-04-01 13:05       ` Tetsuo Handa
@ 2013-04-23 12:45         ` Tetsuo Handa
  2013-04-28  0:49           ` Frederic Weisbecker
  0 siblings, 1 reply; 9+ messages in thread
From: Tetsuo Handa @ 2013-04-23 12:45 UTC (permalink / raw)
  To: tglx, fweisbec; +Cc: linux-kernel, linux-fsdevel, fernando_b1

CONFIG_NO_HZ=y can cause idle/iowait values to decrease.

If /proc/stat is monitored with a short interval (e.g. 1 or 2 secs) using
sysstat package, sar reports bogus %idle/iowait values because sar expects
that idle/iowait values do not decrease unless wraparound happens.

This patch makes idle/iowait values visible from /proc/stat increase
monotonically, with an assumption that we don't need to worry about
wraparound.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
 fs/proc/stat.c |   42 ++++++++++++++++++++++++++++++++++++++----
 1 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/fs/proc/stat.c b/fs/proc/stat.c
index e296572..9fff534 100644
--- a/fs/proc/stat.c
+++ b/fs/proc/stat.c
@@ -19,6 +19,40 @@
 #define arch_irq_stat() 0
 #endif
 
+/*
+ * CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
+ * Make sure that idle/iowait values visible from /proc/stat do not decrease.
+ */
+static inline u64 validate_iowait(u64 iowait, const int cpu)
+{
+#ifdef CONFIG_NO_HZ
+	static u64 max_iowait[NR_CPUS];
+	static DEFINE_SPINLOCK(lock);
+	spin_lock(&lock);
+	if (likely(iowait >= max_iowait[cpu]))
+		max_iowait[cpu] = iowait;
+	else
+		iowait = max_iowait[cpu];
+	spin_unlock(&lock);
+#endif
+	return iowait;
+}
+
+static inline u64 validate_idle(u64 idle, const int cpu)
+{
+#ifdef CONFIG_NO_HZ
+	static u64 max_idle[NR_CPUS];
+	static DEFINE_SPINLOCK(lock);
+	spin_lock(&lock);
+	if (likely(idle >= max_idle[cpu]))
+		max_idle[cpu] = idle;
+	else
+		idle = max_idle[cpu];
+	spin_unlock(&lock);
+#endif
+	return idle;
+}
+
 #ifdef arch_idle_time
 
 static cputime64_t get_idle_time(int cpu)
@@ -28,7 +62,7 @@ static cputime64_t get_idle_time(int cpu)
 	idle = kcpustat_cpu(cpu).cpustat[CPUTIME_IDLE];
 	if (cpu_online(cpu) && !nr_iowait_cpu(cpu))
 		idle += arch_idle_time(cpu);
-	return idle;
+	return validate_idle(idle, cpu);
 }
 
 static cputime64_t get_iowait_time(int cpu)
@@ -38,7 +72,7 @@ static cputime64_t get_iowait_time(int cpu)
 	iowait = kcpustat_cpu(cpu).cpustat[CPUTIME_IOWAIT];
 	if (cpu_online(cpu) && nr_iowait_cpu(cpu))
 		iowait += arch_idle_time(cpu);
-	return iowait;
+	return validate_iowait(iowait, cpu);
 }
 
 #else
@@ -56,7 +90,7 @@ static u64 get_idle_time(int cpu)
 	else
 		idle = usecs_to_cputime64(idle_time);
 
-	return idle;
+	return validate_idle(idle, cpu);
 }
 
 static u64 get_iowait_time(int cpu)
@@ -72,7 +106,7 @@ static u64 get_iowait_time(int cpu)
 	else
 		iowait = usecs_to_cputime64(iowait_time);
 
-	return iowait;
+	return validate_iowait(iowait, cpu);
 }
 
 #endif
-- 
1.7.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] proc: Add workaround for idle/iowait decreasing problem.
  2013-04-23 12:45         ` [PATCH] proc: Add workaround for idle/iowait decreasing problem Tetsuo Handa
@ 2013-04-28  0:49           ` Frederic Weisbecker
  2013-07-02  3:56             ` Fernando Luis Vazquez Cao
  0 siblings, 1 reply; 9+ messages in thread
From: Frederic Weisbecker @ 2013-04-28  0:49 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: tglx, linux-kernel, linux-fsdevel, fernando_b1, Ingo Molnar,
	Peter Zijlstra, Andrew Morton

On Tue, Apr 23, 2013 at 09:45:23PM +0900, Tetsuo Handa wrote:
> CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
> 
> If /proc/stat is monitored with a short interval (e.g. 1 or 2 secs) using
> sysstat package, sar reports bogus %idle/iowait values because sar expects
> that idle/iowait values do not decrease unless wraparound happens.
> 
> This patch makes idle/iowait values visible from /proc/stat increase
> monotonically, with an assumption that we don't need to worry about
> wraparound.
> 
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>

It's not clear in the changelog why you see non-monotonic idle/iowait values.

Looking at the previous patch from Fernando, it seems that's because we can
race with concurrent updates from the target CPU when it wakes up from idle?
(the stats could be updated by drivers/cpufreq/cpufreq_governor.c as well).

If so the bug has another symptom: we may also report a wrong iowait/idle time
by accounting the last idle time twice.

In this case we should fix the bug at its source, for example by enforcing
the following ordering:

= Write side =                          = Read side =

// tick_nohz_start_idle()
write_seqcount_begin(ts->seq)
ts->idle_entrytime = now
ts->idle_active = 1
write_seqcount_end(ts->seq)

// tick_nohz_stop_idle()
write_seqcount_begin(ts->seq)
ts->iowait_sleeptime += now - ts->idle_entrytime
t->idle_active = 0
write_seqcount_end(ts->seq)

                                        // get_cpu_iowait_time_us()
                                        do {
                                            seq = read_seqcount_begin(ts->seq)
                                            if (t->idle_active) {
                                                time = now - ts->idle_entrytime
                                                time += ts->iowait_sleeptime
                                            } else {
                                                time = ts->iowait_sleeptime
                                            }
                                        } while (read_seqcount_retry(ts->seq, seq));

Right? seqcount should be enough to make sure we are getting a consistent result.
I doubt we need harder locking.
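
For illustration only, a minimal C rendering of the read side sketched above
might look like this, assuming a seqcount_t field (here called "seq") were
added to struct tick_sched; the last_update_time / update_ts_time_stats()
path of the current code is omitted:

#include <linux/seqlock.h>
#include <linux/tick.h>

/* Sketch only: ts->seq is an assumed new field, not present in 3.9. */
u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time)
{
	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
	ktime_t now = ktime_get();
	ktime_t iowait;
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&ts->seq);
		if (ts->idle_active && nr_iowait_cpu(cpu) > 0) {
			/* An io-wait idle period is still running: add the pending delta. */
			ktime_t delta = ktime_sub(now, ts->idle_entrytime);

			iowait = ktime_add(ts->iowait_sleeptime, delta);
		} else {
			iowait = ts->iowait_sleeptime;
		}
	} while (read_seqcount_retry(&ts->seq, seq));

	return ktime_to_us(iowait);
}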

Another thing while at it. It seems that an update done from drivers/cpufreq/cpufreq_governor.c
(calling get_cpu_iowait_time_us() -> update_ts_time_stats()) can randomly race with a CPU
entering/exiting idle. I have no idea why drivers/cpufreq/cpufreq_governor.c does the update
itself. It can just compute the delta like any reader. Maybe we could remove that and only
ever call update_ts_time_stats() from the CPU that exits idle.

What do you think?

Thanks.

	Frederic.

> ---
>  fs/proc/stat.c |   42 ++++++++++++++++++++++++++++++++++++++----
>  1 files changed, 38 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/proc/stat.c b/fs/proc/stat.c
> index e296572..9fff534 100644
> --- a/fs/proc/stat.c
> +++ b/fs/proc/stat.c
> @@ -19,6 +19,40 @@
>  #define arch_irq_stat() 0
>  #endif
>  
> +/*
> + * CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
> + * Make sure that idle/iowait values visible from /proc/stat do not decrease.
> + */
> +static inline u64 validate_iowait(u64 iowait, const int cpu)
> +{
> +#ifdef CONFIG_NO_HZ
> +	static u64 max_iowait[NR_CPUS];
> +	static DEFINE_SPINLOCK(lock);
> +	spin_lock(&lock);
> +	if (likely(iowait >= max_iowait[cpu]))
> +		max_iowait[cpu] = iowait;
> +	else
> +		iowait = max_iowait[cpu];
> +	spin_unlock(&lock);
> +#endif
> +	return iowait;
> +}
> +
> +static inline u64 validate_idle(u64 idle, const int cpu)
> +{
> +#ifdef CONFIG_NO_HZ
> +	static u64 max_idle[NR_CPUS];
> +	static DEFINE_SPINLOCK(lock);
> +	spin_lock(&lock);
> +	if (likely(idle >= max_idle[cpu]))
> +		max_idle[cpu] = idle;
> +	else
> +		idle = max_idle[cpu];
> +	spin_unlock(&lock);
> +#endif
> +	return idle;
> +}
> +
>  #ifdef arch_idle_time
>  
>  static cputime64_t get_idle_time(int cpu)
> @@ -28,7 +62,7 @@ static cputime64_t get_idle_time(int cpu)
>  	idle = kcpustat_cpu(cpu).cpustat[CPUTIME_IDLE];
>  	if (cpu_online(cpu) && !nr_iowait_cpu(cpu))
>  		idle += arch_idle_time(cpu);
> -	return idle;
> +	return validate_idle(idle, cpu);
>  }
>  
>  static cputime64_t get_iowait_time(int cpu)
> @@ -38,7 +72,7 @@ static cputime64_t get_iowait_time(int cpu)
>  	iowait = kcpustat_cpu(cpu).cpustat[CPUTIME_IOWAIT];
>  	if (cpu_online(cpu) && nr_iowait_cpu(cpu))
>  		iowait += arch_idle_time(cpu);
> -	return iowait;
> +	return validate_iowait(iowait, cpu);
>  }
>  
>  #else
> @@ -56,7 +90,7 @@ static u64 get_idle_time(int cpu)
>  	else
>  		idle = usecs_to_cputime64(idle_time);
>  
> -	return idle;
> +	return validate_idle(idle, cpu);
>  }
>  
>  static u64 get_iowait_time(int cpu)
> @@ -72,7 +106,7 @@ static u64 get_iowait_time(int cpu)
>  	else
>  		iowait = usecs_to_cputime64(iowait_time);
>  
> -	return iowait;
> +	return validate_iowait(iowait, cpu);
>  }
>  
>  #endif
> -- 
> 1.7.1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] proc: Add workaround for idle/iowait decreasing problem.
  2013-04-28  0:49           ` Frederic Weisbecker
@ 2013-07-02  3:56             ` Fernando Luis Vazquez Cao
  2013-07-02 10:39               ` Fernando Luis Vazquez Cao
  2013-08-07  0:12               ` Frederic Weisbecker
  0 siblings, 2 replies; 9+ messages in thread
From: Fernando Luis Vazquez Cao @ 2013-07-02  3:56 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Tetsuo Handa, tglx, linux-kernel, linux-fsdevel, Ingo Molnar,
	Peter Zijlstra, Andrew Morton, Arjan van de Ven

Hi Frederic,

I'm sorry it's taken me so long to respond; I got sidetracked for
a while. Comments follow below.

On 2013/04/28 09:49, Frederic Weisbecker wrote:
> On Tue, Apr 23, 2013 at 09:45:23PM +0900, Tetsuo Handa wrote:
>> CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
[...]
> It's not clear in the changelog why you see non-monotonic idle/iowait values.
>
> Looking at the previous patch from Fernando, it seems that's because we can
> race with concurrent updates from the CPU target when it wakes up from idle?
> (could be updated by drivers/cpufreq/cpufreq_governor.c as well).
>
> If so the bug has another symptom: we may also report a wrong iowait/idle time
> by accounting the last idle time twice.
>
> In this case we should fix the bug from the source, for example we can force
> the given ordering:
>
> = Write side =                          = Read side =
>
> // tick_nohz_start_idle()
> write_seqcount_begin(ts->seq)
> ts->idle_entrytime = now
> ts->idle_active = 1
> write_seqcount_end(ts->seq)
>
> // tick_nohz_stop_idle()
> write_seqcount_begin(ts->seq)
> ts->iowait_sleeptime += now - ts->idle_entrytime
> t->idle_active = 0
> write_seqcount_end(ts->seq)
>
>                                          // get_cpu_iowait_time_us()
>                                          do {
>                                              seq = read_seqcount_begin(ts->seq)
>                                              if (t->idle_active) {
>                                                  time = now - ts->idle_entrytime
>                                                  time += ts->iowait_sleeptime
>                                              } else {
>                                                  time = ts->iowait_sleeptime
>                                              }
>                                          } while (read_seqcount_retry(ts->seq, seq));
>
> Right? seqcount should be enough to make sure we are getting a consistent result.
> I doubt we need harder locking.

I tried that and it doesn't suffice. The problem that causes the most
serious skews is related to the CPU scheduler: the per-run queue
counter nr_iowait can be updated not only from the CPU it belongs
to but also from any other CPU if tasks are migrated out while
waiting on I/O.

The race looks like this:

CPU0                            CPU1
                                 [ CPU1_rq->nr_iowait == 0 ]
                                 Task foo: io_schedule()
                                             schedule()
                                 [ CPU1_rq->nr_iowait == 1 ]
                                 Task foo migrated to CPU0
                                 Goes to sleep

// get_cpu_iowait_time_us(1, NULL)
[ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 1         ]
[ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 3 ]
now = 5
delta = 5 - 3 = 2
iowait = 4 + 2 = 6

Task foo wakes up
[ CPU1_rq->nr_iowait == 0 ]

                                 CPU1 comes out of sleep state
                                 tick_nohz_stop_idle()
                                   update_ts_time_stats()
                                     [ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 0         ]
                                     [ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 3 ]
                                     now = 6
                                     delta = 6 - 3 = 3
                                     (CPU1_ts->iowait_sleeptime is not updated)
                                     CPU1_ts->idle_entrytime = now = 6
                                   CPU1_ts->idle_active = 0

// get_cpu_iowait_time_us(1, NULL)
[ CPU1_ts->idle_active == 0, CPU1_rq->nr_iowait == 0         ]
[ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 6 ]
iowait = CPU1_ts->iowait_sleeptime = 4
(iowait decreased from 6 to 4)


> Another thing while at it. It seems that an update done from drivers/cpufreq/cpufreq_governor.c
> (calling get_cpu_iowait_time_us() -> update_ts_time_stats()) can randomly race with a CPU
> entering/exiting idle. I have no idea why drivers/cpufreq/cpufreq_governor.c does the update
> itself. It can just compute the delta like any reader. May be we could remove that and only
> ever call update_ts_time_stats() from the CPU that exit idle.
>
> What do you think?

I am all for it. We just need to make sure that CPU governors
can cope with non-monotonic idle and iowait times. I'll take
a closer look at the code but I wouldn't mind if Arjan (CCed)
beat me to it.

Thanks,
Fernando

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] proc: Add workaround for idle/iowait decreasing problem.
  2013-07-02  3:56             ` Fernando Luis Vazquez Cao
@ 2013-07-02 10:39               ` Fernando Luis Vazquez Cao
  2013-08-07  0:58                   ` Frederic Weisbecker
  2013-08-07  0:12               ` Frederic Weisbecker
  1 sibling, 1 reply; 9+ messages in thread
From: Fernando Luis Vazquez Cao @ 2013-07-02 10:39 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Tetsuo Handa, tglx, linux-kernel, linux-fsdevel, Ingo Molnar,
	Peter Zijlstra, Andrew Morton, Arjan van de Ven

On 2013年07月02日 12:56, Fernando Luis Vazquez Cao wrote:
> Hi Frederic,
>
> I'm sorry it's taken me so long to respond; I got sidetracked for
> a while. Comments follow below.
>
> On 2013/04/28 09:49, Frederic Weisbecker wrote:
>> On Tue, Apr 23, 2013 at 09:45:23PM +0900, Tetsuo Handa wrote:
>>> CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
> [...]
>> It's not clear in the changelog why you see non-monotonic idle/iowait 
>> values.
>>
>> Looking at the previous patch from Fernando, it seems that's because 
>> we can
>> race with concurrent updates from the CPU target when it wakes up 
>> from idle?
>> (could be updated by drivers/cpufreq/cpufreq_governor.c as well).
>>
>> If so the bug has another symptom: we may also report a wrong 
>> iowait/idle time
>> by accounting the last idle time twice.
>>
>> In this case we should fix the bug from the source, for example we 
>> can force
>> the given ordering:
>>
>> = Write side =                          = Read side =
>>
>> // tick_nohz_start_idle()
>> write_seqcount_begin(ts->seq)
>> ts->idle_entrytime = now
>> ts->idle_active = 1
>> write_seqcount_end(ts->seq)
>>
>> // tick_nohz_stop_idle()
>> write_seqcount_begin(ts->seq)
>> ts->iowait_sleeptime += now - ts->idle_entrytime
>> t->idle_active = 0
>> write_seqcount_end(ts->seq)
>>
>>                                          // get_cpu_iowait_time_us()
>>                                          do {
>>                                              seq = 
>> read_seqcount_begin(ts->seq)
>>                                              if (t->idle_active) {
>>                                                  time = now - 
>> ts->idle_entrytime
>>                                                  time += 
>> ts->iowait_sleeptime
>>                                              } else {
>>                                                  time = 
>> ts->iowait_sleeptime
>>                                              }
>>                                          } while 
>> (read_seqcount_retry(ts->seq, seq));
>>
>> Right? seqcount should be enough to make sure we are getting a 
>> consistent result.
>> I doubt we need harder locking.
>
> I tried that and it doesn't suffice. The problem that causes the most
> serious skews is related to the CPU scheduler: the per-run queue
> counter nr_iowait can be updated not only from the CPU it belongs
> to but also from any other CPU if tasks are migrated out while
> waiting on I/O.
>
> The race looks like this:
>
> CPU0                            CPU1
>                                 [ CPU1_rq->nr_iowait == 0 ]
>                                 Task foo: io_schedule()
>                                             schedule()
>                                 [ CPU1_rq->nr_iowait == 1) ]
>                                 Task foo migrated to CPU0
>                                 Goes to sleep
>
> // get_cpu_iowait_time_us(1, NULL)
> [ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 1 ]
> [ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 3 ]
> now = 5
> delta = 5 - 3 = 2
> iowait = 4 + 2 = 6
>
> Task foo wakes up
> [ CPU1_rq->nr_iowait == 0 ]
>
>                                 CPU1 comes out of sleep state
>                                 tick_nohz_stop_idle()
>                                   update_ts_time_stats()
>                                     [ CPU1_ts->idle_active == 1, 
> CPU1_rq->nr_iowait == 0         ]
>                                     [ CPU1_ts->iowait_sleeptime = 4, 
> CPU1_ts->idle_entrytime = 3 ]
>                                     now = 6
>                                     delta = 6 - 3 = 3
>                                     (CPU1_ts->iowait_sleeptime is not 
> updated)
>                                     CPU1_ts->idle_entrytime = now = 6
>                                   CPU1_ts->idle_active = 0
>
> // get_cpu_iowait_time_us(1, NULL)
> [ CPU1_ts->idle_active == 0, CPU1_rq->nr_iowait == 0 ]
> [ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 6 ]
> iowait = CPU1_ts->iowait_sleeptime = 4
> (iowait decreased from 6 to 4)

A possible solution to the races above would be to add
a per-cpu variable such as ->iowait_sleeptime_user which
shadows ->iowait_sleeptime but is maintained in
get_cpu_iowait_time_us() and kept monotonic,
the former being the one we would export to user
space.
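
Purely as an illustration of that first idea (ignoring serialization of
concurrent readers), the shadow value could be maintained roughly as below;
->iowait_sleeptime_user is a hypothetical field that does not exist in
struct tick_sched today:

/* Sketch only: clamp the value exported to user space so it never decreases. */
u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time)
{
	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
	ktime_t now = ktime_get();
	u64 iowait;

	if (ts->idle_active && nr_iowait_cpu(cpu) > 0)
		iowait = ktime_to_us(ktime_add(ts->iowait_sleeptime,
					       ktime_sub(now, ts->idle_entrytime)));
	else
		iowait = ktime_to_us(ts->iowait_sleeptime);

	/* Hypothetical monotonic shadow of ->iowait_sleeptime. */
	if (iowait > ts->iowait_sleeptime_user)
		ts->iowait_sleeptime_user = iowait;

	return ts->iowait_sleeptime_user;
}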

Another approach would be updating ->nr_iowait
of the source and destination CPUs during task
migration, but this may be overkill.

What do you think?

Thanks,
Fernando

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] proc: Add workaround for idle/iowait decreasing problem.
  2013-07-02  3:56             ` Fernando Luis Vazquez Cao
  2013-07-02 10:39               ` Fernando Luis Vazquez Cao
@ 2013-08-07  0:12               ` Frederic Weisbecker
  1 sibling, 0 replies; 9+ messages in thread
From: Frederic Weisbecker @ 2013-08-07  0:12 UTC (permalink / raw)
  To: Fernando Luis Vazquez Cao
  Cc: Tetsuo Handa, tglx, linux-kernel, linux-fsdevel, Ingo Molnar,
	Peter Zijlstra, Andrew Morton, Arjan van de Ven

On Tue, Jul 02, 2013 at 12:56:04PM +0900, Fernando Luis Vazquez Cao wrote:
> Hi Frederic,
> 
> I'm sorry it's taken me so long to respond; I got sidetracked for
> a while. Comments follow below.
> 
> On 2013/04/28 09:49, Frederic Weisbecker wrote:
> >On Tue, Apr 23, 2013 at 09:45:23PM +0900, Tetsuo Handa wrote:
> >>CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
> [...]
> >It's not clear in the changelog why you see non-monotonic idle/iowait values.
> >
> >Looking at the previous patch from Fernando, it seems that's because we can
> >race with concurrent updates from the CPU target when it wakes up from idle?
> >(could be updated by drivers/cpufreq/cpufreq_governor.c as well).
> >
> >If so the bug has another symptom: we may also report a wrong iowait/idle time
> >by accounting the last idle time twice.
> >
> >In this case we should fix the bug from the source, for example we can force
> >the given ordering:
> >
> >= Write side =                          = Read side =
> >
> >// tick_nohz_start_idle()
> >write_seqcount_begin(ts->seq)
> >ts->idle_entrytime = now
> >ts->idle_active = 1
> >write_seqcount_end(ts->seq)
> >
> >// tick_nohz_stop_idle()
> >write_seqcount_begin(ts->seq)
> >ts->iowait_sleeptime += now - ts->idle_entrytime
> >t->idle_active = 0
> >write_seqcount_end(ts->seq)
> >
> >                                         // get_cpu_iowait_time_us()
> >                                         do {
> >                                             seq = read_seqcount_begin(ts->seq)
> >                                             if (t->idle_active) {
> >                                                 time = now - ts->idle_entrytime
> >                                                 time += ts->iowait_sleeptime
> >                                             } else {
> >                                                 time = ts->iowait_sleeptime
> >                                             }
> >                                         } while (read_seqcount_retry(ts->seq, seq));
> >
> >Right? seqcount should be enough to make sure we are getting a consistent result.
> >I doubt we need harder locking.
> 
> I tried that and it doesn't suffice. The problem that causes the most
> serious skews is related to the CPU scheduler: the per-run queue
> counter nr_iowait can be updated not only from the CPU it belongs
> to but also from any other CPU if tasks are migrated out while
> waiting on I/O.
> 
> The race looks like this:
> 
> CPU0                            CPU1
>                                 [ CPU1_rq->nr_iowait == 0 ]
>                                 Task foo: io_schedule()
>                                             schedule()
>                                 [ CPU1_rq->nr_iowait == 1) ]
>                                 Task foo migrated to CPU0
>                                 Goes to sleep
> 
> // get_cpu_iowait_time_us(1, NULL)
> [ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 1         ]
> [ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 3 ]
> now = 5
> delta = 5 - 3 = 2
> iowait = 4 + 2 = 6
> 
> Task foo wakes up
> [ CPU1_rq->nr_iowait == 0 ]
> 
>                                 CPU1 comes out of sleep state
>                                 tick_nohz_stop_idle()
>                                   update_ts_time_stats()
>                                     [ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 0         ]
>                                     [ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 3 ]
>                                     now = 6
>                                     delta = 6 - 3 = 3
>                                     (CPU1_ts->iowait_sleeptime is not updated)
>                                     CPU1_ts->idle_entrytime = now = 6
>                                   CPU1_ts->idle_active = 0
> 
> // get_cpu_iowait_time_us(1, NULL)
> [ CPU1_ts->idle_active == 0, CPU1_rq->nr_iowait == 0         ]
> [ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 6 ]
> iowait = CPU1_ts->iowait_sleeptime = 4
> (iowait decreased from 6 to 4)

Yeah, that's why we need to allow updates of ts->idle/iowait_sleeptime only from the local CPU
when it exits idle.

> 
> 
> >Another thing while at it. It seems that an update done from drivers/cpufreq/cpufreq_governor.c
> >(calling get_cpu_iowait_time_us() -> update_ts_time_stats()) can randomly race with a CPU
> >entering/exiting idle. I have no idea why drivers/cpufreq/cpufreq_governor.c does the update
> >itself. It can just compute the delta like any reader. May be we could remove that and only
> >ever call update_ts_time_stats() from the CPU that exit idle.
> >
> >What do you think?
> 
> I am all for it. We just need to make sure that CPU governors
> can cope with non-monotonic idle and iowait times. I'll take
> a closer look at the code but I wouldn't mind if Arjan (CCed)
> beat me at that.

I'm not sure what you mean. Only allowing the update from local idle exit won't break
monotonicity.

I'll try to write some patches about that.

> 
> Thanks,
> Fernando

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] proc: Add workaround for idle/iowait decreasing problem.
  2013-07-02 10:39               ` Fernando Luis Vazquez Cao
@ 2013-08-07  0:58                   ` Frederic Weisbecker
  0 siblings, 0 replies; 9+ messages in thread
From: Frederic Weisbecker @ 2013-08-07  0:58 UTC (permalink / raw)
  To: Fernando Luis Vazquez Cao
  Cc: Tetsuo Handa, tglx, linux-kernel, linux-fsdevel, Ingo Molnar,
	Peter Zijlstra, Andrew Morton, Arjan van de Ven

On Tue, Jul 02, 2013 at 07:39:08PM +0900, Fernando Luis Vazquez Cao wrote:
> On 2013年07月02日 12:56, Fernando Luis Vazquez Cao wrote:
> >Hi Frederic,
> >
> >I'm sorry it's taken me so long to respond; I got sidetracked for
> >a while. Comments follow below.
> >
> >On 2013/04/28 09:49, Frederic Weisbecker wrote:
> >>On Tue, Apr 23, 2013 at 09:45:23PM +0900, Tetsuo Handa wrote:
> >>>CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
> >[...]
> >>It's not clear in the changelog why you see non-monotonic
> >>idle/iowait values.
> >>
> >>Looking at the previous patch from Fernando, it seems that's
> >>because we can
> >>race with concurrent updates from the CPU target when it wakes
> >>up from idle?
> >>(could be updated by drivers/cpufreq/cpufreq_governor.c as well).
> >>
> >>If so the bug has another symptom: we may also report a wrong
> >>iowait/idle time
> >>by accounting the last idle time twice.
> >>
> >>In this case we should fix the bug from the source, for example
> >>we can force
> >>the given ordering:
> >>
> >>= Write side =                          = Read side =
> >>
> >>// tick_nohz_start_idle()
> >>write_seqcount_begin(ts->seq)
> >>ts->idle_entrytime = now
> >>ts->idle_active = 1
> >>write_seqcount_end(ts->seq)
> >>
> >>// tick_nohz_stop_idle()
> >>write_seqcount_begin(ts->seq)
> >>ts->iowait_sleeptime += now - ts->idle_entrytime
> >>t->idle_active = 0
> >>write_seqcount_end(ts->seq)
> >>
> >>                                         // get_cpu_iowait_time_us()
> >>                                         do {
> >>                                             seq =
> >>read_seqcount_begin(ts->seq)
> >>                                             if (t->idle_active) {
> >>                                                 time = now -
> >>ts->idle_entrytime
> >>                                                 time +=
> >>ts->iowait_sleeptime
> >>                                             } else {
> >>                                                 time =
> >>ts->iowait_sleeptime
> >>                                             }
> >>                                         } while
> >>(read_seqcount_retry(ts->seq, seq));
> >>
> >>Right? seqcount should be enough to make sure we are getting a
> >>consistent result.
> >>I doubt we need harder locking.
> >
> >I tried that and it doesn't suffice. The problem that causes the most
> >serious skews is related to the CPU scheduler: the per-run queue
> >counter nr_iowait can be updated not only from the CPU it belongs
> >to but also from any other CPU if tasks are migrated out while
> >waiting on I/O.
> >
> >The race looks like this:
> >
> >CPU0                            CPU1
> >                                [ CPU1_rq->nr_iowait == 0 ]
> >                                Task foo: io_schedule()
> >                                            schedule()
> >                                [ CPU1_rq->nr_iowait == 1) ]
> >                                Task foo migrated to CPU0
> >                                Goes to sleep
> >
> >// get_cpu_iowait_time_us(1, NULL)
> >[ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 1 ]
> >[ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 3 ]
> >now = 5
> >delta = 5 - 3 = 2
> >iowait = 4 + 2 = 6
> >
> >Task foo wakes up
> >[ CPU1_rq->nr_iowait == 0 ]
> >
> >                                CPU1 comes out of sleep state
> >                                tick_nohz_stop_idle()
> >                                  update_ts_time_stats()
> >                                    [ CPU1_ts->idle_active == 1,
> >CPU1_rq->nr_iowait == 0         ]
> >                                    [ CPU1_ts->iowait_sleeptime =
> >4, CPU1_ts->idle_entrytime = 3 ]
> >                                    now = 6
> >                                    delta = 6 - 3 = 3
> >                                    (CPU1_ts->iowait_sleeptime is
> >not updated)
> >                                    CPU1_ts->idle_entrytime = now = 6
> >                                  CPU1_ts->idle_active = 0
> >
> >// get_cpu_iowait_time_us(1, NULL)
> >[ CPU1_ts->idle_active == 0, CPU1_rq->nr_iowait == 0 ]
> >[ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 6 ]
> >iowait = CPU1_ts->iowait_sleeptime = 4
> >(iowait decreased from 6 to 4)
> 
> A possible solution to the races above would be to add
> a per-cpu variable such ->iowait_sleeptime_user which
> shadows ->iowait_sleeptime but is maintained in
> get_cpu_iowait_time_us() and kept monotonic,
> the former being the one we would export to user
> space.
> 
> Another approach would be updating ->nr_iowait
> of the source and destination CPUs during task
> migration, but this may be overkill.
> 
> What do you think?

I have the feeling we can fix that with:

* only update ts->idle_sleeptime / ts->iowait_sleeptime locally
  from tick_nohz_start_idle() and tick_nohz_stop_idle()

* readers can add the pending delta to these values anytime they fetch it

* use seqcount to ensure that ts->idle_entrytime, ts->iowait/idle_sleeptime update
sequences are well synchronized.
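
A rough sketch of the write side this would imply (updates done only by the
local CPU, again under a seqcount field that would have to be added to
struct tick_sched) might be:

/* Sketch only: the CPU leaving idle folds the elapsed time into the counters. */
static void tick_nohz_stop_idle(int cpu, ktime_t now)
{
	struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
	ktime_t delta = ktime_sub(now, ts->idle_entrytime);

	write_seqcount_begin(&ts->seq);		/* assumed new field */
	if (nr_iowait_cpu(cpu) > 0)
		ts->iowait_sleeptime = ktime_add(ts->iowait_sleeptime, delta);
	else
		ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
	ts->idle_active = 0;
	write_seqcount_end(&ts->seq);
}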

I just wrote the patches that do that. Let me just test them and write the changelogs,
then I'll post them tomorrow.

Thanks.

^ permalink raw reply	[flat|nested] 9+ messages in thread


end of thread, other threads:[~2013-08-07  0:58 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <201301152014.AAD52192.FOOHQVtSFMFOJL@I-love.SAKURA.ne.jp>
     [not found] ` <alpine.LFD.2.02.1301151313170.7475@ionos>
     [not found]   ` <201301180857.r0I8vK7c052791@www262.sakura.ne.jp>
2013-03-19  2:38     ` [RFC] iowait/idle time accounting hiccups in NOHZ kernels Fernando Luis Vázquez Cao
2013-04-01 13:05       ` Tetsuo Handa
2013-04-23 12:45         ` [PATCH] proc: Add workaround for idle/iowait decreasing problem Tetsuo Handa
2013-04-28  0:49           ` Frederic Weisbecker
2013-07-02  3:56             ` Fernando Luis Vazquez Cao
2013-07-02 10:39               ` Fernando Luis Vazquez Cao
2013-08-07  0:58                 ` Frederic Weisbecker
2013-08-07  0:58                   ` Frederic Weisbecker
2013-08-07  0:12               ` Frederic Weisbecker
