* [RFC 0/2] /proc/sched_stat and /proc/sched_debug fail at 4096
@ 2012-11-06 21:02 Nathan Zimmer
  2012-11-06 21:02 ` [RFC 1/2] procfs: /proc/sched_stat fails on very very large machines Nathan Zimmer
  2012-11-07  0:37 ` [RFC 0/2] /proc/sched_stat and /proc/sched_debug fail at 4096 Al Viro
  0 siblings, 2 replies; 8+ messages in thread
From: Nathan Zimmer @ 2012-11-06 21:02 UTC (permalink / raw)
  Cc: Nathan Zimmer, Ingo Molnar, Peter Zijlstra, linux-kernel

When running with 4096 cores, attempting to read /proc/sched_stat or
/proc/sched_debug will fail with an ENOMEM condition.
On a sufficiently large system the total amount of data is more than 4 MB, so
it won't fit into a single buffer.

Nathan Zimmer (2):
  procfs: /proc/sched_stat fails on very very large machines.
  procfs: /proc/sched_debug fails on very very large machines.

 kernel/sched/debug.c |  101 +++++++++++++++++++++++++-----------
 kernel/sched/stats.c |  139 +++++++++++++++++++++++++++++---------------------
 2 files changed, 152 insertions(+), 88 deletions(-)

Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
CC: Ingo Molnar <mingo@redhat.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: linux-kernel@vger.kernel.org



* [RFC 1/2] procfs: /proc/sched_stat fails on very very large machines.
  2012-11-06 21:02 [RFC 0/2] /proc/sched_stat and /proc/sched_debug fail at 4096 Nathan Zimmer
@ 2012-11-06 21:02 ` Nathan Zimmer
  2012-11-06 21:02   ` [RFC 2/2] procfs: /proc/sched_debug " Nathan Zimmer
  2012-11-07  0:37 ` [RFC 0/2] /proc/sched_stat and /proc/sched_debug fail at 4096 Al Viro
  1 sibling, 1 reply; 8+ messages in thread
From: Nathan Zimmer @ 2012-11-06 21:02 UTC (permalink / raw)
  Cc: Nathan Zimmer, Ingo Molnar, Peter Zijlstra, linux-kernel

On systems with 4096 cores, doing a cat of /proc/sched_stat fails.
We are trying to push all the data into a single kmalloc'd buffer,
but on these very large machines all the data will not fit in 4 MB.

A better solution is to not use the single_open mechanism but to provide
our own seq_operations.

The output should be identical to the previous version and thus should not
require bumping the version number.

Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
CC: Ingo Molnar <mingo@redhat.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: linux-kernel@vger.kernel.org

---
 kernel/sched/stats.c |  139 +++++++++++++++++++++++++++++---------------------
 1 files changed, 81 insertions(+), 58 deletions(-)

diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
index 903ffa9..a4326a8 100644
--- a/kernel/sched/stats.c
+++ b/kernel/sched/stats.c
@@ -17,90 +17,113 @@ static int show_schedstat(struct seq_file *seq, void *v)
 	int cpu;
 	int mask_len = DIV_ROUND_UP(NR_CPUS, 32) * 9;
 	char *mask_str = kmalloc(mask_len, GFP_KERNEL);
+	cpu = *(loff_t *)v;
 
 	if (mask_str == NULL)
 		return -ENOMEM;
 
-	seq_printf(seq, "version %d\n", SCHEDSTAT_VERSION);
-	seq_printf(seq, "timestamp %lu\n", jiffies);
-	for_each_online_cpu(cpu) {
-		struct rq *rq = cpu_rq(cpu);
+	if (!cpu) {
+		seq_printf(seq, "version %d\n", SCHEDSTAT_VERSION);
+		seq_printf(seq, "timestamp %lu\n", jiffies);
+	}
+
+	struct rq *rq = cpu_rq(cpu);
 #ifdef CONFIG_SMP
-		struct sched_domain *sd;
-		int dcount = 0;
+	struct sched_domain *sd;
+	int dcount = 0;
 #endif
 
-		/* runqueue-specific stats */
-		seq_printf(seq,
-		    "cpu%d %u 0 %u %u %u %u %llu %llu %lu",
-		    cpu, rq->yld_count,
-		    rq->sched_count, rq->sched_goidle,
-		    rq->ttwu_count, rq->ttwu_local,
-		    rq->rq_cpu_time,
-		    rq->rq_sched_info.run_delay, rq->rq_sched_info.pcount);
+	/* runqueue-specific stats */
+	seq_printf(seq,
+	    "cpu%d %u 0 %u %u %u %u %llu %llu %lu",
+	    cpu, rq->yld_count,
+	    rq->sched_count, rq->sched_goidle,
+	    rq->ttwu_count, rq->ttwu_local,
+	    rq->rq_cpu_time,
+	    rq->rq_sched_info.run_delay, rq->rq_sched_info.pcount);
 
-		seq_printf(seq, "\n");
+	seq_printf(seq, "\n");
 
 #ifdef CONFIG_SMP
-		/* domain-specific stats */
-		rcu_read_lock();
-		for_each_domain(cpu, sd) {
-			enum cpu_idle_type itype;
-
-			cpumask_scnprintf(mask_str, mask_len,
-					  sched_domain_span(sd));
-			seq_printf(seq, "domain%d %s", dcount++, mask_str);
-			for (itype = CPU_IDLE; itype < CPU_MAX_IDLE_TYPES;
-					itype++) {
-				seq_printf(seq, " %u %u %u %u %u %u %u %u",
-				    sd->lb_count[itype],
-				    sd->lb_balanced[itype],
-				    sd->lb_failed[itype],
-				    sd->lb_imbalance[itype],
-				    sd->lb_gained[itype],
-				    sd->lb_hot_gained[itype],
-				    sd->lb_nobusyq[itype],
-				    sd->lb_nobusyg[itype]);
-			}
-			seq_printf(seq,
-				   " %u %u %u %u %u %u %u %u %u %u %u %u\n",
-			    sd->alb_count, sd->alb_failed, sd->alb_pushed,
-			    sd->sbe_count, sd->sbe_balanced, sd->sbe_pushed,
-			    sd->sbf_count, sd->sbf_balanced, sd->sbf_pushed,
-			    sd->ttwu_wake_remote, sd->ttwu_move_affine,
-			    sd->ttwu_move_balance);
+	/* domain-specific stats */
+	rcu_read_lock();
+	for_each_domain(cpu, sd) {
+		enum cpu_idle_type itype;
+
+		cpumask_scnprintf(mask_str, mask_len,
+				  sched_domain_span(sd));
+		seq_printf(seq, "domain%d %s", dcount++, mask_str);
+		for (itype = CPU_IDLE; itype < CPU_MAX_IDLE_TYPES;
+				itype++) {
+			seq_printf(seq, " %u %u %u %u %u %u %u %u",
+			    sd->lb_count[itype],
+			    sd->lb_balanced[itype],
+			    sd->lb_failed[itype],
+			    sd->lb_imbalance[itype],
+			    sd->lb_gained[itype],
+			    sd->lb_hot_gained[itype],
+			    sd->lb_nobusyq[itype],
+			    sd->lb_nobusyg[itype]);
 		}
-		rcu_read_unlock();
-#endif
+		seq_printf(seq,
+			   " %u %u %u %u %u %u %u %u %u %u %u %u\n",
+		    sd->alb_count, sd->alb_failed, sd->alb_pushed,
+		    sd->sbe_count, sd->sbe_balanced, sd->sbe_pushed,
+		    sd->sbf_count, sd->sbf_balanced, sd->sbf_pushed,
+		    sd->ttwu_wake_remote, sd->ttwu_move_affine,
+		    sd->ttwu_move_balance);
 	}
+	rcu_read_unlock();
+#endif
 	kfree(mask_str);
 	return 0;
 }
 
+static void *schedstat_start(struct seq_file *file, loff_t *offset)
+{
+	if (cpu_online(*offset))
+		return offset;
+	return NULL;
+}
+
+static void *schedstat_next(struct seq_file *file, void *data, loff_t *offset)
+{
+	*offset = cpumask_next(*offset, cpu_online_mask);
+	if (cpu_online(*offset))
+		return offset;
+	return NULL;
+}
+
+static void schedstat_stop(struct seq_file *file, void *data)
+{
+}
+
+static const struct seq_operations schedstat_sops = {
+	.start = schedstat_start,
+	.next  = schedstat_next,
+	.stop  = schedstat_stop,
+	.show  = show_schedstat,
+};
+
 static int schedstat_open(struct inode *inode, struct file *file)
 {
-	unsigned int size = PAGE_SIZE * (1 + num_online_cpus() / 32);
-	char *buf = kmalloc(size, GFP_KERNEL);
-	struct seq_file *m;
-	int res;
+	int res = 0;
+
+	res = seq_open(file, &schedstat_sops);
 
-	if (!buf)
-		return -ENOMEM;
-	res = single_open(file, show_schedstat, NULL);
-	if (!res) {
-		m = file->private_data;
-		m->buf = buf;
-		m->size = size;
-	} else
-		kfree(buf);
 	return res;
 }
 
+static int schedstat_release(struct inode *inode, struct file *file)
+{
+	return 0;
+};
+
 static const struct file_operations proc_schedstat_operations = {
 	.open    = schedstat_open,
 	.read    = seq_read,
 	.llseek  = seq_lseek,
-	.release = single_release,
+	.release = schedstat_release,
 };
 
 static int __init proc_schedstat_init(void)
-- 
1.6.0.2



* [RFC 2/2] procfs: /proc/sched_debug fails on very very large machines.
  2012-11-06 21:02 ` [RFC 1/2] procfs: /proc/sched_stat fails on very very large machines Nathan Zimmer
@ 2012-11-06 21:02   ` Nathan Zimmer
  2012-11-06 21:31     ` Dave Jones
  0 siblings, 1 reply; 8+ messages in thread
From: Nathan Zimmer @ 2012-11-06 21:02 UTC (permalink / raw)
  Cc: Nathan Zimmer, Ingo Molnar, Peter Zijlstra, linux-kernel

On systems with 4096 cores, attempting to read /proc/sched_debug fails.
We are trying to push all the data into a single kmalloc'd buffer,
but on these very large machines all the data will not fit in 4 MB.

A better solution is to not use the single_open mechanism but to provide
our own seq_operations and treat each cpu as an individual record.

The output should be identical to the previous version except that the
trailing '\n' was dropped.
Does that require incrementing the version?
Or should I find a way to reinclude it?

Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
CC: Ingo Molnar <mingo@redhat.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: linux-kernel@vger.kernel.org

---
 kernel/sched/debug.c |  101 +++++++++++++++++++++++++++++++++++---------------
 1 files changed, 71 insertions(+), 30 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 6f79596..8c1631f 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -328,54 +328,56 @@ static int sched_debug_show(struct seq_file *m, void *v)
 	unsigned long flags;
 	int cpu;
 
-	local_irq_save(flags);
-	ktime = ktime_to_ns(ktime_get());
-	sched_clk = sched_clock();
-	cpu_clk = local_clock();
-	local_irq_restore(flags);
+	cpu = *(loff_t *)v;
 
-	SEQ_printf(m, "Sched Debug Version: v0.10, %s %.*s\n",
-		init_utsname()->release,
-		(int)strcspn(init_utsname()->version, " "),
-		init_utsname()->version);
+	if (!cpu) {
+		local_irq_save(flags);
+		ktime = ktime_to_ns(ktime_get());
+		sched_clk = sched_clock();
+		cpu_clk = local_clock();
+		local_irq_restore(flags);
+
+		SEQ_printf(m, "Sched Debug Version: v0.10, %s %.*s\n",
+			init_utsname()->release,
+			(int)strcspn(init_utsname()->version, " "),
+			init_utsname()->version);
 
 #define P(x) \
 	SEQ_printf(m, "%-40s: %Ld\n", #x, (long long)(x))
 #define PN(x) \
 	SEQ_printf(m, "%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x))
-	PN(ktime);
-	PN(sched_clk);
-	PN(cpu_clk);
-	P(jiffies);
+		PN(ktime);
+		PN(sched_clk);
+		PN(cpu_clk);
+		P(jiffies);
 #ifdef CONFIG_HAVE_UNSTABLE_SCHED_CLOCK
-	P(sched_clock_stable);
+		P(sched_clock_stable);
 #endif
 #undef PN
 #undef P
 
-	SEQ_printf(m, "\n");
-	SEQ_printf(m, "sysctl_sched\n");
+		SEQ_printf(m, "\n");
+		SEQ_printf(m, "sysctl_sched\n");
 
 #define P(x) \
 	SEQ_printf(m, "  .%-40s: %Ld\n", #x, (long long)(x))
 #define PN(x) \
 	SEQ_printf(m, "  .%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x))
-	PN(sysctl_sched_latency);
-	PN(sysctl_sched_min_granularity);
-	PN(sysctl_sched_wakeup_granularity);
-	P(sysctl_sched_child_runs_first);
-	P(sysctl_sched_features);
+		PN(sysctl_sched_latency);
+		PN(sysctl_sched_min_granularity);
+		PN(sysctl_sched_wakeup_granularity);
+		P(sysctl_sched_child_runs_first);
+		P(sysctl_sched_features);
 #undef PN
 #undef P
 
-	SEQ_printf(m, "  .%-40s: %d (%s)\n", "sysctl_sched_tunable_scaling",
-		sysctl_sched_tunable_scaling,
-		sched_tunable_scaling_names[sysctl_sched_tunable_scaling]);
+		SEQ_printf(m, "  .%-40s: %d (%s)\n",
+			"sysctl_sched_tunable_scaling",
+			sysctl_sched_tunable_scaling,
+			sched_tunable_scaling_names[sysctl_sched_tunable_scaling]);
+	}
 
-	for_each_online_cpu(cpu)
-		print_cpu(m, cpu);
-
-	SEQ_printf(m, "\n");
+	print_cpu(m, cpu);
 
 	return 0;
 }
@@ -385,16 +387,55 @@ void sysrq_sched_debug_show(void)
 	sched_debug_show(NULL, NULL);
 }
 
+
+static void *sched_debug_start(struct seq_file *file, loff_t *offset)
+{
+	if (cpu_online(*offset))
+		return offset;
+	return NULL;
+}
+
+static void *sched_debug_next(struct seq_file *file, void *data, loff_t *offset)
+{
+	*offset = cpumask_next(*offset, cpu_online_mask);
+	if (cpu_online(*offset))
+		return offset;
+	return NULL;
+}
+
+static void sched_debug_stop(struct seq_file *file, void *data)
+{
+}
+
+
+static const struct seq_operations sched_debug_sops = {
+	.start = sched_debug_start,
+	.next = sched_debug_next,
+	.stop = sched_debug_stop,
+	.show = sched_debug_show,
+};
+
+static int sched_debug_release(struct inode *inode, struct file *file)
+{
+	seq_release(inode, file);
+
+	return 0;
+}
+
 static int sched_debug_open(struct inode *inode, struct file *filp)
 {
-	return single_open(filp, sched_debug_show, NULL);
+	int ret = 0;
+
+	ret = seq_open(filp, &sched_debug_sops);
+
+	return ret;
 }
 
 static const struct file_operations sched_debug_fops = {
 	.open		= sched_debug_open,
 	.read		= seq_read,
 	.llseek		= seq_lseek,
-	.release	= single_release,
+	.release	= sched_debug_release,
 };
 
 static int __init init_sched_debug_procfs(void)
-- 
1.6.0.2



* Re: [RFC 2/2] procfs: /proc/sched_debug fails on very very large machines.
  2012-11-06 21:02   ` [RFC 2/2] procfs: /proc/sched_debug " Nathan Zimmer
@ 2012-11-06 21:31     ` Dave Jones
  2012-11-06 23:24       ` Nathan Zimmer
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Jones @ 2012-11-06 21:31 UTC (permalink / raw)
  To: Nathan Zimmer; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Tue, Nov 06, 2012 at 03:02:21PM -0600, Nathan Zimmer wrote:
 > On systems with 4096 cores, attempting to read /proc/sched_debug fails.
 > We are trying to push all the data into a single kmalloc'd buffer,
 > but on these very large machines all the data will not fit in 4 MB.
 > 
 > A better solution is to not use the single_open mechanism but to provide
 > our own seq_operations and treat each cpu as an individual record.

Good timing.

This looks like it would solve the problem I just reported here:
https://lkml.org/lkml/2012/11/6/390

That happens even on an 8-way, so it's not just niche machines that have
this problem.

	Dave



* Re: [RFC 2/2] procfs: /proc/sched_debug fails on very very large machines.
  2012-11-06 21:31     ` Dave Jones
@ 2012-11-06 23:24       ` Nathan Zimmer
  2012-11-06 23:49         ` Dave Jones
  0 siblings, 1 reply; 8+ messages in thread
From: Nathan Zimmer @ 2012-11-06 23:24 UTC (permalink / raw)
  To: Dave Jones; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Tue, Nov 06, 2012 at 04:31:28PM -0500, Dave Jones wrote:
> On Tue, Nov 06, 2012 at 03:02:21PM -0600, Nathan Zimmer wrote:
>  > On systems with 4096 cores, attempting to read /proc/sched_debug fails.
>  > We are trying to push all the data into a single kmalloc'd buffer,
>  > but on these very large machines all the data will not fit in 4 MB.
>  > 
>  > A better solution is to not use the single_open mechanism but to provide
>  > our own seq_operations and treat each cpu as an individual record.
> 
> Good timing.
> 
> This looks like it would solve the problem I just reported here:
> https://lkml.org/lkml/2012/11/6/390
> 
> That happens even on an 8-way, so it's not just niche machines that have
> this problem.
> 
> 	Dave
> 

Glad to help. I hadn't thought of the memory-tight situation, but it does make
sense that this helps, since it can get by with a 4k allocation instead of
grabbing successively larger chunks.

If you have seen similar issues with your fuzz testing let me know where and
I'll take a look.

Nate



* Re: [RFC 2/2] procfs: /proc/sched_debug fails on very very large machines.
  2012-11-06 23:24       ` Nathan Zimmer
@ 2012-11-06 23:49         ` Dave Jones
  2012-11-07 15:58           ` Nathan Zimmer
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Jones @ 2012-11-06 23:49 UTC (permalink / raw)
  To: Nathan Zimmer; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Tue, Nov 06, 2012 at 05:24:15PM -0600, Nathan Zimmer wrote:
 > On Tue, Nov 06, 2012 at 04:31:28PM -0500, Dave Jones wrote:
 > > On Tue, Nov 06, 2012 at 03:02:21PM -0600, Nathan Zimmer wrote:
 > >  > On systems with 4096 cores, attempting to read /proc/sched_debug fails.
 > >  > We are trying to push all the data into a single kmalloc'd buffer,
 > >  > but on these very large machines all the data will not fit in 4 MB.
 > >  > 
 > >  > A better solution is to not use the single_open mechanism but to provide
 > >  > our own seq_operations and treat each cpu as an individual record.
 > > 
 > > Good timing.
 > > 
 > > This looks like it would solve the problem I just reported here:
 > > https://lkml.org/lkml/2012/11/6/390
 > > 
 > > That happens even on an 8-way, so it's not just niche machines that have
 > > this problem.
 > 
 > Glad to help. I hadn't thought of the memory-tight situation, but it does make
 > sense that this helps, since it can get by with a 4k allocation instead of
 > grabbing successively larger chunks.
 > 
 > If you have seen similar issues with your fuzz testing let me know where and
 > I'll take a look.

I think /proc/timer_list could probably use the same treatment.
I had traces showing it using 64k allocations too, but I think I may have
just bricked my testbox.

	Dave



* Re: [RFC 0/2] /proc/sched_stat and /proc/sched_debug fail at 4096
  2012-11-06 21:02 [RFC 0/2] /proc/sched_stat and /proc/sched_debug fail at 4096 Nathan Zimmer
  2012-11-06 21:02 ` [RFC 1/2] procfs: /proc/sched_stat fails on very very large machines Nathan Zimmer
@ 2012-11-07  0:37 ` Al Viro
  1 sibling, 0 replies; 8+ messages in thread
From: Al Viro @ 2012-11-07  0:37 UTC (permalink / raw)
  To: Nathan Zimmer; +Cc: Ingo Molnar, Peter Zijlstra, linux-kernel

On Tue, Nov 06, 2012 at 03:02:19PM -0600, Nathan Zimmer wrote:
> When running with 4096 cores, attempting to read /proc/sched_stat or
> /proc/sched_debug will fail with an ENOMEM condition.
> On a sufficiently large system the total amount of data is more than 4 MB, so
> it won't fit into a single buffer.

Not a bad idea, but the iterator is wrong: it assumes that CPU 0 is always
online, AFAICS.  The header should be handled separately; see my reply to
davej several hours ago.


* Re: [RFC 2/2] procfs: /proc/sched_debug fails on very very large machines.
  2012-11-06 23:49         ` Dave Jones
@ 2012-11-07 15:58           ` Nathan Zimmer
  0 siblings, 0 replies; 8+ messages in thread
From: Nathan Zimmer @ 2012-11-07 15:58 UTC (permalink / raw)
  To: Dave Jones, Ingo Molnar, Peter Zijlstra, linux-kernel

On 11/06/2012 05:49 PM, Dave Jones wrote:
> On Tue, Nov 06, 2012 at 05:24:15PM -0600, Nathan Zimmer wrote:
>   > On Tue, Nov 06, 2012 at 04:31:28PM -0500, Dave Jones wrote:
>   > > On Tue, Nov 06, 2012 at 03:02:21PM -0600, Nathan Zimmer wrote:
>   > >  > On systems with 4096 cores, attempting to read /proc/sched_debug fails.
>   > >  > We are trying to push all the data into a single kmalloc'd buffer,
>   > >  > but on these very large machines all the data will not fit in 4 MB.
>   > >  >
>   > >  > A better solution is to not use the single_open mechanism but to provide
>   > >  > our own seq_operations and treat each cpu as an individual record.
>   > >
>   > > Good timing.
>   > >
>   > > This looks like it would solve the problem I just reported here:
>   > > https://lkml.org/lkml/2012/11/6/390
>   > >
>   > > That happens even on an 8-way, so it's not just niche machines that have
>   > > this problem.
>   >
>   > Glad to help. I hadn't thought of the memory-tight situation, but it does make
>   > sense that this helps, since it can get by with a 4k allocation instead of
>   > grabbing successively larger chunks.
>   >
>   > If you have seen similar issues with your fuzz testing let me know where and
>   > I'll take a look.
>
> I think /proc/timer_list could probably use the same treatment.
> I had traces showing it using 64k allocations too, but I think I may have
> just bricked my testbox.
>
> 	Dave
>

Yup, it looks like /proc/timer_list is doing the same thing with single_open.

nzimmer@harp50-sys:~> cat /proc/timer_list
cat: /proc/timer_list: Cannot allocate memory
nzimmer@harp50-sys:~>

I'll see if I can squeeze that one in too.

