* [PATCH -v2 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus
@ 2015-02-25 16:38 riel
  2015-02-25 16:38 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
  2015-02-25 16:38 ` [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset riel
  0 siblings, 2 replies; 51+ messages in thread
From: riel @ 2015-02-25 16:38 UTC (permalink / raw)
  To: linux-kernel

-v2 addresses the conflict David Rientjes spotted between my previous
patches and commit e8e6d97c9b ("cpuset: use %*pb[l] to print bitmaps
including cpumasks and nodemasks").

Ensure that cpus specified with the isolcpus= boot commandline
option stay outside of the load balancing in the kernel scheduler.

Operations like load balancing can introduce unwanted latencies,
which is exactly what the isolcpus= commandline is there to prevent.

Previously, simply creating a new cpuset, without even touching the
cpuset.cpus field inside the new cpuset, would undo the effects of
isolcpus=, by creating a scheduler domain spanning the whole system,
and setting up load balancing inside that domain. The cpuset root
cpuset.cpus file is read-only, so there was not even a way to undo
that effect.

This does not impact the majority of cpusets users, since isolcpus=
is a fairly specialized feature used for realtime purposes.
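The effect of the series can be sketched with plain bitmasks. This is a
hedged userspace illustration, not kernel code: a plain unsigned int
stands in for cpumask_t, and the helper names are invented for the
example.

```c
#include <assert.h>

/* Userspace sketch of the mask arithmetic in this series; a plain
 * unsigned int stands in for cpumask_t, one bit per cpu.  Names are
 * illustrative, not the kernel's. */
typedef unsigned int mask_t;

/* Mirrors cpumask_andnot(non_isolated_cpus, cpu_possible_mask,
 * cpu_isolated_map): every possible cpu that is not isolated. */
static mask_t non_isolated(mask_t possible, mask_t isolated)
{
	return possible & ~isolated;
}

/* Domain span after the patch: the cpuset's effective cpus with the
 * isolated cpus masked out, as in the cpumask_and() added to
 * generate_sched_domains(). */
static mask_t domain_span(mask_t effective, mask_t isolated, mask_t possible)
{
	return effective & non_isolated(possible, isolated);
}
```

With isolcpus=2,3 on an 8-cpu box (isolated mask 0x0c), domain_span(0xff,
0x0c, 0xff) yields 0xf3: cpus 2-3 stay out of the scheduler domain, where
previously the root domain spanned all of effective_cpus.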


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets
  2015-02-25 16:38 [PATCH -v2 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus riel
@ 2015-02-25 16:38 ` riel
  2015-02-27  9:32     ` Peter Zijlstra
  2015-02-28  3:21     ` Zefan Li
  2015-02-25 16:38 ` [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset riel
  1 sibling, 2 replies; 51+ messages in thread
From: riel @ 2015-02-25 16:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: Rik van Riel, Peter Zijlstra, Clark Williams, Li Zefan,
	Ingo Molnar, Luiz Capitulino, Mike Galbraith, cgroups

From: Rik van Riel <riel@redhat.com>

Ensure that cpus specified with the isolcpus= boot commandline
option stay outside of the load balancing in the kernel scheduler.

Operations like load balancing can introduce unwanted latencies,
which is exactly what the isolcpus= commandline is there to prevent.

Previously, simply creating a new cpuset, without even touching the
cpuset.cpus field inside the new cpuset, would undo the effects of
isolcpus=, by creating a scheduler domain spanning the whole system,
and setting up load balancing inside that domain. The cpuset root
cpuset.cpus file is read-only, so there was not even a way to undo
that effect.

This does not impact the majority of cpusets users, since isolcpus=
is a fairly specialized feature used for realtime purposes.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
Tested-by: David Rientjes <rientjes@google.com>
---
 include/linux/sched.h |  2 ++
 kernel/cpuset.c       | 13 +++++++++++--
 kernel/sched/core.c   |  2 +-
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6d77432e14ff..aeae02435717 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1038,6 +1038,8 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
 extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 				    struct sched_domain_attr *dattr_new);
 
+extern cpumask_var_t cpu_isolated_map;
+
 /* Allocate an array of sched domains, for partition_sched_domains(). */
 cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 1d1fe9361d29..b544e5229d99 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -625,6 +625,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
 	int csn;		/* how many cpuset ptrs in csa so far */
 	int i, j, k;		/* indices for partition finding loops */
 	cpumask_var_t *doms;	/* resulting partition; i.e. sched domains */
+	cpumask_var_t non_isolated_cpus;  /* load balanced CPUs */
 	struct sched_domain_attr *dattr;  /* attributes for custom domains */
 	int ndoms = 0;		/* number of sched domains in result */
 	int nslot;		/* next empty doms[] struct cpumask slot */
@@ -634,6 +635,10 @@ static int generate_sched_domains(cpumask_var_t **domains,
 	dattr = NULL;
 	csa = NULL;
 
+	if (!alloc_cpumask_var(&non_isolated_cpus, GFP_KERNEL))
+		goto done;
+	cpumask_andnot(non_isolated_cpus, cpu_possible_mask, cpu_isolated_map);
+
 	/* Special case for the 99% of systems with one, full, sched domain */
 	if (is_sched_load_balance(&top_cpuset)) {
 		ndoms = 1;
@@ -646,7 +651,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
 			*dattr = SD_ATTR_INIT;
 			update_domain_attr_tree(dattr, &top_cpuset);
 		}
-		cpumask_copy(doms[0], top_cpuset.effective_cpus);
+		cpumask_and(doms[0], top_cpuset.effective_cpus,
+				     non_isolated_cpus);
 
 		goto done;
 	}
@@ -669,7 +675,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
 		 * the corresponding sched domain.
 		 */
 		if (!cpumask_empty(cp->cpus_allowed) &&
-		    !is_sched_load_balance(cp))
+		    !(is_sched_load_balance(cp) &&
+		      cpumask_intersects(cp->cpus_allowed, non_isolated_cpus)))
 			continue;
 
 		if (is_sched_load_balance(cp))
@@ -751,6 +758,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
 
 			if (apn == b->pn) {
 				cpumask_or(dp, dp, b->effective_cpus);
+				cpumask_and(dp, dp, non_isolated_cpus);
 				if (dattr)
 					update_domain_attr_tree(dattr + nslot, b);
 
@@ -763,6 +771,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
 	BUG_ON(nslot != ndoms);
 
 done:
+	free_cpumask_var(non_isolated_cpus);
 	kfree(csa);
 
 	/*
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f0f831e8a345..3db1beace19b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5812,7 +5812,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 }
 
 /* cpus with isolated domains */
-static cpumask_var_t cpu_isolated_map;
+cpumask_var_t cpu_isolated_map;
 
 /* Setup the mask of cpus configured for isolated domains */
 static int __init isolated_cpu_setup(char *str)
-- 
2.1.0



* [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-02-25 16:38 [PATCH -v2 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus riel
  2015-02-25 16:38 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
@ 2015-02-25 16:38 ` riel
  2015-02-25 21:09     ` David Rientjes
                     ` (2 more replies)
  1 sibling, 3 replies; 51+ messages in thread
From: riel @ 2015-02-25 16:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: Rik van Riel, Peter Zijlstra, Clark Williams, Li Zefan,
	Ingo Molnar, Luiz Capitulino, David Rientjes, Mike Galbraith,
	cgroups

From: Rik van Riel <riel@redhat.com>

The previous patch makes the code skip over isolcpus when building
scheduler load balancing domains. This makes it hard for a user to
see which of the CPUs in a cpuset participate in load balancing,
and which are isolated cpus.

Add a cpuset.isolcpus file with info on which cpus in a cpuset are
isolated CPUs.

This file is read-only for now. In the future we could extend things
so isolcpus can be changed at run time, for the root (system wide)
cpuset only.
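For reference, the kernel's %*pbl format prints a bitmap as a
comma-separated range list (cpus 2 and 3 print as "2-3"). A rough
userspace approximation, with invented names, just to show the output
format the new file produces:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Rough userspace approximation of the kernel's %*pbl bitmap format:
 * print set bits as a comma-separated range list, e.g. 0x0c -> "2-3".
 * Illustrative only; the kernel's implementation lives in bitmap_print_to_pagebuf(). */
static void mask_to_ranges(unsigned int mask, char *buf, size_t len)
{
	size_t pos = 0;
	int bit = 0;

	buf[0] = '\0';
	while (mask) {
		if (mask & 1u) {
			int start = bit;

			/* Extend the range while the next bit is also set. */
			while (mask & 2u) {
				mask >>= 1;
				bit++;
			}
			pos += snprintf(buf + pos, len - pos, "%s%d",
					pos ? "," : "", start);
			if (bit > start)
				pos += snprintf(buf + pos, len - pos, "-%d", bit);
		}
		mask >>= 1;
		bit++;
	}
}
```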

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
 kernel/cpuset.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index b544e5229d99..94bf59588e23 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1563,6 +1563,7 @@ typedef enum {
 	FILE_MEMORY_PRESSURE,
 	FILE_SPREAD_PAGE,
 	FILE_SPREAD_SLAB,
+	FILE_ISOLCPUS,
 } cpuset_filetype_t;
 
 static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
@@ -1704,6 +1705,20 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	return retval ?: nbytes;
 }
 
+static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
+{
+	cpumask_var_t my_isolated_cpus;
+
+	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
+		return;
+
+	cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
+
+	seq_printf(sf, "%*pbl\n", nodemask_pr_args(my_isolated_cpus));
+
+	free_cpumask_var(my_isolated_cpus);
+}
+
 /*
  * These ascii lists should be read in a single call, by using a user
  * buffer large enough to hold the entire map.  If read in smaller
@@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
 	case FILE_EFFECTIVE_MEMLIST:
 		seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
 		break;
+	case FILE_ISOLCPUS:
+		cpuset_seq_print_isolcpus(sf, cs);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1893,6 +1911,12 @@ static struct cftype files[] = {
 		.private = FILE_MEMORY_PRESSURE_ENABLED,
 	},
 
+	{
+		.name = "isolcpus",
+		.seq_show = cpuset_common_seq_show,
+		.private = FILE_ISOLCPUS,
+	},
+
 	{ }	/* terminate */
 };
 
-- 
2.1.0



* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-02-25 21:09     ` David Rientjes
  0 siblings, 0 replies; 51+ messages in thread
From: David Rientjes @ 2015-02-25 21:09 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
	Ingo Molnar, Luiz Capitulino, Mike Galbraith, cgroups

On Wed, 25 Feb 2015, riel@redhat.com wrote:

> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> index b544e5229d99..94bf59588e23 100644
> --- a/kernel/cpuset.c
> +++ b/kernel/cpuset.c
> @@ -1563,6 +1563,7 @@ typedef enum {
>  	FILE_MEMORY_PRESSURE,
>  	FILE_SPREAD_PAGE,
>  	FILE_SPREAD_SLAB,
> +	FILE_ISOLCPUS,
>  } cpuset_filetype_t;
>  
>  static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
> @@ -1704,6 +1705,20 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>  	return retval ?: nbytes;
>  }
>  
> +static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
> +{
> +	cpumask_var_t my_isolated_cpus;
> +
> +	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
> +		return;
> +
> +	cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
> +
> +	seq_printf(sf, "%*pbl\n", nodemask_pr_args(my_isolated_cpus));

That unfortunately won't output anything, it needs to be 
cpumask_pr_args().  After that's fixed, feel free to add my

	Acked-by: David Rientjes <rientjes@google.com>

> +
> +	free_cpumask_var(my_isolated_cpus);
> +}
> +
>  /*
>   * These ascii lists should be read in a single call, by using a user
>   * buffer large enough to hold the entire map.  If read in smaller
> @@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>  	case FILE_EFFECTIVE_MEMLIST:
>  		seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
>  		break;
> +	case FILE_ISOLCPUS:
> +		cpuset_seq_print_isolcpus(sf, cs);
> +		break;
>  	default:
>  		ret = -EINVAL;
>  	}
> @@ -1893,6 +1911,12 @@ static struct cftype files[] = {
>  		.private = FILE_MEMORY_PRESSURE_ENABLED,
>  	},
>  
> +	{
> +		.name = "isolcpus",
> +		.seq_show = cpuset_common_seq_show,
> +		.private = FILE_ISOLCPUS,
> +	},
> +
>  	{ }	/* terminate */
>  };
>  



* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-02-25 21:09     ` David Rientjes
  (?)
@ 2015-02-25 21:21     ` Rik van Riel
  -1 siblings, 0 replies; 51+ messages in thread
From: Rik van Riel @ 2015-02-25 21:21 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
	Ingo Molnar, Luiz Capitulino, Mike Galbraith, cgroups

On 02/25/2015 04:09 PM, David Rientjes wrote:
> On Wed, 25 Feb 2015, riel@redhat.com wrote:
> 
>> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
>> index b544e5229d99..94bf59588e23 100644
>> --- a/kernel/cpuset.c
>> +++ b/kernel/cpuset.c
>> @@ -1563,6 +1563,7 @@ typedef enum {
>>  	FILE_MEMORY_PRESSURE,
>>  	FILE_SPREAD_PAGE,
>>  	FILE_SPREAD_SLAB,
>> +	FILE_ISOLCPUS,
>>  } cpuset_filetype_t;
>>  
>>  static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
>> @@ -1704,6 +1705,20 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>>  	return retval ?: nbytes;
>>  }
>>  
>> +static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
>> +{
>> +	cpumask_var_t my_isolated_cpus;
>> +
>> +	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
>> +		return;
>> +
>> +	cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
>> +
>> +	seq_printf(sf, "%*pbl\n", nodemask_pr_args(my_isolated_cpus));
> 
> That unfortunately won't output anything, it needs to be 
> cpumask_pr_args().  After that's fixed, feel free to add my
> 
> 	Acked-by: David Rientjes <rientjes@google.com>

Gah. Too many things going on at once.

Let me resend a v3 of just patch 2/2 with your ack.



* [PATCH v3 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-02-25 21:32     ` Rik van Riel
  0 siblings, 0 replies; 51+ messages in thread
From: Rik van Riel @ 2015-02-25 21:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Clark Williams, Li Zefan, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset

The previous patch makes the code skip over isolcpus when building
scheduler load balancing domains. This makes it hard for a user to
see which of the CPUs in a cpuset participate in load balancing,
and which are isolated cpus.

Add a cpuset.isolcpus file with info on which cpus in a cpuset are
isolated CPUs.

This file is read-only for now. In the future we could extend things
so isolcpus can be changed at run time, for the root (system wide)
cpuset only.

Acked-by: David Rientjes <rientjes@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
OK, I suck. Thanks to David Rientjes for spotting the silly mistake.

 kernel/cpuset.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index b544e5229d99..455df101ceec 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1563,6 +1563,7 @@ typedef enum {
 	FILE_MEMORY_PRESSURE,
 	FILE_SPREAD_PAGE,
 	FILE_SPREAD_SLAB,
+	FILE_ISOLCPUS,
 } cpuset_filetype_t;
 
 static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
@@ -1704,6 +1705,20 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	return retval ?: nbytes;
 }
 
+static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
+{
+	cpumask_var_t my_isolated_cpus;
+
+	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
+		return;
+
+	cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
+
+	seq_printf(sf, "%*pbl\n", cpumask_pr_args(my_isolated_cpus));
+
+	free_cpumask_var(my_isolated_cpus);
+}
+
 /*
  * These ascii lists should be read in a single call, by using a user
  * buffer large enough to hold the entire map.  If read in smaller
@@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
 	case FILE_EFFECTIVE_MEMLIST:
 		seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
 		break;
+	case FILE_ISOLCPUS:
+		cpuset_seq_print_isolcpus(sf, cs);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1893,6 +1911,12 @@ static struct cftype files[] = {
 		.private = FILE_MEMORY_PRESSURE_ENABLED,
 	},
 
+	{
+		.name = "isolcpus",
+		.seq_show = cpuset_common_seq_show,
+		.private = FILE_ISOLCPUS,
+	},
+
 	{ }	/* terminate */
 };
 



* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-02-26 11:05     ` Zefan Li
  0 siblings, 0 replies; 51+ messages in thread
From: Zefan Li @ 2015-02-26 11:05 UTC (permalink / raw)
  To: riel
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

> +static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
> +{
> +	cpumask_var_t my_isolated_cpus;
> +
> +	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
> +		return;
> +

Make it return -ENOMEM ? Or make it a global variable and allocate memory for it
in cpuset_init().

> +	cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
> +
> +	seq_printf(sf, "%*pbl\n", nodemask_pr_args(my_isolated_cpus));
> +
> +	free_cpumask_var(my_isolated_cpus);
> +}
> +
>  /*
>   * These ascii lists should be read in a single call, by using a user
>   * buffer large enough to hold the entire map.  If read in smaller
> @@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>  	case FILE_EFFECTIVE_MEMLIST:
>  		seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
>  		break;
> +	case FILE_ISOLCPUS:
> +		cpuset_seq_print_isolcpus(sf, cs);
> +		break;
>  	default:
>  		ret = -EINVAL;
>  	}
> @@ -1893,6 +1911,12 @@ static struct cftype files[] = {
>  		.private = FILE_MEMORY_PRESSURE_ENABLED,
>  	},
>  
> +	{
> +		.name = "isolcpus",
> +		.seq_show = cpuset_common_seq_show,
> +		.private = FILE_ISOLCPUS,
> +	},
> +
>  	{ }	/* terminate */
>  };
>  
> 




* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-02-26 11:05     ` Zefan Li
  (?)
@ 2015-02-26 15:24     ` Rik van Riel
  -1 siblings, 0 replies; 51+ messages in thread
From: Rik van Riel @ 2015-02-26 15:24 UTC (permalink / raw)
  To: Zefan Li
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

On 02/26/2015 06:05 AM, Zefan Li wrote:
>> +static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
>> +{
>> +	cpumask_var_t my_isolated_cpus;
>> +
>> +	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
>> +		return;
>> +
> 
> Make it return -ENOMEM ? Or make it a global variable and allocate memory for it
> in cpuset_init().

OK, can do.

I see that cpuset_common_seq_show already takes a lock, so having
one global variable for this should not introduce any additional
contention.

I will send a v4.
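
The pattern being proposed can be sketched in userspace C, with a
pthread mutex standing in for the lock already taken by
cpuset_common_seq_show() and all names invented for the example: a
single preallocated scratch mask reused by every reader under the
lock, so the show path cannot fail on allocation.

```c
#include <assert.h>
#include <pthread.h>

/* Sketch of the v4 approach: one preallocated scratch buffer shared by
 * all readers, serialized by the lock the common show path already
 * holds, instead of a per-read allocation that can fail.  Userspace
 * illustration only; names are not the kernel's. */
static pthread_mutex_t show_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int scratch_mask;	/* stands in for print_isolated_cpus */

static unsigned int show_isolcpus(unsigned int cpus_allowed,
				  unsigned int isolated_map)
{
	unsigned int val;

	pthread_mutex_lock(&show_lock);
	scratch_mask = cpus_allowed & isolated_map;	/* cpumask_and() */
	val = scratch_mask;				/* "print" it */
	pthread_mutex_unlock(&show_lock);
	return val;
}
```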

>> @@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>>  	case FILE_EFFECTIVE_MEMLIST:
>>  		seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
>>  		break;
>> +	case FILE_ISOLCPUS:
>> +		cpuset_seq_print_isolcpus(sf, cs);
>> +		break;
>>  	default:
>>  		ret = -EINVAL;
>>  	}


-- 
All rights reversed


* [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-02-26 11:05     ` Zefan Li
@ 2015-02-26 17:12       ` Rik van Riel
  -1 siblings, 0 replies; 51+ messages in thread
From: Rik van Riel @ 2015-02-26 17:12 UTC (permalink / raw)
  To: Zefan Li
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

On Thu, 26 Feb 2015 19:05:57 +0800
Zefan Li <lizefan@huawei.com> wrote:

> Make it return -ENOMEM ? Or make it a global variable and allocate memory for it
> in cpuset_init().

Here you are. This addresses your concern, as well as the
issue David Rientjes found earlier.

---8<---

Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset

The previous patch makes the code skip over isolcpus when building
scheduler load balancing domains. This makes it hard for a user to
see which of the CPUs in a cpuset participate in load balancing,
and which are isolated cpus.

Add a cpuset.isolcpus file with info on which cpus in a cpuset are
isolated CPUs.

This file is read-only for now. In the future we could extend things
so isolcpus can be changed at run time, for the root (system wide)
cpuset only.

Acked-by: David Rientjes <rientjes@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
 kernel/cpuset.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index b544e5229d99..5462e1ca90bd 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1563,6 +1563,7 @@ typedef enum {
 	FILE_MEMORY_PRESSURE,
 	FILE_SPREAD_PAGE,
 	FILE_SPREAD_SLAB,
+	FILE_ISOLCPUS,
 } cpuset_filetype_t;
 
 static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
@@ -1704,6 +1705,16 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	return retval ?: nbytes;
 }
 
+/* protected by the lock in cpuset_common_seq_show */
+static cpumask_var_t print_isolated_cpus;
+
+static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
+{
+	cpumask_and(print_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
+
+	seq_printf(sf, "%*pbl\n", cpumask_pr_args(print_isolated_cpus));
+}
+
 /*
  * These ascii lists should be read in a single call, by using a user
  * buffer large enough to hold the entire map.  If read in smaller
@@ -1733,6 +1744,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
 	case FILE_EFFECTIVE_MEMLIST:
 		seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
 		break;
+	case FILE_ISOLCPUS:
+		cpuset_seq_print_isolcpus(sf, cs);
+		break;
 	default:
 		ret = -EINVAL;
 	}
@@ -1893,6 +1907,12 @@ static struct cftype files[] = {
 		.private = FILE_MEMORY_PRESSURE_ENABLED,
 	},
 
+	{
+		.name = "isolcpus",
+		.seq_show = cpuset_common_seq_show,
+		.private = FILE_ISOLCPUS,
+	},
+
 	{ }	/* terminate */
 };
 
@@ -2070,6 +2090,8 @@ int __init cpuset_init(void)
 		BUG();
 	if (!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL))
 		BUG();
+	if (!alloc_cpumask_var(&print_isolated_cpus, GFP_KERNEL))
+		BUG();
 
 	cpumask_setall(top_cpuset.cpus_allowed);
 	nodes_setall(top_cpuset.mems_allowed);


 	},
 
+	{
+		.name = "isolcpus",
+		.seq_show = cpuset_common_seq_show,
+		.private = FILE_ISOLCPUS,
+	},
+
 	{ }	/* terminate */
 };
 
@@ -2070,6 +2090,8 @@ int __init cpuset_init(void)
 		BUG();
 	if (!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL))
 		BUG();
+	if (!alloc_cpumask_var(&print_isolated_cpus, GFP_KERNEL))
+		BUG();
 
 	cpumask_setall(top_cpuset.cpus_allowed);
 	nodes_setall(top_cpuset.mems_allowed);

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets
@ 2015-02-27  9:32     ` Peter Zijlstra
  0 siblings, 0 replies; 51+ messages in thread
From: Peter Zijlstra @ 2015-02-27  9:32 UTC (permalink / raw)
  To: riel
  Cc: linux-kernel, Clark Williams, Li Zefan, Ingo Molnar,
	Luiz Capitulino, Mike Galbraith, cgroups

On Wed, Feb 25, 2015 at 11:38:07AM -0500, riel@redhat.com wrote:
> From: Rik van Riel <riel@redhat.com>
> 
> Ensure that cpus specified with the isolcpus= boot commandline
> option stay outside of the load balancing in the kernel scheduler.
> 
> Operations like load balancing can introduce unwanted latencies,
> which is exactly what the isolcpus= commandline is there to prevent.
> 
> Previously, simply creating a new cpuset, without even touching the
> cpuset.cpus field inside the new cpuset, would undo the effects of
> isolcpus=, by creating a scheduler domain spanning the whole system,
> and setting up load balancing inside that domain. The cpuset root
> cpuset.cpus file is read-only, so there was not even a way to undo
> that effect.
> 
> This does not impact the majority of cpusets users, since isolcpus=
> is a fairly specialized feature used for realtime purposes.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>
> Tested-by: David Rientjes <rientjes@google.com>

Might I ask you to update Documentation/cgroups/cpusets.txt with this
knowledge? While it does mention isolcpus, it does not clarify the
interaction between it and cpusets.

Other than that,

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH 3/2] cpusets,isolcpus: document relationship between cpusets & isolcpus
@ 2015-02-27 17:08       ` Rik van Riel
  0 siblings, 0 replies; 51+ messages in thread
From: Rik van Riel @ 2015-02-27 17:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Clark Williams, Li Zefan, Ingo Molnar,
	Luiz Capitulino, Mike Galbraith, cgroups

Document the subtly changed relationship between cpusets and isolcpus.
Turns out the old documentation did not quite match the code...

Signed-off-by: Rik van Riel <riel@redhat.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
---
 Documentation/cgroups/cpusets.txt | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/Documentation/cgroups/cpusets.txt b/Documentation/cgroups/cpusets.txt
index f2235a162529..fdf7dff3f607 100644
--- a/Documentation/cgroups/cpusets.txt
+++ b/Documentation/cgroups/cpusets.txt
@@ -392,8 +392,10 @@ Put simply, it costs less to balance between two smaller sched domains
 than one big one, but doing so means that overloads in one of the
 two domains won't be load balanced to the other one.
 
-By default, there is one sched domain covering all CPUs, except those
-marked isolated using the kernel boot time "isolcpus=" argument.
+By default, there is one sched domain covering all CPUs, including those
+marked isolated using the kernel boot time "isolcpus=" argument. However,
+the isolated CPUs will not participate in load balancing, and will not
+have tasks running on them unless explicitly assigned.
 
 This default load balancing across all CPUs is not well suited for
 the following two situations:
@@ -465,6 +467,10 @@ such partially load balanced cpusets, as they may be artificially
 constrained to some subset of the CPUs allowed to them, for lack of
 load balancing to the other CPUs.
 
+CPUs in "cpuset.isolcpus" were excluded from load balancing by the
+isolcpus= kernel boot option, and will never be load balanced regardless
+of the value of "cpuset.sched_load_balance" in any cpuset.
+
 1.7.1 sched_load_balance implementation details.
 ------------------------------------------------
 

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [PATCH 3/2] cpusets,isolcpus: document relationship between cpusets & isolcpus
  2015-02-27 17:08       ` Rik van Riel
  (?)
@ 2015-02-27 21:15       ` David Rientjes
  -1 siblings, 0 replies; 51+ messages in thread
From: David Rientjes @ 2015-02-27 21:15 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Peter Zijlstra, linux-kernel, Clark Williams, Li Zefan,
	Ingo Molnar, Luiz Capitulino, Mike Galbraith, cgroups

On Fri, 27 Feb 2015, Rik van Riel wrote:

> Document the subtly changed relationship between cpusets and isolcpus.
> Turns out the old documentation did not quite match the code...
> 
> Signed-off-by: Rik van Riel <riel@redhat.com>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>

Acked-by: David Rientjes <rientjes@google.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets
  2015-02-25 16:38 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
@ 2015-02-28  3:21     ` Zefan Li
  2015-02-28  3:21     ` Zefan Li
  1 sibling, 0 replies; 51+ messages in thread
From: Zefan Li @ 2015-02-28  3:21 UTC (permalink / raw)
  To: riel
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Ingo Molnar,
	Luiz Capitulino, Mike Galbraith, cgroups

On 2015/2/26 0:38, riel@redhat.com wrote:
> From: Rik van Riel <riel@redhat.com>
> 
> Ensure that cpus specified with the isolcpus= boot commandline
> option stay outside of the load balancing in the kernel scheduler.
> 
> Operations like load balancing can introduce unwanted latencies,
> which is exactly what the isolcpus= commandline is there to prevent.
> 
> Previously, simply creating a new cpuset, without even touching the
> cpuset.cpus field inside the new cpuset, would undo the effects of
> isolcpus=, by creating a scheduler domain spanning the whole system,
> and setting up load balancing inside that domain. The cpuset root
> cpuset.cpus file is read-only, so there was not even a way to undo
> that effect.
> 
> This does not impact the majority of cpusets users, since isolcpus=
> is a fairly specialized feature used for realtime purposes.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>
> Tested-by: David Rientjes <rientjes@google.com>

Acked-by: Zefan Li <lizefan@huawei.com>


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-02-28  3:22         ` Zefan Li
  0 siblings, 0 replies; 51+ messages in thread
From: Zefan Li @ 2015-02-28  3:22 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

> Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
> 
> The previous patch makes it so the code skips over isolcpus when
> building scheduler load balancing domains. This makes it hard to
> see for a user which of the CPUs in a cpuset are participating in
> load balancing, and which ones are isolated cpus.
> 
> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> isolated CPUs.
> 
> This file is read-only for now. In the future we could extend things
> so isolcpus can be changed at run time, for the root (system wide)
> cpuset only.
> 
> Acked-by: David Rientjes <rientjes@google.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>

Acked-by: Zefan Li <lizefan@huawei.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 3/2] cpusets,isolcpus: document relationship between cpusets & isolcpus
@ 2015-02-28  3:23         ` Zefan Li
  0 siblings, 0 replies; 51+ messages in thread
From: Zefan Li @ 2015-02-28  3:23 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Peter Zijlstra, linux-kernel, Clark Williams, Ingo Molnar,
	Luiz Capitulino, Mike Galbraith, cgroups

On 2015/2/28 1:08, Rik van Riel wrote:
> Document the subtly changed relationship between cpusets and isolcpus.
> Turns out the old documentation did not quite match the code...
> 
> Signed-off-by: Rik van Riel <riel@redhat.com>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>

Acked-by: Zefan Li <lizefan@huawei.com>


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-03-02  6:15         ` Zefan Li
  0 siblings, 0 replies; 51+ messages in thread
From: Zefan Li @ 2015-03-02  6:15 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

Hi Rik,

> Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
> 
> The previous patch makes it so the code skips over isolcpus when
> building scheduler load balancing domains. This makes it hard to
> see for a user which of the CPUs in a cpuset are participating in
> load balancing, and which ones are isolated cpus.
> 
> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> isolated CPUs.
> 
> This file is read-only for now. In the future we could extend things
> so isolcpus can be changed at run time, for the root (system wide)
> cpuset only.
> 

One question: why not add a /sys/devices/system/cpu/isolated file instead?


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-03-02  9:09         ` Peter Zijlstra
  0 siblings, 0 replies; 51+ messages in thread
From: Peter Zijlstra @ 2015-03-02  9:09 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Zefan Li, linux-kernel, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

On Thu, Feb 26, 2015 at 12:12:31PM -0500, Rik van Riel wrote:
> Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
> 
> The previous patch makes it so the code skips over isolcpus when
> building scheduler load balancing domains. This makes it hard to
> see for a user which of the CPUs in a cpuset are participating in
> load balancing, and which ones are isolated cpus.
> 
> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> isolated CPUs.
> 
> This file is read-only for now. In the future we could extend things
> so isolcpus can be changed at run time, for the root (system wide)
> cpuset only.
> 
> Acked-by: David Rientjes <rientjes@google.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>

So let me start off by saying I hate isolcpus ;-)

Let me further state that I had hopes we could extend cpusets to
natively provide the functionality isolcpus has, and kill isolcpus.

The 'normal' way would be to create 2 cgroups with disjoint cpus,
disable sched_load_balance on root and one of the siblings, while moving
everything into the other group.

The 'problem' is that we cannot move everything that is affected by
isolcpus, workqueues have grown a horrible 'new' interface outside of
the regular task interfaces and things like kthreadd are non-movable for
mostly good reasons.

Furthermore it appears that software like system-disease and libvirt
hard assume they're lord and master of the cgroup hierarchy and do not
expect things like this.

So while I mostly hate all of this it might be the best we can do :-(

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-03-02  6:15         ` Zefan Li
  (?)
@ 2015-03-02  9:12         ` Peter Zijlstra
  2015-03-03  9:51             ` Zefan Li
  -1 siblings, 1 reply; 51+ messages in thread
From: Peter Zijlstra @ 2015-03-02  9:12 UTC (permalink / raw)
  To: Zefan Li
  Cc: Rik van Riel, linux-kernel, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

On Mon, Mar 02, 2015 at 02:15:39PM +0800, Zefan Li wrote:
> Hi Rik,
> 
> > Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
> > 
> > The previous patch makes it so the code skips over isolcpus when
> > building scheduler load balancing domains. This makes it hard to
> > see for a user which of the CPUs in a cpuset are participating in
> > load balancing, and which ones are isolated cpus.
> > 
> > Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> > isolated CPUs.
> > 
> > This file is read-only for now. In the future we could extend things
> > so isolcpus can be changed at run time, for the root (system wide)
> > cpuset only.
> > 
> 
> One Question, why not add a /sys/devices/system/cpu/isolated instead?

It would leave userspace to calculate the result for any one cpuset
itself. Furthermore, is that /sys thing visible for all nested
containers?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-03-02  9:09         ` Peter Zijlstra
  (?)
@ 2015-03-02 12:44         ` Mike Galbraith
  2015-03-02 14:35             ` Rik van Riel
  2015-03-02 15:29             ` Tejun Heo
  -1 siblings, 2 replies; 51+ messages in thread
From: Mike Galbraith @ 2015-03-02 12:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, Zefan Li, linux-kernel, Clark Williams,
	Ingo Molnar, Luiz Capitulino, David Rientjes, cgroups

On Mon, 2015-03-02 at 10:09 +0100, Peter Zijlstra wrote:
> On Thu, Feb 26, 2015 at 12:12:31PM -0500, Rik van Riel wrote:
> > Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
> > 
> > The previous patch makes it so the code skips over isolcpus when
> > building scheduler load balancing domains. This makes it hard to
> > see for a user which of the CPUs in a cpuset are participating in
> > load balancing, and which ones are isolated cpus.
> > 
> > Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> > isolated CPUs.
> > 
> > This file is read-only for now. In the future we could extend things
> > so isolcpus can be changed at run time, for the root (system wide)
> > cpuset only.
> > 
> > Acked-by: David Rientjes <rientjes@google.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Clark Williams <williams@redhat.com>
> > Cc: Li Zefan <lizefan@huawei.com>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: Luiz Capitulino <lcapitulino@redhat.com>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
> > Cc: cgroups@vger.kernel.org
> > Signed-off-by: Rik van Riel <riel@redhat.com>
> 
> So let me start off by saying I hate isolcpus ;-)
> 
> Let me further state that I had hopes we could extend cpusets to
> natively provide the functionality isolcpus has, and kill isolcpus.

+1

That's where nohz_full goop belongs too.

> The 'normal' way would be to create 2 cgroups with disjoint cpus,
> disable sched_load_balance on root and one of the siblings, while moving
> everything into the other group.

That's what cset shield does, works fine.

> The 'problem' is that we cannot move everything that is affected by
> isolcpus, workqueues have grown a horrible 'new' interface outside of
> the regular task interfaces and things like kthreadd are non-movable for
> mostly good reasons.
> 
> Furthermore it appears that software like system-disease and libvirt
> hard assume they're lord and master of the cgroup hierarchy and do not
> expect things like this.
> 
> So while I mostly hate all of this it might be the best we can do :-(

Hm, I'm all system-disease-ified now (still hate the bloody thing),
and have no problem isolating cpus via cpusets, modulo workqueues
wanting a bat upside the head.

	-Mike


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-03-02 14:35             ` Rik van Riel
  0 siblings, 0 replies; 51+ messages in thread
From: Rik van Riel @ 2015-03-02 14:35 UTC (permalink / raw)
  To: Mike Galbraith, Peter Zijlstra
  Cc: Zefan Li, linux-kernel, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, cgroups

On 03/02/2015 07:44 AM, Mike Galbraith wrote:
> On Mon, 2015-03-02 at 10:09 +0100, Peter Zijlstra wrote:
>> On Thu, Feb 26, 2015 at 12:12:31PM -0500, Rik van Riel wrote:
>>> Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
>>>
>>> The previous patch makes it so the code skips over isolcpus when
>>> building scheduler load balancing domains. This makes it hard to
>>> see for a user which of the CPUs in a cpuset are participating in
>>> load balancing, and which ones are isolated cpus.
>>>
>>> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
>>> isolated CPUs.
>>>
>>> This file is read-only for now. In the future we could extend things
>>> so isolcpus can be changed at run time, for the root (system wide)
>>> cpuset only.
>>>
>>> Acked-by: David Rientjes <rientjes@google.com>
>>> Cc: Peter Zijlstra <peterz@infradead.org>
>>> Cc: Clark Williams <williams@redhat.com>
>>> Cc: Li Zefan <lizefan@huawei.com>
>>> Cc: Ingo Molnar <mingo@redhat.com>
>>> Cc: Luiz Capitulino <lcapitulino@redhat.com>
>>> Cc: David Rientjes <rientjes@google.com>
>>> Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
>>> Cc: cgroups@vger.kernel.org
>>> Signed-off-by: Rik van Riel <riel@redhat.com>
>>
>> So let me start off by saying I hate isolcpus ;-)
>>
>> Let me further state that I had hopes we could extend cpusets to
>> natively provide the functionality isolcpus has, and kill isolcpus.
> 
> +1
> 
> That's where nohz_full goop belongs too.

Except nohz_full and isolcpus are very much global attributes of
each CPU, so I am not sure whether it would make sense to allow
configuration of this attribute anywhere other than the root
cpuset.

-- 
All rights reversed

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-03-02 14:54               ` Mike Galbraith
  0 siblings, 0 replies; 51+ messages in thread
From: Mike Galbraith @ 2015-03-02 14:54 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Peter Zijlstra, Zefan Li, linux-kernel, Clark Williams,
	Ingo Molnar, Luiz Capitulino, David Rientjes, cgroups

On Mon, 2015-03-02 at 09:35 -0500, Rik van Riel wrote:
> On 03/02/2015 07:44 AM, Mike Galbraith wrote:
> > On Mon, 2015-03-02 at 10:09 +0100, Peter Zijlstra wrote:
> >> On Thu, Feb 26, 2015 at 12:12:31PM -0500, Rik van Riel wrote:
> >>> Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
> >>>
> >>> The previous patch makes it so the code skips over isolcpus when
> >>> building scheduler load balancing domains. This makes it hard to
> >>> see for a user which of the CPUs in a cpuset are participating in
> >>> load balancing, and which ones are isolated cpus.
> >>>
> >>> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> >>> isolated CPUs.
> >>>
> >>> This file is read-only for now. In the future we could extend things
> >>> so isolcpus can be changed at run time, for the root (system wide)
> >>> cpuset only.
> >>>
> >>> Acked-by: David Rientjes <rientjes@google.com>
> >>> Cc: Peter Zijlstra <peterz@infradead.org>
> >>> Cc: Clark Williams <williams@redhat.com>
> >>> Cc: Li Zefan <lizefan@huawei.com>
> >>> Cc: Ingo Molnar <mingo@redhat.com>
> >>> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> >>> Cc: David Rientjes <rientjes@google.com>
> >>> Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
> >>> Cc: cgroups@vger.kernel.org
> >>> Signed-off-by: Rik van Riel <riel@redhat.com>
> >>
> >> So let me start off by saying I hate isolcpus ;-)
> >>
> >> Let me further state that I had hopes we could extend cpusets to
> >> natively provide the functionality isolcpus has, and kill isolcpus.
> > 
> > +1
> > 
> > That's where nohz_full goop belongs too.
> 
> Except nohz_full and isolcpus are very much global attributes of
> each CPU, so I am not sure whether it would make sense to allow
> configuration of this attribute anywhere other than the root
> cpuset.

They're attributes of exclusive sets, which excludes the root set.  It'd
be kinda hard to have the root set be both ticked and tickless :)

	-Mike



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-03-02 15:29             ` Tejun Heo
  0 siblings, 0 replies; 51+ messages in thread
From: Tejun Heo @ 2015-03-02 15:29 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Peter Zijlstra, Rik van Riel, Zefan Li, linux-kernel,
	Clark Williams, Ingo Molnar, Luiz Capitulino, David Rientjes,
	cgroups

On Mon, Mar 02, 2015 at 01:44:50PM +0100, Mike Galbraith wrote:
> Hm, I'm all system-disease-ified now (still hate the bloody thing),
> and have no problem isolating cpus via cpusets, modulo workqueues
> wanting a bat upside the head.

It shouldn't be difficult to teach workqueue pools to follow the same
rules.  This matters only for the unbound ones anyway, right?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-03-02 15:29             ` Tejun Heo
  (?)
@ 2015-03-02 16:02             ` Mike Galbraith
  2015-03-02 16:09                 ` Tejun Heo
  -1 siblings, 1 reply; 51+ messages in thread
From: Mike Galbraith @ 2015-03-02 16:02 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Peter Zijlstra, Rik van Riel, Zefan Li, linux-kernel,
	Clark Williams, Ingo Molnar, Luiz Capitulino, David Rientjes,
	cgroups

On Mon, 2015-03-02 at 10:29 -0500, Tejun Heo wrote:
> On Mon, Mar 02, 2015 at 01:44:50PM +0100, Mike Galbraith wrote:
> > Hm, I'm all system-disease-ified now (still hate the bloody thing),
> > and have no problem isolating cpus via cpusets, modulo workqueues
> > wanting a bat upside the head.
> 
> It shouldn't be difficult to teach workqueue pools to follow the same
> rules.  This matters only for the unbound ones anyway, right?

Well, those are the only ones we can do anything about.  Dirt simple
diddling of the workqueue default mask as sched domains are
added/removed should do it I think.  Automatically moving any existing
unbound worker away from isolated cores at the same time would be a
bonus, most important is that no new threads sneak in.

	-Mike


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-03-02 16:09                 ` Tejun Heo
  0 siblings, 0 replies; 51+ messages in thread
From: Tejun Heo @ 2015-03-02 16:09 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Peter Zijlstra, Rik van Riel, Zefan Li, linux-kernel,
	Clark Williams, Ingo Molnar, Luiz Capitulino, David Rientjes,
	cgroups

On Mon, Mar 02, 2015 at 05:02:57PM +0100, Mike Galbraith wrote:
> Well, those are the only ones we can do anything about.  Dirt simple
> diddling of the workqueue default mask as sched domains are
> added/removed should do it I think.  Automatically moving any existing
> unbound worker away from isolated cores at the same time would be a
> bonus, most important is that no new threads sneak in.

Worker pools are immutable once created and configuration changes are
achieved by creating new pools and draining old ones but at any rate
making it follow config changes is almost trivial.  Figuring out
configuration policy might take a bit of effort tho.  Can you point me
to what specific configuration it should be following?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-02-26 17:12       ` Rik van Riel
                         ` (3 preceding siblings ...)
  (?)
@ 2015-03-02 17:01       ` Tejun Heo
  2015-03-02 17:31           ` Tejun Heo
  -1 siblings, 1 reply; 51+ messages in thread
From: Tejun Heo @ 2015-03-02 17:01 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Zefan Li, linux-kernel, Peter Zijlstra, Clark Williams,
	Ingo Molnar, Luiz Capitulino, David Rientjes, Mike Galbraith,
	cgroups

On Thu, Feb 26, 2015 at 12:12:31PM -0500, Rik van Riel wrote:
> On Thu, 26 Feb 2015 19:05:57 +0800
> Zefan Li <lizefan@huawei.com> wrote:
> 
> > Make it return -ENOMEM ? Or make it a global variable and allocate memory for it
> > in cpuset_init().
> 
> Here you are. This addresses your concern, as well as the
> issue David Rientjes found earlier.
> 
> ---8<---
> 
> Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
> 
> The previous patch makes it so the code skips over isolcpus when
> building scheduler load balancing domains. This makes it hard to
> see for a user which of the CPUs in a cpuset are participating in
> load balancing, and which ones are isolated cpus.
> 
> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> isolated CPUs.
> 
> This file is read-only for now. In the future we could extend things
> so isolcpus can be changed at run time, for the root (system wide)
> cpuset only.
> 
> Acked-by: David Rientjes <rientjes@google.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>

Applied 1-2 to cgroup/for-4.1.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-03-02 17:31           ` Tejun Heo
  0 siblings, 0 replies; 51+ messages in thread
From: Tejun Heo @ 2015-03-02 17:31 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Zefan Li, linux-kernel, Peter Zijlstra, Clark Williams,
	Ingo Molnar, Luiz Capitulino, David Rientjes, Mike Galbraith,
	cgroups

On Mon, Mar 02, 2015 at 12:01:16PM -0500, Tejun Heo wrote:
> Applied 1-2 to cgroup/for-4.1.

Reverted due to build failure.  Looks like UP build is broken.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-03-02 17:35                   ` Mike Galbraith
  0 siblings, 0 replies; 51+ messages in thread
From: Mike Galbraith @ 2015-03-02 17:35 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Peter Zijlstra, Rik van Riel, Zefan Li, linux-kernel,
	Clark Williams, Ingo Molnar, Luiz Capitulino, David Rientjes,
	cgroups

On Mon, 2015-03-02 at 11:09 -0500, Tejun Heo wrote:
> On Mon, Mar 02, 2015 at 05:02:57PM +0100, Mike Galbraith wrote:
> > Well, those are the only ones we can do anything about.  Dirt simple
> > diddling of the workqueue default mask as sched domains are
> > added/removed should do it I think.  Automatically moving any existing
> > unbound worker away from isolated cores at the same time would be a
> > bonus, most important is that no new threads sneak in.
> 
> Worker pools are immutable once created and configuration changes are
> achieved by creating new pools and draining old ones but at any rate
> making it follow config changes is almost trivial.  Figuring out
> configuration policy might take a bit of effort tho.  Can you point me
> to what specific configuration it should be following?

For cpusets, an exclusive set should become taboo to unbound workers
when load balancing is turned off.  The user making sched domains go
away is a not so subtle hint that he wants no random interference, as he
is trying to assume full responsibility for task placement therein.

In my trees, I let the user turn rt cpupri/push/pull off as well, as
that further reduces jitter.

	-Mike


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-03-02  9:12         ` Peter Zijlstra
@ 2015-03-03  9:51             ` Zefan Li
  0 siblings, 0 replies; 51+ messages in thread
From: Zefan Li @ 2015-03-03  9:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Rik van Riel, linux-kernel, Clark Williams, Ingo Molnar,
	Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups

On 2015/3/2 17:12, Peter Zijlstra wrote:
> On Mon, Mar 02, 2015 at 02:15:39PM +0800, Zefan Li wrote:
>> Hi Rik,
>>
>>> Subject: cpusets,isolcpus: add file to show isolated cpus in cpuset
>>>
>>> The previous patch makes it so the code skips over isolcpus when
>>> building scheduler load balancing domains. This makes it hard to
>>> see for a user which of the CPUs in a cpuset are participating in
>>> load balancing, and which ones are isolated cpus.
>>>
>>> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
>>> isolated CPUs.
>>>
>>> This file is read-only for now. In the future we could extend things
>>> so isolcpus can be changed at run time, for the root (system wide)
>>> cpuset only.
>>>
>>
>> One Question, why not add a /sys/devices/system/cpu/isolated instead?
> 
> It would leave userspace to calculate the result for any one cpuset
> itself.

It's trivial. Instead of reading cpuset.isolcpus, now we read cpuset.cpus
and /sys/.../isolated.

> Furthermore, is that /sys thing visible for all nested
> containers?
> .
> 

Never tried nested containers, but I think so.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-02-25  3:30       ` Rik van Riel
  0 siblings, 0 replies; 51+ messages in thread
From: Rik van Riel @ 2015-02-25  3:30 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
	Ingo Molnar, Luiz Capitulino, cgroups

On 02/24/2015 09:15 PM, David Rientjes wrote:
> On Mon, 23 Feb 2015, riel@redhat.com wrote:
> 
>> From: Rik van Riel <riel@redhat.com>
>>
>> The previous patch makes it so the code skips over isolcpus when
>> building scheduler load balancing domains. This makes it hard to
>> see for a user which of the CPUs in a cpuset are participating in
>> load balancing, and which ones are isolated cpus.
>>
>> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
>> isolated CPUs.
>>
>> This file is read-only for now. In the future we could extend things
>> so isolcpus can be changed at run time, for the root (system wide)
>> cpuset only.
>>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Clark Williams <williams@redhat.com>
>> Cc: Li Zefan <lizefan@huawei.com>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Luiz Capitulino <lcapitulino@redhat.com>
>> Cc: cgroups@vger.kernel.org
>> Signed-off-by: Rik van Riel <riel@redhat.com>
>> ---
>>  kernel/cpuset.c | 27 +++++++++++++++++++++++++++
>>  1 file changed, 27 insertions(+)
>>
>> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
>> index 1ad63fa37cb4..19ad5d3377f8 100644
>> --- a/kernel/cpuset.c
>> +++ b/kernel/cpuset.c
>> @@ -1563,6 +1563,7 @@ typedef enum {
>>  	FILE_MEMORY_PRESSURE,
>>  	FILE_SPREAD_PAGE,
>>  	FILE_SPREAD_SLAB,
>> +	FILE_ISOLCPUS,
>>  } cpuset_filetype_t;
>>  
>>  static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
>> @@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>>  	return retval ?: nbytes;
>>  }
>>  
>> +static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
>> +{
>> +	cpumask_var_t my_isolated_cpus;
>> +	ssize_t count;
>> +	
> 
> Whitespace.
> 
>> +	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
>> +		return 0;
>> +
>> +	cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
>> +
>> +	count = cpulist_scnprintf(s, pos, my_isolated_cpus);
>> +
>> +	free_cpumask_var(my_isolated_cpus);
>> +
>> +	return count;
>> +}
>> +
>>  /*
>>   * These ascii lists should be read in a single call, by using a user
>>   * buffer large enough to hold the entire map.  If read in smaller
>> @@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>>  	case FILE_EFFECTIVE_MEMLIST:
>>  		s += nodelist_scnprintf(s, count, cs->effective_mems);
>>  		break;
>> +	case FILE_ISOLCPUS:
>> +		s += cpuset_sprintf_isolcpus(s, count, cs);
>> +		break;
> 
> This patch looks fine, and I think cpuset.effective_cpus and 
> cpuset.isolcpus can be used well together, but will need updating now that 
> commit e8e6d97c9b ("cpuset: use %*pb[l] to print bitmaps including 
> cpumasks and nodemasks") has been merged which reworks this function.

I will take a look at that changeset. It was not in the
tip tree I worked against.

Expect a v2 :)

> It's a little unfortunate, though, that the user sees Cpus_allowed, 
> cpuset.cpus, and cpuset.effective_cpus that include isolcpus and then have 
> to check another cpulist for the isolcpus to see their sched domain, 
> though.

Agreed, but all the alternatives I could think of would break the
userspace API, leaving this as the best way to go.

-- 
All rights reversed

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
@ 2015-02-25  2:15     ` David Rientjes
  0 siblings, 0 replies; 51+ messages in thread
From: David Rientjes @ 2015-02-25  2:15 UTC (permalink / raw)
  To: Rik van Riel
  Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
	Ingo Molnar, Luiz Capitulino, cgroups

On Mon, 23 Feb 2015, riel@redhat.com wrote:

> From: Rik van Riel <riel@redhat.com>
> 
> The previous patch makes the code skip over isolcpus when building
> scheduler load balancing domains. This makes it hard for a user to
> see which of the CPUs in a cpuset are participating in load
> balancing, and which ones are isolated CPUs.
> 
> Add a cpuset.isolcpus file that shows which of the CPUs in a cpuset
> are isolated CPUs.
> 
> This file is read-only for now. In the future we could extend things
> so isolcpus can be changed at run time, for the root (system wide)
> cpuset only.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>
> ---
>  kernel/cpuset.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> index 1ad63fa37cb4..19ad5d3377f8 100644
> --- a/kernel/cpuset.c
> +++ b/kernel/cpuset.c
> @@ -1563,6 +1563,7 @@ typedef enum {
>  	FILE_MEMORY_PRESSURE,
>  	FILE_SPREAD_PAGE,
>  	FILE_SPREAD_SLAB,
> +	FILE_ISOLCPUS,
>  } cpuset_filetype_t;
>  
>  static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
> @@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>  	return retval ?: nbytes;
>  }
>  
> +static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
> +{
> +	cpumask_var_t my_isolated_cpus;
> +	ssize_t count;
> +	

Whitespace.

> +	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
> +		return 0;
> +
> +	cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
> +
> +	count = cpulist_scnprintf(s, pos, my_isolated_cpus);
> +
> +	free_cpumask_var(my_isolated_cpus);
> +
> +	return count;
> +}
> +
>  /*
>   * These ascii lists should be read in a single call, by using a user
>   * buffer large enough to hold the entire map.  If read in smaller
> @@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>  	case FILE_EFFECTIVE_MEMLIST:
>  		s += nodelist_scnprintf(s, count, cs->effective_mems);
>  		break;
> +	case FILE_ISOLCPUS:
> +		s += cpuset_sprintf_isolcpus(s, count, cs);
> +		break;

This patch looks fine, and I think cpuset.effective_cpus and 
cpuset.isolcpus can be used well together, but it will need updating 
now that commit e8e6d97c9b ("cpuset: use %*pb[l] to print bitmaps 
including cpumasks and nodemasks"), which reworks this function, has 
been merged.

It's a little unfortunate, though, that the user sees Cpus_allowed, 
cpuset.cpus, and cpuset.effective_cpus all including the isolcpus, and 
then has to check another cpulist for the isolcpus to see their sched 
domain.

>  	default:
>  		ret = -EINVAL;
>  		goto out_unlock;
> @@ -1906,6 +1927,12 @@ static struct cftype files[] = {
>  		.private = FILE_MEMORY_PRESSURE_ENABLED,
>  	},
>  
> +	{
> +		.name = "isolcpus",
> +		.seq_show = cpuset_common_seq_show,
> +		.private = FILE_ISOLCPUS,
> +	},
> +
>  	{ }	/* terminate */
>  };
>  

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
  2015-02-23 21:45 [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus riel
@ 2015-02-23 21:45 ` riel
  2015-02-25  2:15     ` David Rientjes
  0 siblings, 1 reply; 51+ messages in thread
From: riel @ 2015-02-23 21:45 UTC (permalink / raw)
  To: linux-kernel
  Cc: Rik van Riel, Peter Zijlstra, Clark Williams, Li Zefan,
	Ingo Molnar, Luiz Capitulino, cgroups

From: Rik van Riel <riel@redhat.com>

The previous patch makes the code skip over isolcpus when building
scheduler load balancing domains. This makes it hard for a user to
see which of the CPUs in a cpuset are participating in load
balancing, and which ones are isolated CPUs.

Add a cpuset.isolcpus file that shows which of the CPUs in a cpuset
are isolated CPUs.

This file is read-only for now. In the future we could extend things
so isolcpus can be changed at run time, for the root (system wide)
cpuset only.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
 kernel/cpuset.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 1ad63fa37cb4..19ad5d3377f8 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1563,6 +1563,7 @@ typedef enum {
 	FILE_MEMORY_PRESSURE,
 	FILE_SPREAD_PAGE,
 	FILE_SPREAD_SLAB,
+	FILE_ISOLCPUS,
 } cpuset_filetype_t;
 
 static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
@@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	return retval ?: nbytes;
 }
 
+static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
+{
+	cpumask_var_t my_isolated_cpus;
+	ssize_t count;
+	
+	if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
+		return 0;
+
+	cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
+
+	count = cpulist_scnprintf(s, pos, my_isolated_cpus);
+
+	free_cpumask_var(my_isolated_cpus);
+
+	return count;
+}
+
 /*
  * These ascii lists should be read in a single call, by using a user
  * buffer large enough to hold the entire map.  If read in smaller
@@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
 	case FILE_EFFECTIVE_MEMLIST:
 		s += nodelist_scnprintf(s, count, cs->effective_mems);
 		break;
+	case FILE_ISOLCPUS:
+		s += cpuset_sprintf_isolcpus(s, count, cs);
+		break;
 	default:
 		ret = -EINVAL;
 		goto out_unlock;
@@ -1906,6 +1927,12 @@ static struct cftype files[] = {
 		.private = FILE_MEMORY_PRESSURE_ENABLED,
 	},
 
+	{
+		.name = "isolcpus",
+		.seq_show = cpuset_common_seq_show,
+		.private = FILE_ISOLCPUS,
+	},
+
 	{ }	/* terminate */
 };
 
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2015-03-03  9:55 UTC | newest]

Thread overview: 51+ messages
2015-02-25 16:38 [PATCH -v2 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus riel
2015-02-25 16:38 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
2015-02-27  9:32   ` Peter Zijlstra
2015-02-27  9:32     ` Peter Zijlstra
2015-02-27 17:08     ` [PATCH 3/2] cpusets,isolcpus: document relationship between cpusets & isolcpus Rik van Riel
2015-02-27 17:08       ` Rik van Riel
2015-02-27 21:15       ` David Rientjes
2015-02-28  3:23       ` Zefan Li
2015-02-28  3:23         ` Zefan Li
2015-02-28  3:21   ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets Zefan Li
2015-02-28  3:21     ` Zefan Li
2015-02-25 16:38 ` [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset riel
2015-02-25 21:09   ` David Rientjes
2015-02-25 21:09     ` David Rientjes
2015-02-25 21:21     ` Rik van Riel
2015-02-25 21:32   ` [PATCH v3 " Rik van Riel
2015-02-25 21:32     ` Rik van Riel
2015-02-26 11:05   ` [PATCH " Zefan Li
2015-02-26 11:05     ` Zefan Li
2015-02-26 15:24     ` Rik van Riel
2015-02-26 17:12     ` [PATCH v4 " Rik van Riel
2015-02-26 17:12       ` Rik van Riel
2015-02-28  3:22       ` Zefan Li
2015-02-28  3:22         ` Zefan Li
2015-03-02  6:15       ` Zefan Li
2015-03-02  6:15         ` Zefan Li
2015-03-02  9:12         ` Peter Zijlstra
2015-03-03  9:51           ` Zefan Li
2015-03-03  9:51             ` Zefan Li
2015-03-02  9:09       ` Peter Zijlstra
2015-03-02  9:09         ` Peter Zijlstra
2015-03-02 12:44         ` Mike Galbraith
2015-03-02 14:35           ` Rik van Riel
2015-03-02 14:35             ` Rik van Riel
2015-03-02 14:54             ` Mike Galbraith
2015-03-02 14:54               ` Mike Galbraith
2015-03-02 15:29           ` Tejun Heo
2015-03-02 15:29             ` Tejun Heo
2015-03-02 16:02             ` Mike Galbraith
2015-03-02 16:09               ` Tejun Heo
2015-03-02 16:09                 ` Tejun Heo
2015-03-02 17:35                 ` Mike Galbraith
2015-03-02 17:35                   ` Mike Galbraith
2015-03-02 17:01       ` Tejun Heo
2015-03-02 17:31         ` Tejun Heo
2015-03-02 17:31           ` Tejun Heo
  -- strict thread matches above, loose matches on Subject: below --
2015-02-23 21:45 [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus riel
2015-02-23 21:45 ` [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset riel
2015-02-25  2:15   ` David Rientjes
2015-02-25  2:15     ` David Rientjes
2015-02-25  3:30     ` Rik van Riel
2015-02-25  3:30       ` Rik van Riel
