* [PATCH v2 0/2] Update sched_domains_numa_masks when new cpus are onlined.
@ 2012-09-25 13:12 Tang Chen
2012-09-25 13:12 ` [PATCH v2 1/2] Ensure sched_domains_numa_levels safe in other functions Tang Chen
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Tang Chen @ 2012-09-25 13:12 UTC (permalink / raw)
To: peterz, srivatsa.bhat, mingo, tglx, linux-kernel, x86, linux-numa
Once the sched_domains_numa_masks array is defined, it is never updated.
When a new cpu on a new node is onlined, the corresponding member in
sched_domains_numa_masks is not initialized, and all the masks are 0.
As a result, build_overlap_sched_groups() will initialize a NULL
sched_group for the new cpu on the new node, which will lead to a kernel panic:
[ 3189.403280] Call Trace:
[ 3189.403286] [<ffffffff8106c36f>] warn_slowpath_common+0x7f/0xc0
[ 3189.403289] [<ffffffff8106c3ca>] warn_slowpath_null+0x1a/0x20
[ 3189.403292] [<ffffffff810b1d57>] build_sched_domains+0x467/0x470
[ 3189.403296] [<ffffffff810b2067>] partition_sched_domains+0x307/0x510
[ 3189.403299] [<ffffffff810b1ea2>] ? partition_sched_domains+0x142/0x510
[ 3189.403305] [<ffffffff810fcc93>] cpuset_update_active_cpus+0x83/0x90
[ 3189.403308] [<ffffffff810b22a8>] cpuset_cpu_active+0x38/0x70
[ 3189.403316] [<ffffffff81674b87>] notifier_call_chain+0x67/0x150
[ 3189.403320] [<ffffffff81664647>] ? native_cpu_up+0x18a/0x1b5
[ 3189.403328] [<ffffffff810a044e>] __raw_notifier_call_chain+0xe/0x10
[ 3189.403333] [<ffffffff81070470>] __cpu_notify+0x20/0x40
[ 3189.403337] [<ffffffff8166663e>] _cpu_up+0xe9/0x131
[ 3189.403340] [<ffffffff81666761>] cpu_up+0xdb/0xee
[ 3189.403348] [<ffffffff8165667c>] store_online+0x9c/0xd0
[ 3189.403355] [<ffffffff81437640>] dev_attr_store+0x20/0x30
[ 3189.403361] [<ffffffff8124aa63>] sysfs_write_file+0xa3/0x100
[ 3189.403368] [<ffffffff811ccbe0>] vfs_write+0xd0/0x1a0
[ 3189.403371] [<ffffffff811ccdb4>] sys_write+0x54/0xa0
[ 3189.403375] [<ffffffff81679c69>] system_call_fastpath+0x16/0x1b
[ 3189.403377] ---[ end trace 1e6cf85d0859c941 ]---
[ 3189.403398] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
This patch registers a new notifier with the cpu hotplug notifier chain, and
updates sched_domains_numa_masks every time a cpu is onlined or offlined.
Tang Chen (2):
Ensure sched_domains_numa_levels safe in other functions.
Update sched_domains_numa_masks when new cpus are onlined.
kernel/sched/core.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 69 insertions(+)
--
1.7.10.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [PATCH v2 1/2] Ensure sched_domains_numa_levels safe in other functions.
2012-09-25 13:12 [PATCH v2 0/2] Update sched_domains_numa_masks when new cpus are onlined Tang Chen
@ 2012-09-25 13:12 ` Tang Chen
2012-10-05 13:01 ` [tip:sched/urgent] sched: Ensure 'sched_domains_numa_levels' is safe to use " tip-bot for Tang Chen
2012-09-25 13:12 ` [PATCH v2 2/2] Update sched_domains_numa_masks when new cpus are onlined Tang Chen
2012-10-02 11:33 ` [PATCH v2 0/2] Update sched_domains_numa_masks " Peter Zijlstra
2 siblings, 1 reply; 7+ messages in thread
From: Tang Chen @ 2012-09-25 13:12 UTC (permalink / raw)
To: peterz, srivatsa.bhat, mingo, tglx, linux-kernel, x86, linux-numa
Cc: Tang Chen, Wen Congyang
sched_domains_numa_levels should be temporarily reset to 0 while
sched_init_numa() rebuilds the masks. If allocating memory for the
sched_domains_numa_masks[][] array fails, the array will contain fewer
than 'level' members. This could be dangerous when other functions use
it to iterate over sched_domains_numa_masks[][].
This patch sets sched_domains_numa_levels to 0 before initializing the
sched_domains_numa_masks[][] array, and resets it to 'level' once
sched_domains_numa_masks[][] is fully initialized.
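The guard described above is a general pattern: advertise "zero valid levels" before any allocation can fail, and publish the real count only once every row exists, so a reader iterating up to the advertised count never walks into an unallocated row. A minimal user-space sketch of the same idea (hypothetical names, not the kernel code):

```c
#include <stdlib.h>
#include <assert.h>

static int table_levels;   /* how many rows readers may iterate over */
static int **table;

/* Init-guard pattern: readers trust table_levels, so keep it at 0
 * until every row is fully allocated, then publish the real count. */
static void table_init(int levels)
{
	table_levels = 0;                    /* nothing valid yet */

	table = calloc(levels, sizeof(*table));
	if (!table)
		return;                      /* readers still see 0 levels */

	for (int i = 0; i < levels; i++) {
		table[i] = calloc(4, sizeof(**table));
		if (!table[i])
			return;              /* partial init stays hidden */
	}

	table_levels = levels;               /* publish only when complete */
}
```

If any calloc() fails, table_levels stays 0 and the half-built table is simply never seen by iterating code, which is exactly what the patch arranges for sched_domains_numa_masks[][].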
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
kernel/sched/core.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 649c9f8..3aa306a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6660,6 +6660,17 @@ static void sched_init_numa(void)
* numbers.
*/
+ /*
+ * Here, we should temporarily reset sched_domains_numa_levels to 0.
+ * If it fails to allocate memory for array sched_domains_numa_masks[][],
+ * the array will contain less than 'level' members. This could be
+ * dangerous when we use it to iterate array sched_domains_numa_masks[][]
+ * in other functions.
+ *
+ * We reset it to 'level' at the end of this function.
+ */
+ sched_domains_numa_levels = 0;
+
sched_domains_numa_masks = kzalloc(sizeof(void *) * level, GFP_KERNEL);
if (!sched_domains_numa_masks)
return;
@@ -6714,6 +6725,8 @@ static void sched_init_numa(void)
}
sched_domain_topology = tl;
+
+ sched_domains_numa_levels = level;
}
#else
static inline void sched_init_numa(void)
--
1.7.10.1
* [PATCH v2 2/2] Update sched_domains_numa_masks when new cpus are onlined.
2012-09-25 13:12 [PATCH v2 0/2] Update sched_domains_numa_masks when new cpus are onlined Tang Chen
2012-09-25 13:12 ` [PATCH v2 1/2] Ensure sched_domains_numa_levels safe in other functions Tang Chen
@ 2012-09-25 13:12 ` Tang Chen
2012-10-04 12:30 ` Peter Zijlstra
2012-10-05 13:02 ` [tip:sched/urgent] sched: Update sched_domains_numa_masks[][] " tip-bot for Tang Chen
2012-10-02 11:33 ` [PATCH v2 0/2] Update sched_domains_numa_masks " Peter Zijlstra
2 siblings, 2 replies; 7+ messages in thread
From: Tang Chen @ 2012-09-25 13:12 UTC (permalink / raw)
To: peterz, srivatsa.bhat, mingo, tglx, linux-kernel, x86, linux-numa
Cc: Tang Chen, Wen Congyang
Once the sched_domains_numa_masks array is defined, it is never updated.
When a new cpu on a new node is onlined, the corresponding member in
sched_domains_numa_masks is not initialized, and all the masks are 0.
As a result, build_overlap_sched_groups() will initialize a NULL
sched_group for the new cpu on the new node, which will lead to a kernel panic:
[ 3189.403280] Call Trace:
[ 3189.403286] [<ffffffff8106c36f>] warn_slowpath_common+0x7f/0xc0
[ 3189.403289] [<ffffffff8106c3ca>] warn_slowpath_null+0x1a/0x20
[ 3189.403292] [<ffffffff810b1d57>] build_sched_domains+0x467/0x470
[ 3189.403296] [<ffffffff810b2067>] partition_sched_domains+0x307/0x510
[ 3189.403299] [<ffffffff810b1ea2>] ? partition_sched_domains+0x142/0x510
[ 3189.403305] [<ffffffff810fcc93>] cpuset_update_active_cpus+0x83/0x90
[ 3189.403308] [<ffffffff810b22a8>] cpuset_cpu_active+0x38/0x70
[ 3189.403316] [<ffffffff81674b87>] notifier_call_chain+0x67/0x150
[ 3189.403320] [<ffffffff81664647>] ? native_cpu_up+0x18a/0x1b5
[ 3189.403328] [<ffffffff810a044e>] __raw_notifier_call_chain+0xe/0x10
[ 3189.403333] [<ffffffff81070470>] __cpu_notify+0x20/0x40
[ 3189.403337] [<ffffffff8166663e>] _cpu_up+0xe9/0x131
[ 3189.403340] [<ffffffff81666761>] cpu_up+0xdb/0xee
[ 3189.403348] [<ffffffff8165667c>] store_online+0x9c/0xd0
[ 3189.403355] [<ffffffff81437640>] dev_attr_store+0x20/0x30
[ 3189.403361] [<ffffffff8124aa63>] sysfs_write_file+0xa3/0x100
[ 3189.403368] [<ffffffff811ccbe0>] vfs_write+0xd0/0x1a0
[ 3189.403371] [<ffffffff811ccdb4>] sys_write+0x54/0xa0
[ 3189.403375] [<ffffffff81679c69>] system_call_fastpath+0x16/0x1b
[ 3189.403377] ---[ end trace 1e6cf85d0859c941 ]---
[ 3189.403398] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
This patch registers a new notifier with the cpu hotplug notifier chain, and
updates sched_domains_numa_masks every time a cpu is onlined or offlined.
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
kernel/sched/core.c | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3aa306a..fffc751 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6728,10 +6728,65 @@ static void sched_init_numa(void)
sched_domains_numa_levels = level;
}
+
+static void sched_domains_numa_masks_set(int cpu)
+{
+ int i, j;
+ int node = cpu_to_node(cpu);
+
+ for (i = 0; i < sched_domains_numa_levels; i++) {
+ for (j = 0; j < nr_node_ids; j++) {
+ if (node_distance(j, node) <= sched_domains_numa_distance[i])
+ cpumask_set_cpu(cpu, sched_domains_numa_masks[i][j]);
+ }
+ }
+}
+
+static void sched_domains_numa_masks_clear(int cpu)
+{
+ int i, j;
+ for (i = 0; i < sched_domains_numa_levels; i++) {
+ for (j = 0; j < nr_node_ids; j++)
+ cpumask_clear_cpu(cpu, sched_domains_numa_masks[i][j]);
+ }
+}
+
+/*
+ * Update sched_domains_numa_masks[level][node] array when new cpus
+ * are onlined.
+ */
+static int sched_domains_numa_masks_update(struct notifier_block *nfb,
+ unsigned long action,
+ void *hcpu)
+{
+ int cpu = (int)hcpu;
+
+ switch (action & ~CPU_TASKS_FROZEN) {
+ case CPU_ONLINE:
+ sched_domains_numa_masks_set(cpu);
+ break;
+
+ case CPU_DEAD:
+ sched_domains_numa_masks_clear(cpu);
+ break;
+
+ default:
+ return NOTIFY_DONE;
+ }
+
+ return NOTIFY_OK;
+}
#else
static inline void sched_init_numa(void)
{
}
+
+static int sched_domains_numa_masks_update(struct notifier_block *nfb,
+ unsigned long action,
+ void *hcpu)
+{
+ return 0;
+}
#endif /* CONFIG_NUMA */
static int __sdt_alloc(const struct cpumask *cpu_map)
@@ -7180,6 +7235,7 @@ void __init sched_init_smp(void)
mutex_unlock(&sched_domains_mutex);
put_online_cpus();
+ hotcpu_notifier(sched_domains_numa_masks_update, CPU_PRI_SCHED_ACTIVE);
hotcpu_notifier(cpuset_cpu_active, CPU_PRI_CPUSET_ACTIVE);
hotcpu_notifier(cpuset_cpu_inactive, CPU_PRI_CPUSET_INACTIVE);
--
1.7.10.1
* Re: [PATCH v2 0/2] Update sched_domains_numa_masks when new cpus are onlined.
2012-09-25 13:12 [PATCH v2 0/2] Update sched_domains_numa_masks when new cpus are onlined Tang Chen
2012-09-25 13:12 ` [PATCH v2 1/2] Ensure sched_domains_numa_levels safe in other functions Tang Chen
2012-09-25 13:12 ` [PATCH v2 2/2] Update sched_domains_numa_masks when new cpus are onlined Tang Chen
@ 2012-10-02 11:33 ` Peter Zijlstra
2 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2012-10-02 11:33 UTC (permalink / raw)
To: Tang Chen; +Cc: srivatsa.bhat, mingo, tglx, linux-kernel, x86, linux-numa
On Tue, 2012-09-25 at 21:12 +0800, Tang Chen wrote:
> Tang Chen (2):
> Ensure sched_domains_numa_levels safe in other functions.
> Update sched_domains_numa_masks when new cpus are onlined.
>
> kernel/sched/core.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 69 insertions(+)
Thanks!
* Re: [PATCH v2 2/2] Update sched_domains_numa_masks when new cpus are onlined.
2012-09-25 13:12 ` [PATCH v2 2/2] Update sched_domains_numa_masks when new cpus are onlined Tang Chen
@ 2012-10-04 12:30 ` Peter Zijlstra
2012-10-05 13:02 ` [tip:sched/urgent] sched: Update sched_domains_numa_masks[][] " tip-bot for Tang Chen
1 sibling, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2012-10-04 12:30 UTC (permalink / raw)
To: Tang Chen
Cc: srivatsa.bhat, mingo, tglx, linux-kernel, x86, linux-numa, Wen Congyang
On Tue, 2012-09-25 at 21:12 +0800, Tang Chen wrote:
> +static int sched_domains_numa_masks_update(struct notifier_block
> *nfb,
> + unsigned long action,
> + void *hcpu)
> +{
> + int cpu = (int)hcpu;
kernel/sched/core.c: In function ‘sched_domains_numa_masks_update’:
kernel/sched/core.c:6299:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
I've changed it to:
int cpu = (long)hcpu;
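The warning fires because on LP64 targets a pointer is 8 bytes while int is 4, so a direct pointer-to-int cast truncates. Casting through long (which is pointer-width on Linux 64-bit ABIs) keeps both the compiler and the low bits happy. A user-space sketch of the same fix (hypothetical helper name, not kernel code):

```c
#include <assert.h>

/* Recover a small integer (a cpu number) that was smuggled through a
 * void * notifier argument. A direct (int) cast triggers
 * -Wpointer-to-int-cast on LP64; casting through long is the same
 * width as the pointer, and the implicit long -> int narrowing is
 * well-defined for small cpu numbers. */
static int hcpu_to_cpu(void *hcpu)
{
	/* int cpu = (int)hcpu;  -- warns: cast from pointer to integer
	 *                          of different size on 64-bit targets */
	return (long)hcpu;
}
```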
* [tip:sched/urgent] sched: Ensure 'sched_domains_numa_levels' is safe to use in other functions
2012-09-25 13:12 ` [PATCH v2 1/2] Ensure sched_domains_numa_levels safe in other functions Tang Chen
@ 2012-10-05 13:01 ` tip-bot for Tang Chen
0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Tang Chen @ 2012-10-05 13:01 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, wency, a.p.zijlstra, tangchen, tglx
Commit-ID: 5f7865f3e44db4c73fdc454fb2af40806212a7ca
Gitweb: http://git.kernel.org/tip/5f7865f3e44db4c73fdc454fb2af40806212a7ca
Author: Tang Chen <tangchen@cn.fujitsu.com>
AuthorDate: Tue, 25 Sep 2012 21:12:30 +0800
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 5 Oct 2012 13:54:46 +0200
sched: Ensure 'sched_domains_numa_levels' is safe to use in other functions
'sched_domains_numa_levels' should be temporarily reset to 0 while
sched_init_numa() rebuilds the masks. If allocating memory for the
sched_domains_numa_masks[][] array fails, the array will contain fewer
than 'level' members. This could be dangerous when other functions use
it to iterate over sched_domains_numa_masks[][].
This patch sets sched_domains_numa_levels to 0 before initializing the
sched_domains_numa_masks[][] array, and resets it to 'level' once
sched_domains_numa_masks[][] is fully initialized.
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1348578751-16904-2-git-send-email-tangchen@cn.fujitsu.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/core.c | 13 +++++++++++++
1 files changed, 13 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c177472..f895fdd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6122,6 +6122,17 @@ static void sched_init_numa(void)
* numbers.
*/
+ /*
+ * Here, we should temporarily reset sched_domains_numa_levels to 0.
+ * If it fails to allocate memory for array sched_domains_numa_masks[][],
+ * the array will contain less than 'level' members. This could be
+ * dangerous when we use it to iterate array sched_domains_numa_masks[][]
+ * in other functions.
+ *
+ * We reset it to 'level' at the end of this function.
+ */
+ sched_domains_numa_levels = 0;
+
sched_domains_numa_masks = kzalloc(sizeof(void *) * level, GFP_KERNEL);
if (!sched_domains_numa_masks)
return;
@@ -6176,6 +6187,8 @@ static void sched_init_numa(void)
}
sched_domain_topology = tl;
+
+ sched_domains_numa_levels = level;
}
#else
static inline void sched_init_numa(void)
* [tip:sched/urgent] sched: Update sched_domains_numa_masks[][] when new cpus are onlined
2012-09-25 13:12 ` [PATCH v2 2/2] Update sched_domains_numa_masks when new cpus are onlined Tang Chen
2012-10-04 12:30 ` Peter Zijlstra
@ 2012-10-05 13:02 ` tip-bot for Tang Chen
1 sibling, 0 replies; 7+ messages in thread
From: tip-bot for Tang Chen @ 2012-10-05 13:02 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, wency, a.p.zijlstra, tangchen, tglx
Commit-ID: 301a5cba2887d1f640e6d5184b05a6d7132017d5
Gitweb: http://git.kernel.org/tip/301a5cba2887d1f640e6d5184b05a6d7132017d5
Author: Tang Chen <tangchen@cn.fujitsu.com>
AuthorDate: Tue, 25 Sep 2012 21:12:31 +0800
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 5 Oct 2012 13:54:48 +0200
sched: Update sched_domains_numa_masks[][] when new cpus are onlined
Once the sched_domains_numa_masks[][] array is defined, it is never updated.
When a new cpu on a new node is onlined, the corresponding member in
sched_domains_numa_masks[][] is not initialized, and all the masks are 0.
As a result, build_overlap_sched_groups() will initialize a NULL
sched_group for the new cpu on the new node, which will lead to a kernel panic:
[ 3189.403280] Call Trace:
[ 3189.403286] [<ffffffff8106c36f>] warn_slowpath_common+0x7f/0xc0
[ 3189.403289] [<ffffffff8106c3ca>] warn_slowpath_null+0x1a/0x20
[ 3189.403292] [<ffffffff810b1d57>] build_sched_domains+0x467/0x470
[ 3189.403296] [<ffffffff810b2067>] partition_sched_domains+0x307/0x510
[ 3189.403299] [<ffffffff810b1ea2>] ? partition_sched_domains+0x142/0x510
[ 3189.403305] [<ffffffff810fcc93>] cpuset_update_active_cpus+0x83/0x90
[ 3189.403308] [<ffffffff810b22a8>] cpuset_cpu_active+0x38/0x70
[ 3189.403316] [<ffffffff81674b87>] notifier_call_chain+0x67/0x150
[ 3189.403320] [<ffffffff81664647>] ? native_cpu_up+0x18a/0x1b5
[ 3189.403328] [<ffffffff810a044e>] __raw_notifier_call_chain+0xe/0x10
[ 3189.403333] [<ffffffff81070470>] __cpu_notify+0x20/0x40
[ 3189.403337] [<ffffffff8166663e>] _cpu_up+0xe9/0x131
[ 3189.403340] [<ffffffff81666761>] cpu_up+0xdb/0xee
[ 3189.403348] [<ffffffff8165667c>] store_online+0x9c/0xd0
[ 3189.403355] [<ffffffff81437640>] dev_attr_store+0x20/0x30
[ 3189.403361] [<ffffffff8124aa63>] sysfs_write_file+0xa3/0x100
[ 3189.403368] [<ffffffff811ccbe0>] vfs_write+0xd0/0x1a0
[ 3189.403371] [<ffffffff811ccdb4>] sys_write+0x54/0xa0
[ 3189.403375] [<ffffffff81679c69>] system_call_fastpath+0x16/0x1b
[ 3189.403377] ---[ end trace 1e6cf85d0859c941 ]---
[ 3189.403398] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
This patch registers a new notifier with the cpu hotplug notifier chain, and
updates sched_domains_numa_masks every time a cpu is onlined or offlined.
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
[ fixed compile warning ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1348578751-16904-3-git-send-email-tangchen@cn.fujitsu.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/core.c | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 56 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f895fdd..8322d73 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6190,10 +6190,65 @@ static void sched_init_numa(void)
sched_domains_numa_levels = level;
}
+
+static void sched_domains_numa_masks_set(int cpu)
+{
+ int i, j;
+ int node = cpu_to_node(cpu);
+
+ for (i = 0; i < sched_domains_numa_levels; i++) {
+ for (j = 0; j < nr_node_ids; j++) {
+ if (node_distance(j, node) <= sched_domains_numa_distance[i])
+ cpumask_set_cpu(cpu, sched_domains_numa_masks[i][j]);
+ }
+ }
+}
+
+static void sched_domains_numa_masks_clear(int cpu)
+{
+ int i, j;
+ for (i = 0; i < sched_domains_numa_levels; i++) {
+ for (j = 0; j < nr_node_ids; j++)
+ cpumask_clear_cpu(cpu, sched_domains_numa_masks[i][j]);
+ }
+}
+
+/*
+ * Update sched_domains_numa_masks[level][node] array when new cpus
+ * are onlined.
+ */
+static int sched_domains_numa_masks_update(struct notifier_block *nfb,
+ unsigned long action,
+ void *hcpu)
+{
+ int cpu = (long)hcpu;
+
+ switch (action & ~CPU_TASKS_FROZEN) {
+ case CPU_ONLINE:
+ sched_domains_numa_masks_set(cpu);
+ break;
+
+ case CPU_DEAD:
+ sched_domains_numa_masks_clear(cpu);
+ break;
+
+ default:
+ return NOTIFY_DONE;
+ }
+
+ return NOTIFY_OK;
+}
#else
static inline void sched_init_numa(void)
{
}
+
+static int sched_domains_numa_masks_update(struct notifier_block *nfb,
+ unsigned long action,
+ void *hcpu)
+{
+ return 0;
+}
#endif /* CONFIG_NUMA */
static int __sdt_alloc(const struct cpumask *cpu_map)
@@ -6642,6 +6697,7 @@ void __init sched_init_smp(void)
mutex_unlock(&sched_domains_mutex);
put_online_cpus();
+ hotcpu_notifier(sched_domains_numa_masks_update, CPU_PRI_SCHED_ACTIVE);
hotcpu_notifier(cpuset_cpu_active, CPU_PRI_CPUSET_ACTIVE);
hotcpu_notifier(cpuset_cpu_inactive, CPU_PRI_CPUSET_INACTIVE);