linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/3] Disable sched_numa_balancing on uma systems
@ 2015-08-11 11:00 Srikar Dronamraju
  2015-08-11 11:00 ` [PATCH v2 1/3] sched/numa: Rename numabalancing_enabled to sched_numa_balancing Srikar Dronamraju
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Srikar Dronamraju @ 2015-08-11 11:00 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: linux-kernel, srikar, Rik van Riel, Mel Gorman

The recent commit 2a1ed24 ("sched/numa: Prefer NUMA hotness over cache
hotness") sets the sched feature NUMA to true. This can enable NUMA hinting
faults on a UMA system.

This patchset ensures that NUMA hinting faults occur only on a NUMA system
by setting/resetting sched_numa_balancing.

This patchset
- Renames numabalancing_enabled to sched_numa_balancing
- Makes sched_numa_balancing common to CONFIG_SCHED_DEBUG and
  !CONFIG_SCHED_DEBUG. Earlier it was only in !CONFIG_SCHED_DEBUG
- Checks for sched_numa_balancing instead of sched_feat(NUMA)
- Removes NUMA sched feature
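
For context, the enable/disable decision itself is made at boot from the
memory topology. A simplified sketch of that boot-time check (loosely
modelled on check_numabalancing_enable() in mm/mempolicy.c; the
numa_balancing= command-line override is elided, so this is not the exact
upstream code):

	/* sketch: enable NUMA balancing only when more than one node is online */
	static void __init check_numabalancing_enable(void)
	{
		bool enable = IS_ENABLED(CONFIG_NUMA_BALANCING_DEFAULT_ENABLED);

		if (num_online_nodes() > 1)
			set_numabalancing_state(enable);
	}

Absent such an override, a UMA machine never turns the flag on, so the
scheduler side only has to honour it.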

Srikar Dronamraju (3):
  sched/numa: Rename numabalancing_enabled to sched_numa_balancing
  sched/numa: Disable sched_numa_balancing on uma systems
  sched/numa: Remove NUMA sched feature

 kernel/sched/core.c     | 16 +++-------------
 kernel/sched/fair.c     |  8 ++++----
 kernel/sched/features.h | 16 ----------------
 kernel/sched/sched.h    | 10 ++--------
 4 files changed, 9 insertions(+), 41 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v2 1/3] sched/numa: Rename numabalancing_enabled to sched_numa_balancing
  2015-08-11 11:00 [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Srikar Dronamraju
@ 2015-08-11 11:00 ` Srikar Dronamraju
  2015-09-13 11:01   ` [tip:sched/core] " tip-bot for Srikar Dronamraju
  2015-08-11 11:00 ` [PATCH v2 2/3] sched/numa: Disable sched_numa_balancing on uma systems Srikar Dronamraju
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Srikar Dronamraju @ 2015-08-11 11:00 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: linux-kernel, srikar, Rik van Riel, Mel Gorman

Simple rename of the numabalancing_enabled variable to sched_numa_balancing.
No functional changes.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 kernel/sched/core.c  | 6 +++---
 kernel/sched/fair.c  | 4 ++--
 kernel/sched/sched.h | 6 +++---
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 655557d..71c1d25 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2067,11 +2067,11 @@ void set_numabalancing_state(bool enabled)
 		sched_feat_set("NO_NUMA");
 }
 #else
-__read_mostly bool numabalancing_enabled;
+__read_mostly bool sched_numa_balancing;
 
 void set_numabalancing_state(bool enabled)
 {
-	numabalancing_enabled = enabled;
+	sched_numa_balancing = enabled;
 }
 #endif /* CONFIG_SCHED_DEBUG */
 
@@ -2081,7 +2081,7 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 {
 	struct ctl_table t;
 	int err;
-	int state = numabalancing_enabled;
+	int state = sched_numa_balancing;
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 858b94a..3ec9b0b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2069,7 +2069,7 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	int local = !!(flags & TNF_FAULT_LOCAL);
 	int priv;
 
-	if (!numabalancing_enabled)
+	if (!sched_numa_balancing)
 		return;
 
 	/* for example, ksmd faulting in a user's mm */
@@ -7810,7 +7810,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		entity_tick(cfs_rq, se, queued);
 	}
 
-	if (numabalancing_enabled)
+	if (sched_numa_balancing)
 		task_tick_numa(rq, curr);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 22ccc55..a02bd8d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1006,13 +1006,13 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 #ifdef CONFIG_NUMA_BALANCING
 #define sched_feat_numa(x) sched_feat(x)
 #ifdef CONFIG_SCHED_DEBUG
-#define numabalancing_enabled sched_feat_numa(NUMA)
+#define sched_numa_balancing sched_feat_numa(NUMA)
 #else
-extern bool numabalancing_enabled;
+extern bool sched_numa_balancing;
 #endif /* CONFIG_SCHED_DEBUG */
 #else
 #define sched_feat_numa(x) (0)
-#define numabalancing_enabled (0)
+#define sched_numa_balancing (0)
 #endif /* CONFIG_NUMA_BALANCING */
 
 static inline u64 global_rt_period(void)
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 2/3] sched/numa: Disable sched_numa_balancing on uma systems
  2015-08-11 11:00 [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Srikar Dronamraju
  2015-08-11 11:00 ` [PATCH v2 1/3] sched/numa: Rename numabalancing_enabled to sched_numa_balancing Srikar Dronamraju
@ 2015-08-11 11:00 ` Srikar Dronamraju
  2015-09-13 11:02   ` [tip:sched/core] sched/numa: Disable sched_numa_balancing on UMA systems tip-bot for Srikar Dronamraju
  2015-08-11 11:00 ` [PATCH v2 3/3] sched/numa: Remove NUMA sched feature Srikar Dronamraju
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Srikar Dronamraju @ 2015-08-11 11:00 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: linux-kernel, srikar, Rik van Riel, Mel Gorman

Commit 2a1ed24 ("sched/numa: Prefer NUMA hotness over cache hotness")
sets the sched feature NUMA to true. However, this can enable NUMA hinting
faults on a UMA system.

This commit ensures that NUMA hinting faults occur only on a NUMA system
by setting/resetting sched_numa_balancing.

This commit
- Makes sched_numa_balancing common to CONFIG_SCHED_DEBUG and
  !CONFIG_SCHED_DEBUG. Earlier it was only in !CONFIG_SCHED_DEBUG
- Checks for sched_numa_balancing instead of sched_feat(NUMA)

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 kernel/sched/core.c  | 14 +++++---------
 kernel/sched/fair.c  |  4 ++--
 kernel/sched/sched.h |  6 ------
 3 files changed, 7 insertions(+), 17 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 71c1d25..7cbdf44 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2058,22 +2058,18 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
-#ifdef CONFIG_SCHED_DEBUG
+__read_mostly bool sched_numa_balancing;
+
 void set_numabalancing_state(bool enabled)
 {
+	sched_numa_balancing = enabled;
+#ifdef CONFIG_SCHED_DEBUG
 	if (enabled)
 		sched_feat_set("NUMA");
 	else
 		sched_feat_set("NO_NUMA");
-}
-#else
-__read_mostly bool sched_numa_balancing;
-
-void set_numabalancing_state(bool enabled)
-{
-	sched_numa_balancing = enabled;
-}
 #endif /* CONFIG_SCHED_DEBUG */
+}
 
 #ifdef CONFIG_PROC_SYSCTL
 int sysctl_numa_balancing(struct ctl_table *table, int write,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3ec9b0b..f67f2bc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5524,10 +5524,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	unsigned long src_faults, dst_faults;
 	int src_nid, dst_nid;
 
-	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
+	if (!sched_numa_balancing)
 		return -1;
 
-	if (!sched_feat(NUMA))
+	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
 		return -1;
 
 	src_nid = cpu_to_node(env->src_cpu);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a02bd8d..953be0f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1004,14 +1004,8 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 #endif /* SCHED_DEBUG && HAVE_JUMP_LABEL */
 
 #ifdef CONFIG_NUMA_BALANCING
-#define sched_feat_numa(x) sched_feat(x)
-#ifdef CONFIG_SCHED_DEBUG
-#define sched_numa_balancing sched_feat_numa(NUMA)
-#else
 extern bool sched_numa_balancing;
-#endif /* CONFIG_SCHED_DEBUG */
 #else
-#define sched_feat_numa(x) (0)
 #define sched_numa_balancing (0)
 #endif /* CONFIG_NUMA_BALANCING */
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 3/3] sched/numa: Remove NUMA sched feature
  2015-08-11 11:00 [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Srikar Dronamraju
  2015-08-11 11:00 ` [PATCH v2 1/3] sched/numa: Rename numabalancing_enabled to sched_numa_balancing Srikar Dronamraju
  2015-08-11 11:00 ` [PATCH v2 2/3] sched/numa: Disable sched_numa_balancing on uma systems Srikar Dronamraju
@ 2015-08-11 11:00 ` Srikar Dronamraju
  2015-08-11 11:15   ` Peter Zijlstra
  2015-09-13 11:02   ` [tip:sched/core] sched/numa: Remove the NUMA sched_feature tip-bot for Srikar Dronamraju
  2015-08-11 16:24 ` [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch Srikar Dronamraju
  2015-09-02  9:33 ` [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Peter Zijlstra
  4 siblings, 2 replies; 14+ messages in thread
From: Srikar Dronamraju @ 2015-08-11 11:00 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: linux-kernel, srikar, Rik van Riel, Mel Gorman

Variable sched_numa_balancing is available for both CONFIG_SCHED_DEBUG
and !CONFIG_SCHED_DEBUG. All code paths now check for
sched_numa_balancing. Hence remove sched_feat(NUMA).

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 kernel/sched/core.c     |  6 ------
 kernel/sched/features.h | 16 ----------------
 2 files changed, 22 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7cbdf44..d02570b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2063,12 +2063,6 @@ __read_mostly bool sched_numa_balancing;
 void set_numabalancing_state(bool enabled)
 {
 	sched_numa_balancing = enabled;
-#ifdef CONFIG_SCHED_DEBUG
-	if (enabled)
-		sched_feat_set("NUMA");
-	else
-		sched_feat_set("NO_NUMA");
-#endif /* CONFIG_SCHED_DEBUG */
 }
 
 #ifdef CONFIG_PROC_SYSCTL
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 83a50e7..8baa708 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -72,19 +72,3 @@ SCHED_FEAT(RT_PUSH_IPI, true)
 SCHED_FEAT(FORCE_SD_OVERLAP, false)
 SCHED_FEAT(RT_RUNTIME_SHARE, true)
 SCHED_FEAT(LB_MIN, false)
-
-/*
- * Apply the automatic NUMA scheduling policy. Enabled automatically
- * at runtime if running on a NUMA machine. Can be controlled via
- * numa_balancing=
- */
-#ifdef CONFIG_NUMA_BALANCING
-
-/*
- * NUMA will favor moving tasks towards nodes where a higher number of
- * hinting faults are recorded during active load balancing. It will
- * resist moving tasks towards nodes where a lower number of hinting
- * faults have been recorded.
- */
-SCHED_FEAT(NUMA,	true)
-#endif
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 3/3] sched/numa: Remove NUMA sched feature
  2015-08-11 11:00 ` [PATCH v2 3/3] sched/numa: Remove NUMA sched feature Srikar Dronamraju
@ 2015-08-11 11:15   ` Peter Zijlstra
  2015-09-13 11:02   ` [tip:sched/core] sched/numa: Remove the NUMA sched_feature tip-bot for Srikar Dronamraju
  1 sibling, 0 replies; 14+ messages in thread
From: Peter Zijlstra @ 2015-08-11 11:15 UTC (permalink / raw)
  To: Srikar Dronamraju; +Cc: Ingo Molnar, linux-kernel, Rik van Riel, Mel Gorman

On Tue, Aug 11, 2015 at 04:30:13PM +0530, Srikar Dronamraju wrote:
> Variable sched_numa_balancing is available for both CONFIG_SCHED_DEBUG
> and !CONFIG_SCHED_DEBUG. All code paths now check for
> sched_numa_balancing. Hence remove sched_feat(NUMA).
> 
> Suggested-by: Ingo Molnar <mingo@kernel.org>
> Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
> ---
>  kernel/sched/core.c     |  6 ------
>  kernel/sched/features.h | 16 ----------------
>  2 files changed, 22 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 7cbdf44..d02570b 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2063,12 +2063,6 @@ __read_mostly bool sched_numa_balancing;
>  void set_numabalancing_state(bool enabled)
>  {
>  	sched_numa_balancing = enabled;
> -#ifdef CONFIG_SCHED_DEBUG
> -	if (enabled)
> -		sched_feat_set("NUMA");
> -	else
> -		sched_feat_set("NO_NUMA");
> -#endif /* CONFIG_SCHED_DEBUG */
>  }

Could you at least replace sched_numa_balancing with a static_key
thingy?
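
A rough sketch of what such a conversion buys, assuming the static_branch_*()
interface from the new static_keys rework (patch 4/4 below does the actual
conversion):

	/* before: a __read_mostly bool, loaded and tested on every invocation */
	if (!sched_numa_balancing)
		return;

	/* after: the test becomes a runtime-patched jump/nop via a static key,
	 * so the disabled case costs almost nothing on hot paths */
	if (!static_branch_likely(&sched_numa_balancing))
		return;

The enabled/disabled decision is then patched into the instruction stream
when the key is flipped, rather than re-read from memory on every invocation.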

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch
  2015-08-11 11:00 [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Srikar Dronamraju
                   ` (2 preceding siblings ...)
  2015-08-11 11:00 ` [PATCH v2 3/3] sched/numa: Remove NUMA sched feature Srikar Dronamraju
@ 2015-08-11 16:24 ` Srikar Dronamraju
  2015-09-06  4:01   ` Wanpeng Li
  2015-09-13 11:03   ` [tip:sched/core] " tip-bot for Srikar Dronamraju
  2015-09-02  9:33 ` [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Peter Zijlstra
  4 siblings, 2 replies; 14+ messages in thread
From: Srikar Dronamraju @ 2015-08-11 16:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: linux-kernel, srikar, Rik van Riel, Mel Gorman

The sched_numa_balancing variable toggles the numa_balancing feature. Hence,
move from a simple read-mostly variable to a more apt static_branch.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 kernel/sched/core.c  | 10 +++++++---
 kernel/sched/fair.c  |  6 +++---
 kernel/sched/sched.h |  6 +-----
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d02570b..5f330d5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2057,12 +2057,16 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 #endif /* CONFIG_NUMA_BALANCING */
 }
 
+DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
+
 #ifdef CONFIG_NUMA_BALANCING
-__read_mostly bool sched_numa_balancing;
 
 void set_numabalancing_state(bool enabled)
 {
-	sched_numa_balancing = enabled;
+	if (enabled)
+		static_branch_enable(&sched_numa_balancing);
+	else
+		static_branch_disable(&sched_numa_balancing);
 }
 
 #ifdef CONFIG_PROC_SYSCTL
@@ -2071,7 +2075,7 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 {
 	struct ctl_table t;
 	int err;
-	int state = sched_numa_balancing;
+	int state = static_branch_likely(&sched_numa_balancing);
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f67f2bc..aab730e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2069,7 +2069,7 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	int local = !!(flags & TNF_FAULT_LOCAL);
 	int priv;
 
-	if (!sched_numa_balancing)
+	if (!static_branch_likely(&sched_numa_balancing))
 		return;
 
 	/* for example, ksmd faulting in a user's mm */
@@ -5524,7 +5524,7 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	unsigned long src_faults, dst_faults;
 	int src_nid, dst_nid;
 
-	if (!sched_numa_balancing)
+	if (!static_branch_likely(&sched_numa_balancing))
 		return -1;
 
 	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
@@ -7810,7 +7810,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		entity_tick(cfs_rq, se, queued);
 	}
 
-	if (sched_numa_balancing)
+	if (!static_branch_unlikely(&sched_numa_balancing))
 		task_tick_numa(rq, curr);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 953be0f..72cae5d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1003,11 +1003,7 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 #define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
 #endif /* SCHED_DEBUG && HAVE_JUMP_LABEL */
 
-#ifdef CONFIG_NUMA_BALANCING
-extern bool sched_numa_balancing;
-#else
-#define sched_numa_balancing (0)
-#endif /* CONFIG_NUMA_BALANCING */
+extern struct static_key_false sched_numa_balancing;
 
 static inline u64 global_rt_period(void)
 {
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 0/3] Disable sched_numa_balancing on uma systems
  2015-08-11 11:00 [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Srikar Dronamraju
                   ` (3 preceding siblings ...)
  2015-08-11 16:24 ` [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch Srikar Dronamraju
@ 2015-09-02  9:33 ` Peter Zijlstra
  4 siblings, 0 replies; 14+ messages in thread
From: Peter Zijlstra @ 2015-09-02  9:33 UTC (permalink / raw)
  To: Srikar Dronamraju; +Cc: Ingo Molnar, linux-kernel, Rik van Riel, Mel Gorman

On Tue, Aug 11, 2015 at 04:30:10PM +0530, Srikar Dronamraju wrote:
> With recent commit 2a1ed24 ("sched/numa: Prefer NUMA hotness over cache
> hotness") sets sched feature NUMA to true. This can enable numa hinting
> faults on a uma system.
> 
> This patchset ensures that numa hinting faults occur only on a numa system
> by setting/resetting sched_numa_balancing.
> 
> This patchset
> - Renames numabalancing_enabled to sched_numa_balancing
> - Makes sched_numa_balancing common to CONFIG_SCHED_DEBUG and
>   !CONFIG_SCHED_DEBUG. Earlier it was only in !CONFIG_SCHED_DEBUG
> - Checks for sched_numa_balancing instead of sched_feat(NUMA)
> - Removes NUMA sched feature
> 
> Srikar Dronamraju (3):
>   sched/numa: Rename numabalancing_enabled to sched_numa_balancing
>   sched/numa: Disable sched_numa_balancing on uma systems
>   sched/numa: Remove NUMA sched feature
> 
>  kernel/sched/core.c     | 16 +++-------------
>  kernel/sched/fair.c     |  8 ++++----
>  kernel/sched/features.h | 16 ----------------
>  kernel/sched/sched.h    | 10 ++--------
>  4 files changed, 9 insertions(+), 41 deletions(-)

Thanks!

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch
  2015-08-11 16:24 ` [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch Srikar Dronamraju
@ 2015-09-06  4:01   ` Wanpeng Li
  2015-09-07  4:10     ` Srikar Dronamraju
  2015-09-13 11:03   ` [tip:sched/core] " tip-bot for Srikar Dronamraju
  1 sibling, 1 reply; 14+ messages in thread
From: Wanpeng Li @ 2015-09-06  4:01 UTC (permalink / raw)
  To: Srikar Dronamraju, Ingo Molnar, Peter Zijlstra
  Cc: linux-kernel, Rik van Riel, Mel Gorman

Hi Srikar,
On 8/12/15 12:24 AM, Srikar Dronamraju wrote:
> Variable sched_numa_balancing toggles numa_balancing feature. Hence
> moving from a simple read mostly variable to a more apt static_branch.
>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
>
This commit breaks Peter's queue:

kernel/sched/fair.c: In function ‘task_numa_fault’:
kernel/sched/fair.c:2072: error: implicit declaration of function 
‘static_branch_likely’
kernel/sched/fair.c: In function ‘task_tick_fair’:
kernel/sched/fair.c:7867: error: implicit declaration of function 
‘static_branch_unlikely’
make[1]: *** [kernel/sched/fair.o] Error 1
make[1]: *** Waiting for unfinished jobs....
kernel/sched/core.c:2116: warning: data definition has no type or 
storage class
kernel/sched/core.c:2116: error: type defaults to ‘int’ in declaration 
of ‘DEFINE_STATIC_KEY_FALSE’
kernel/sched/core.c:2116: warning: parameter names (without types) in 
function declaration
kernel/sched/core.c: In function ‘set_numabalancing_state’:
kernel/sched/core.c:2123: error: implicit declaration of function 
‘static_branch_enable’
kernel/sched/core.c:2125: error: implicit declaration of function 
‘static_branch_disable’
kernel/sched/core.c: In function ‘sysctl_numa_balancing’:
kernel/sched/core.c:2134: error: implicit declaration of function 
‘static_branch_likely’
make[1]: *** [kernel/sched/core.o] Error 1
make: *** [kernel/sched/] Error 2

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch
  2015-09-06  4:01   ` Wanpeng Li
@ 2015-09-07  4:10     ` Srikar Dronamraju
  2015-09-07  5:21       ` Wanpeng Li
  0 siblings, 1 reply; 14+ messages in thread
From: Srikar Dronamraju @ 2015-09-07  4:10 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Rik van Riel, Mel Gorman

* Wanpeng Li <wanpeng.li@hotmail.com> [2015-09-06 12:01:10]:


Hi Wanpeng Li, 


> Hi Srikar,
> On 8/12/15 12:24 AM, Srikar Dronamraju wrote:
> >Variable sched_numa_balancing toggles numa_balancing feature. Hence
> >moving from a simple read mostly variable to a more apt static_branch.
> >
> >Suggested-by: Peter Zijlstra <peterz@infradead.org>
> >Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
> >
> This commit breaks Peter's queue:
> 
> kernel/sched/fair.c: In function ‘task_numa_fault’:
> kernel/sched/fair.c:2072: error: implicit declaration of function
> ‘static_branch_likely’
> kernel/sched/fair.c: In function ‘task_tick_fair’:
> kernel/sched/fair.c:7867: error: implicit declaration of function
> ‘static_branch_unlikely’
> make[1]: *** [kernel/sched/fair.o] Error 1
> make[1]: *** Waiting for unfinished jobs....
> kernel/sched/core.c:2116: warning: data definition has no type or storage
> class

Thanks for reporting.  Can you please confirm if you have the commit
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=11276d5306b8e5b438a36bbff855fe792d7eaa61

From 11276d5306b8e5b438a36bbff855fe792d7eaa61 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Fri, 24 Jul 2015 15:09:55 +0200
Subject: [PATCH] locking/static_keys: Add a new static_key interface
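
For reference, a from-memory sketch of the interface that commit adds (see
include/linux/jump_label.h for the authoritative definitions); these are the
helpers the v2 4/4 patch relies on, so without that commit they are all
undeclared, which matches the implicit-declaration errors quoted above:

	#include <linux/jump_label.h>

	/* hypothetical example key, not part of this series */
	DEFINE_STATIC_KEY_FALSE(example_key);

	static void example_set(bool enabled)
	{
		if (enabled)
			static_branch_enable(&example_key);
		else
			static_branch_disable(&example_key);
	}

	static bool example_get(void)
	{
		/* _likely()/_unlikely() both return the key's state; they only
		 * differ in which side the compiler lays out as the fast path */
		return static_branch_likely(&example_key);
	}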


-- 
Thanks and Regards
Srikar Dronamraju


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch
  2015-09-07  4:10     ` Srikar Dronamraju
@ 2015-09-07  5:21       ` Wanpeng Li
  0 siblings, 0 replies; 14+ messages in thread
From: Wanpeng Li @ 2015-09-07  5:21 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Ingo Molnar, Peter Zijlstra, linux-kernel, Rik van Riel, Mel Gorman

On 9/7/15 12:10 PM, Srikar Dronamraju wrote:
> * Wanpeng Li <wanpeng.li@hotmail.com> [2015-09-06 12:01:10]:
>
>
> Hi Wanpeng Li,
>
>
>> Hi Srikar,
>> On 8/12/15 12:24 AM, Srikar Dronamraju wrote:
>>> Variable sched_numa_balancing toggles numa_balancing feature. Hence
>>> moving from a simple read mostly variable to a more apt static_branch.
>>>
>>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>>> Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
>>>
>> This commit breaks Peter's queue:
>>
>> kernel/sched/fair.c: In function ‘task_numa_fault’:
>> kernel/sched/fair.c:2072: error: implicit declaration of function
>> ‘static_branch_likely’
>> kernel/sched/fair.c: In function ‘task_tick_fair’:
>> kernel/sched/fair.c:7867: error: implicit declaration of function
>> ‘static_branch_unlikely’
>> make[1]: *** [kernel/sched/fair.o] Error 1
>> make[1]: *** Waiting for unfinished jobs....
>> kernel/sched/core.c:2116: warning: data definition has no type or storage
>> class
> Thanks for reporting.  Can you please confirm if you have the commit
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=11276d5306b8e5b438a36bbff855fe792d7eaa61

It seems that the sched/core branch of Peterz's queue doesn't include 
this commit. So your patch should be good.

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [tip:sched/core] sched/numa: Rename numabalancing_enabled to sched_numa_balancing
  2015-08-11 11:00 ` [PATCH v2 1/3] sched/numa: Rename numabalancing_enabled to sched_numa_balancing Srikar Dronamraju
@ 2015-09-13 11:01   ` tip-bot for Srikar Dronamraju
  0 siblings, 0 replies; 14+ messages in thread
From: tip-bot for Srikar Dronamraju @ 2015-09-13 11:01 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: riel, srikar, mingo, hpa, efault, mgorman, tglx, peterz,
	linux-kernel, torvalds

Commit-ID:  78a9c54649ea220065aad9902460a1d137c7eafd
Gitweb:     http://git.kernel.org/tip/78a9c54649ea220065aad9902460a1d137c7eafd
Author:     Srikar Dronamraju <srikar@linux.vnet.ibm.com>
AuthorDate: Tue, 11 Aug 2015 16:30:11 +0530
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 13 Sep 2015 09:52:52 +0200

sched/numa: Rename numabalancing_enabled to sched_numa_balancing

Simple rename of the 'numabalancing_enabled' variable to 'sched_numa_balancing'.
No functional changes.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1439290813-6683-2-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c  | 6 +++---
 kernel/sched/fair.c  | 4 ++--
 kernel/sched/sched.h | 6 +++---
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 37ab6f9..2656af0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2124,11 +2124,11 @@ void set_numabalancing_state(bool enabled)
 		sched_feat_set("NO_NUMA");
 }
 #else
-__read_mostly bool numabalancing_enabled;
+__read_mostly bool sched_numa_balancing;
 
 void set_numabalancing_state(bool enabled)
 {
-	numabalancing_enabled = enabled;
+	sched_numa_balancing = enabled;
 }
 #endif /* CONFIG_SCHED_DEBUG */
 
@@ -2138,7 +2138,7 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 {
 	struct ctl_table t;
 	int err;
-	int state = numabalancing_enabled;
+	int state = sched_numa_balancing;
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 36774e5..3a6ac55 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2069,7 +2069,7 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	int local = !!(flags & TNF_FAULT_LOCAL);
 	int priv;
 
-	if (!numabalancing_enabled)
+	if (!sched_numa_balancing)
 		return;
 
 	/* for example, ksmd faulting in a user's mm */
@@ -7874,7 +7874,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		entity_tick(cfs_rq, se, queued);
 	}
 
-	if (numabalancing_enabled)
+	if (sched_numa_balancing)
 		task_tick_numa(rq, curr);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 637d5ae..d0b303d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1006,13 +1006,13 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 #ifdef CONFIG_NUMA_BALANCING
 #define sched_feat_numa(x) sched_feat(x)
 #ifdef CONFIG_SCHED_DEBUG
-#define numabalancing_enabled sched_feat_numa(NUMA)
+#define sched_numa_balancing sched_feat_numa(NUMA)
 #else
-extern bool numabalancing_enabled;
+extern bool sched_numa_balancing;
 #endif /* CONFIG_SCHED_DEBUG */
 #else
 #define sched_feat_numa(x) (0)
-#define numabalancing_enabled (0)
+#define sched_numa_balancing (0)
 #endif /* CONFIG_NUMA_BALANCING */
 
 static inline u64 global_rt_period(void)

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [tip:sched/core] sched/numa: Disable sched_numa_balancing on UMA systems
  2015-08-11 11:00 ` [PATCH v2 2/3] sched/numa: Disable sched_numa_balancing on uma systems Srikar Dronamraju
@ 2015-09-13 11:02   ` tip-bot for Srikar Dronamraju
  0 siblings, 0 replies; 14+ messages in thread
From: tip-bot for Srikar Dronamraju @ 2015-09-13 11:02 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, linux-kernel, hpa, tglx, efault, riel, torvalds, mingo,
	mgorman, srikar

Commit-ID:  c3b9bc5bbfc3750570d788afffd431263ef695c6
Gitweb:     http://git.kernel.org/tip/c3b9bc5bbfc3750570d788afffd431263ef695c6
Author:     Srikar Dronamraju <srikar@linux.vnet.ibm.com>
AuthorDate: Tue, 11 Aug 2015 16:30:12 +0530
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 13 Sep 2015 09:52:53 +0200

sched/numa: Disable sched_numa_balancing on UMA systems

Commit 2a1ed24 ("sched/numa: Prefer NUMA hotness over cache hotness")
sets sched feature NUMA to true. However this can enable NUMA hinting
faults on a UMA system.

This commit ensures that NUMA hinting faults occur only on a NUMA system
by setting/resetting sched_numa_balancing.

This commit:

  - Makes sched_numa_balancing common to CONFIG_SCHED_DEBUG and
    !CONFIG_SCHED_DEBUG. Earlier it was only in !CONFIG_SCHED_DEBUG.

  - Checks for sched_numa_balancing instead of sched_feat(NUMA).

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1439290813-6683-3-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c  | 14 +++++---------
 kernel/sched/fair.c  |  4 ++--
 kernel/sched/sched.h |  6 ------
 3 files changed, 7 insertions(+), 17 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2656af0..ca665f8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2115,22 +2115,18 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
-#ifdef CONFIG_SCHED_DEBUG
+__read_mostly bool sched_numa_balancing;
+
 void set_numabalancing_state(bool enabled)
 {
+	sched_numa_balancing = enabled;
+#ifdef CONFIG_SCHED_DEBUG
 	if (enabled)
 		sched_feat_set("NUMA");
 	else
 		sched_feat_set("NO_NUMA");
-}
-#else
-__read_mostly bool sched_numa_balancing;
-
-void set_numabalancing_state(bool enabled)
-{
-	sched_numa_balancing = enabled;
-}
 #endif /* CONFIG_SCHED_DEBUG */
+}
 
 #ifdef CONFIG_PROC_SYSCTL
 int sysctl_numa_balancing(struct ctl_table *table, int write,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3a6ac55..e8f0828 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5562,10 +5562,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	unsigned long src_faults, dst_faults;
 	int src_nid, dst_nid;
 
-	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
+	if (!sched_numa_balancing)
 		return -1;
 
-	if (!sched_feat(NUMA))
+	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
 		return -1;
 
 	src_nid = cpu_to_node(env->src_cpu);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d0b303d..0d8f885 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1004,14 +1004,8 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 #endif /* SCHED_DEBUG && HAVE_JUMP_LABEL */
 
 #ifdef CONFIG_NUMA_BALANCING
-#define sched_feat_numa(x) sched_feat(x)
-#ifdef CONFIG_SCHED_DEBUG
-#define sched_numa_balancing sched_feat_numa(NUMA)
-#else
 extern bool sched_numa_balancing;
-#endif /* CONFIG_SCHED_DEBUG */
 #else
-#define sched_feat_numa(x) (0)
 #define sched_numa_balancing (0)
 #endif /* CONFIG_NUMA_BALANCING */
 

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [tip:sched/core] sched/numa: Remove the NUMA sched_feature
  2015-08-11 11:00 ` [PATCH v2 3/3] sched/numa: Remove NUMA sched feature Srikar Dronamraju
  2015-08-11 11:15   ` Peter Zijlstra
@ 2015-09-13 11:02   ` tip-bot for Srikar Dronamraju
  1 sibling, 0 replies; 14+ messages in thread
From: tip-bot for Srikar Dronamraju @ 2015-09-13 11:02 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: srikar, peterz, riel, torvalds, mingo, efault, linux-kernel, hpa,
	mgorman, tglx

Commit-ID:  2b49d84b259fc18e131026e5d38e7855352f71b9
Gitweb:     http://git.kernel.org/tip/2b49d84b259fc18e131026e5d38e7855352f71b9
Author:     Srikar Dronamraju <srikar@linux.vnet.ibm.com>
AuthorDate: Tue, 11 Aug 2015 16:30:13 +0530
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 13 Sep 2015 09:52:53 +0200

sched/numa: Remove the NUMA sched_feature

Variable sched_numa_balancing is available for both CONFIG_SCHED_DEBUG
and !CONFIG_SCHED_DEBUG. All code paths now check for
sched_numa_balancing. Hence remove sched_feat(NUMA).

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1439290813-6683-4-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c     |  6 ------
 kernel/sched/features.h | 16 ----------------
 2 files changed, 22 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ca665f8..e0bd88b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2120,12 +2120,6 @@ __read_mostly bool sched_numa_balancing;
 void set_numabalancing_state(bool enabled)
 {
 	sched_numa_balancing = enabled;
-#ifdef CONFIG_SCHED_DEBUG
-	if (enabled)
-		sched_feat_set("NUMA");
-	else
-		sched_feat_set("NO_NUMA");
-#endif /* CONFIG_SCHED_DEBUG */
 }
 
 #ifdef CONFIG_PROC_SYSCTL
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index e6fd23b..edf5902 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -72,21 +72,5 @@ SCHED_FEAT(RT_PUSH_IPI, true)
 SCHED_FEAT(FORCE_SD_OVERLAP, false)
 SCHED_FEAT(RT_RUNTIME_SHARE, true)
 SCHED_FEAT(LB_MIN, false)
-
 SCHED_FEAT(ATTACH_AGE_LOAD, true)
 
-/*
- * Apply the automatic NUMA scheduling policy. Enabled automatically
- * at runtime if running on a NUMA machine. Can be controlled via
- * numa_balancing=
- */
-#ifdef CONFIG_NUMA_BALANCING
-
-/*
- * NUMA will favor moving tasks towards nodes where a higher number of
- * hinting faults are recorded during active load balancing. It will
- * resist moving tasks towards nodes where a lower number of hinting
- * faults have been recorded.
- */
-SCHED_FEAT(NUMA,	true)
-#endif

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [tip:sched/core] sched/numa: Convert sched_numa_balancing to a static_branch
  2015-08-11 16:24 ` [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch Srikar Dronamraju
  2015-09-06  4:01   ` Wanpeng Li
@ 2015-09-13 11:03   ` tip-bot for Srikar Dronamraju
  1 sibling, 0 replies; 14+ messages in thread
From: tip-bot for Srikar Dronamraju @ 2015-09-13 11:03 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, efault, hpa, peterz, mgorman, linux-kernel, tglx,
	srikar, riel, mingo

Commit-ID:  2a595721a1fa6b684c1c818f379bef834ac3d65e
Gitweb:     http://git.kernel.org/tip/2a595721a1fa6b684c1c818f379bef834ac3d65e
Author:     Srikar Dronamraju <srikar@linux.vnet.ibm.com>
AuthorDate: Tue, 11 Aug 2015 21:54:21 +0530
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 13 Sep 2015 09:52:54 +0200

sched/numa: Convert sched_numa_balancing to a static_branch

The sched_numa_balancing variable toggles the numa_balancing feature. Hence,
move from a simple read-mostly variable to a more apt static_branch.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1439310261-16124-1-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c  | 10 +++++++---
 kernel/sched/fair.c  |  6 +++---
 kernel/sched/sched.h |  6 +-----
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e0bd88b..b621271 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2114,12 +2114,16 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 #endif /* CONFIG_NUMA_BALANCING */
 }
 
+DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
+
 #ifdef CONFIG_NUMA_BALANCING
-__read_mostly bool sched_numa_balancing;
 
 void set_numabalancing_state(bool enabled)
 {
-	sched_numa_balancing = enabled;
+	if (enabled)
+		static_branch_enable(&sched_numa_balancing);
+	else
+		static_branch_disable(&sched_numa_balancing);
 }
 
 #ifdef CONFIG_PROC_SYSCTL
@@ -2128,7 +2132,7 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 {
 	struct ctl_table t;
 	int err;
-	int state = sched_numa_balancing;
+	int state = static_branch_likely(&sched_numa_balancing);
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e8f0828..47ece22 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2069,7 +2069,7 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	int local = !!(flags & TNF_FAULT_LOCAL);
 	int priv;
 
-	if (!sched_numa_balancing)
+	if (!static_branch_likely(&sched_numa_balancing))
 		return;
 
 	/* for example, ksmd faulting in a user's mm */
@@ -5562,7 +5562,7 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	unsigned long src_faults, dst_faults;
 	int src_nid, dst_nid;
 
-	if (!sched_numa_balancing)
+	if (!static_branch_likely(&sched_numa_balancing))
 		return -1;
 
 	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
@@ -7874,7 +7874,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		entity_tick(cfs_rq, se, queued);
 	}
 
-	if (sched_numa_balancing)
+	if (!static_branch_unlikely(&sched_numa_balancing))
 		task_tick_numa(rq, curr);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 0d8f885..2e8530d0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1003,11 +1003,7 @@ extern struct static_key sched_feat_keys[__SCHED_FEAT_NR];
 #define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
 #endif /* SCHED_DEBUG && HAVE_JUMP_LABEL */
 
-#ifdef CONFIG_NUMA_BALANCING
-extern bool sched_numa_balancing;
-#else
-#define sched_numa_balancing (0)
-#endif /* CONFIG_NUMA_BALANCING */
+extern struct static_key_false sched_numa_balancing;
 
 static inline u64 global_rt_period(void)
 {

^ permalink raw reply related	[flat|nested] 14+ messages in thread

end of thread

Thread overview: 14+ messages
2015-08-11 11:00 [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Srikar Dronamraju
2015-08-11 11:00 ` [PATCH v2 1/3] sched/numa: Rename numabalancing_enabled to sched_numa_balancing Srikar Dronamraju
2015-09-13 11:01   ` [tip:sched/core] " tip-bot for Srikar Dronamraju
2015-08-11 11:00 ` [PATCH v2 2/3] sched/numa: Disable sched_numa_balancing on uma systems Srikar Dronamraju
2015-09-13 11:02   ` [tip:sched/core] sched/numa: Disable sched_numa_balancing on UMA systems tip-bot for Srikar Dronamraju
2015-08-11 11:00 ` [PATCH v2 3/3] sched/numa: Remove NUMA sched feature Srikar Dronamraju
2015-08-11 11:15   ` Peter Zijlstra
2015-09-13 11:02   ` [tip:sched/core] sched/numa: Remove the NUMA sched_feature tip-bot for Srikar Dronamraju
2015-08-11 16:24 ` [PATCH v2 4/4] sched/numa: Convert sched_numa_balancing to a static_branch Srikar Dronamraju
2015-09-06  4:01   ` Wanpeng Li
2015-09-07  4:10     ` Srikar Dronamraju
2015-09-07  5:21       ` Wanpeng Li
2015-09-13 11:03   ` [tip:sched/core] " tip-bot for Srikar Dronamraju
2015-09-02  9:33 ` [PATCH v2 0/3] Disable sched_numa_balancing on uma systems Peter Zijlstra
