* [PATCH 0/3] sched/fair: Init NUMA task_work in sched_fork()
@ 2019-07-15 10:25 Valentin Schneider
From: Valentin Schneider @ 2019-07-15 10:25 UTC (permalink / raw)
To: linux-kernel; +Cc: mingo, peterz, mgorman, riel
A TODO has been sitting in task_tick_numa() regarding init'ing the
task_numa_work() task_work in sched_fork() rather than in task_tick_numa(),
so I figured I'd have a go at it.
Patches 1 & 2 do that, and patch 3 is a freebie cleanup.
Briefly tested on a 2 * (Xeon E5-2690) system, didn't see any obvious
breakage.
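In effect the series turns a per-tick re-initialisation of the task_work callback into a single fork-time one. A toy sketch of the before/after shape (hypothetical names, counting pointer writes purely for illustration; the real code operates on task_struct and calls task_work_add()):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct callback_head. */
struct callback_head {
	struct callback_head *next;
	void (*func)(struct callback_head *);
};

static int func_writes;	/* how often the callback pointer is (re)written */

static void numa_work(struct callback_head *w) { (void)w; }

/* Old scheme: the tick path re-initialises the callback every time
 * it is about to queue the work. */
static void tick_old(struct callback_head *w)
{
	w->func = numa_work;	/* redundant after the first write */
	func_writes++;
}

/* New scheme: fork-time init writes the callback once... */
static void fork_init(struct callback_head *w)
{
	w->func = numa_work;
	func_writes++;
}

/* ...and the tick path only queues. */
static void tick_new(struct callback_head *w)
{
	(void)w;	/* the real code would call task_work_add() here */
}
```

Three ticks under the old scheme rewrite the pointer three times; under the new one it is written exactly once at fork.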
Valentin Schneider (3):
sched/fair: Move init_numa_balancing() below task_numa_work()
sched/fair: Move task_numa_work() init to init_numa_balancing()
sched/fair: Change task_numa_work() storage to static
kernel/sched/fair.c | 93 +++++++++++++++++++++++----------------------
1 file changed, 47 insertions(+), 46 deletions(-)
--
2.22.0
* [PATCH 1/3] sched/fair: Move init_numa_balancing() below task_numa_work()
@ 2019-07-15 10:25 ` Valentin Schneider
From: Valentin Schneider @ 2019-07-15 10:25 UTC (permalink / raw)
To: linux-kernel; +Cc: mingo, peterz, mgorman, riel
To reference task_numa_work() from within init_numa_balancing(), we
need the former to be declared before the latter. Do just that.
This is a pure code movement.
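For context, C requires a function to be declared before its address can be taken, so init_numa_balancing() cannot reference task_numa_work() defined further down the file. Moving the definition is one fix; the alternative would have been a forward declaration. A toy illustration of the same ordering constraint, with hypothetical names:

```c
#include <assert.h>

/* Forward declaration: lets setup() take work()'s address before the
 * definition appears, without moving the definition itself.
 * (Hypothetical names; the patch instead moves the definition.) */
static int work(void);

static int setup(void)
{
	/* Taking the function's address requires a prior declaration. */
	int (*fn)(void) = work;
	return fn();
}

static int work(void)
{
	return 42;
}
```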
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
kernel/sched/fair.c | 82 ++++++++++++++++++++++-----------------------
1 file changed, 41 insertions(+), 41 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 036be95a87e9..476b0201a8fb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1169,47 +1169,6 @@ static unsigned int task_scan_max(struct task_struct *p)
return max(smin, smax);
}
-void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
-{
- int mm_users = 0;
- struct mm_struct *mm = p->mm;
-
- if (mm) {
- mm_users = atomic_read(&mm->mm_users);
- if (mm_users == 1) {
- mm->numa_next_scan = jiffies + msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
- mm->numa_scan_seq = 0;
- }
- }
- p->node_stamp = 0;
- p->numa_scan_seq = mm ? mm->numa_scan_seq : 0;
- p->numa_scan_period = sysctl_numa_balancing_scan_delay;
- p->numa_work.next = &p->numa_work;
- p->numa_faults = NULL;
- p->numa_group = NULL;
- p->last_task_numa_placement = 0;
- p->last_sum_exec_runtime = 0;
-
- /* New address space, reset the preferred nid */
- if (!(clone_flags & CLONE_VM)) {
- p->numa_preferred_nid = NUMA_NO_NODE;
- return;
- }
-
- /*
- * New thread, keep existing numa_preferred_nid which should be copied
- * already by arch_dup_task_struct but stagger when scans start.
- */
- if (mm) {
- unsigned int delay;
-
- delay = min_t(unsigned int, task_scan_max(current),
- current->numa_scan_period * mm_users * NSEC_PER_MSEC);
- delay += 2 * TICK_NSEC;
- p->node_stamp = delay;
- }
-}
-
static void account_numa_enqueue(struct rq *rq, struct task_struct *p)
{
rq->nr_numa_running += (p->numa_preferred_nid != NUMA_NO_NODE);
@@ -2611,6 +2570,47 @@ void task_numa_work(struct callback_head *work)
}
}
+void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
+{
+ int mm_users = 0;
+ struct mm_struct *mm = p->mm;
+
+ if (mm) {
+ mm_users = atomic_read(&mm->mm_users);
+ if (mm_users == 1) {
+ mm->numa_next_scan = jiffies + msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
+ mm->numa_scan_seq = 0;
+ }
+ }
+ p->node_stamp = 0;
+ p->numa_scan_seq = mm ? mm->numa_scan_seq : 0;
+ p->numa_scan_period = sysctl_numa_balancing_scan_delay;
+ p->numa_work.next = &p->numa_work;
+ p->numa_faults = NULL;
+ p->numa_group = NULL;
+ p->last_task_numa_placement = 0;
+ p->last_sum_exec_runtime = 0;
+
+ /* New address space, reset the preferred nid */
+ if (!(clone_flags & CLONE_VM)) {
+ p->numa_preferred_nid = NUMA_NO_NODE;
+ return;
+ }
+
+ /*
+ * New thread, keep existing numa_preferred_nid which should be copied
+ * already by arch_dup_task_struct but stagger when scans start.
+ */
+ if (mm) {
+ unsigned int delay;
+
+ delay = min_t(unsigned int, task_scan_max(current),
+ current->numa_scan_period * mm_users * NSEC_PER_MSEC);
+ delay += 2 * TICK_NSEC;
+ p->node_stamp = delay;
+ }
+}
+
/*
* Drive the periodic memory faults..
*/
--
2.22.0
* [PATCH 2/3] sched/fair: Move task_numa_work() init to init_numa_balancing()
@ 2019-07-15 10:25 ` Valentin Schneider
From: Valentin Schneider @ 2019-07-15 10:25 UTC (permalink / raw)
To: linux-kernel; +Cc: mingo, peterz, mgorman, riel
We only need to set the callback_head worker function once, so do it
during sched_fork().
While at it, move the comment regarding double task_work addition to
init_numa_balancing(), since the double add sentinel is first set there.
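The sentinel in question is the callback_head pointing at itself while the work is not pending: in the kernel, task_tick_numa() only queues the work while p->numa_work.next still points at itself, and task_numa_work() re-arms the sentinel on entry. A toy sketch that folds the check into the add helper (simplified names and list handling, not the actual task_work API):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct callback_head; not the kernel's
 * actual task_work implementation. */
struct callback_head {
	struct callback_head *next;
	void (*func)(struct callback_head *);
};

/* A work item is "free" iff its next pointer points at itself. */
static int work_add(struct callback_head *queue, struct callback_head *work)
{
	if (work->next != work)
		return -1;	/* already pending: reject the double add */
	work->next = queue;	/* the real code links it into a list */
	return 0;
}

static void work_run(struct callback_head *work)
{
	work->next = work;	/* re-arm the sentinel before running */
	work->func(work);
}

static int runs;
static void numa_work(struct callback_head *w) { (void)w; runs++; }
```

A second add while the work is pending fails; once it has run, the item can be queued again.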
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
kernel/sched/fair.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 476b0201a8fb..74faa55bc52a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2441,7 +2441,7 @@ void task_numa_work(struct callback_head *work)
SCHED_WARN_ON(p != container_of(work, struct task_struct, numa_work));
- work->next = work; /* protect against double add */
+ work->next = work;
/*
* Who cares about NUMA placement when they're dying.
*
@@ -2585,12 +2585,15 @@ void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
p->node_stamp = 0;
p->numa_scan_seq = mm ? mm->numa_scan_seq : 0;
p->numa_scan_period = sysctl_numa_balancing_scan_delay;
+ /* Protect against double add, see task_tick_numa and task_numa_work */
p->numa_work.next = &p->numa_work;
p->numa_faults = NULL;
p->numa_group = NULL;
p->last_task_numa_placement = 0;
p->last_sum_exec_runtime = 0;
+ init_task_work(&p->numa_work, task_numa_work);
+
/* New address space, reset the preferred nid */
if (!(clone_flags & CLONE_VM)) {
p->numa_preferred_nid = NUMA_NO_NODE;
@@ -2639,10 +2642,8 @@ static void task_tick_numa(struct rq *rq, struct task_struct *curr)
curr->numa_scan_period = task_scan_start(curr);
curr->node_stamp += period;
- if (!time_before(jiffies, curr->mm->numa_next_scan)) {
- init_task_work(work, task_numa_work); /* TODO: move this into sched_fork() */
+ if (!time_before(jiffies, curr->mm->numa_next_scan))
task_work_add(curr, work, true);
- }
}
}
--
2.22.0
* [PATCH 3/3] sched/fair: Change task_numa_work() storage to static
@ 2019-07-15 10:25 ` Valentin Schneider
From: Valentin Schneider @ 2019-07-15 10:25 UTC (permalink / raw)
To: linux-kernel; +Cc: mingo, peterz, mgorman, riel
There are no callers outside of fair.c.
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 74faa55bc52a..c747ce05e726 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2428,7 +2428,7 @@ static void reset_ptenuma_scan(struct task_struct *p)
* The expensive part of numa migration is done from task_work context.
* Triggered from task_tick_numa().
*/
-void task_numa_work(struct callback_head *work)
+static void task_numa_work(struct callback_head *work)
{
unsigned long migrate, next_scan, now = jiffies;
struct task_struct *p = current;
--
2.22.0
* Re: [PATCH 0/3] sched/fair: Init NUMA task_work in sched_fork()
@ 2019-07-15 10:48 ` Peter Zijlstra
From: Peter Zijlstra @ 2019-07-15 10:48 UTC (permalink / raw)
To: Valentin Schneider; +Cc: linux-kernel, mingo, mgorman, riel
On Mon, Jul 15, 2019 at 11:25:05AM +0100, Valentin Schneider wrote:
> A TODO has been sitting in task_tick_numa() regarding init'ing the
> task_numa_work() task_work in sched_fork() rather than in task_tick_numa(),
> so I figured I'd have a go at it.
>
> Patches 1 & 2 do that, and patch 3 is a freebie cleanup.
>
> Briefly tested on a 2 * (Xeon E5-2690) system, didn't see any obvious
> breakage.
>
> Valentin Schneider (3):
> sched/fair: Move init_numa_balancing() below task_numa_work()
> sched/fair: Move task_numa_work() init to init_numa_balancing()
> sched/fair: Change task_numa_work() storage to static
>
> kernel/sched/fair.c | 93 +++++++++++++++++++++++----------------------
> 1 file changed, 47 insertions(+), 46 deletions(-)
Thanks!
* [tip:sched/core] sched/fair: Move init_numa_balancing() below task_numa_work()
@ 2019-07-25 16:11 ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-07-25 16:11 UTC (permalink / raw)
To: linux-tip-commits
Cc: mingo, peterz, tglx, hpa, linux-kernel, torvalds, valentin.schneider
Commit-ID: d35927a144641700c8328d707d1c89d305b4ecb8
Gitweb: https://git.kernel.org/tip/d35927a144641700c8328d707d1c89d305b4ecb8
Author: Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 15 Jul 2019 11:25:06 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 25 Jul 2019 15:51:51 +0200
sched/fair: Move init_numa_balancing() below task_numa_work()
To reference task_numa_work() from within init_numa_balancing(), we
need the former to be declared before the latter. Do just that.
This is a pure code movement.
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: mgorman@suse.de
Cc: riel@surriel.com
Link: https://lkml.kernel.org/r/20190715102508.32434-2-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/fair.c | 82 ++++++++++++++++++++++++++---------------------------
1 file changed, 41 insertions(+), 41 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc9cfeaac8bd..f0c488015649 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1188,47 +1188,6 @@ static unsigned int task_scan_max(struct task_struct *p)
return max(smin, smax);
}
-void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
-{
- int mm_users = 0;
- struct mm_struct *mm = p->mm;
-
- if (mm) {
- mm_users = atomic_read(&mm->mm_users);
- if (mm_users == 1) {
- mm->numa_next_scan = jiffies + msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
- mm->numa_scan_seq = 0;
- }
- }
- p->node_stamp = 0;
- p->numa_scan_seq = mm ? mm->numa_scan_seq : 0;
- p->numa_scan_period = sysctl_numa_balancing_scan_delay;
- p->numa_work.next = &p->numa_work;
- p->numa_faults = NULL;
- RCU_INIT_POINTER(p->numa_group, NULL);
- p->last_task_numa_placement = 0;
- p->last_sum_exec_runtime = 0;
-
- /* New address space, reset the preferred nid */
- if (!(clone_flags & CLONE_VM)) {
- p->numa_preferred_nid = NUMA_NO_NODE;
- return;
- }
-
- /*
- * New thread, keep existing numa_preferred_nid which should be copied
- * already by arch_dup_task_struct but stagger when scans start.
- */
- if (mm) {
- unsigned int delay;
-
- delay = min_t(unsigned int, task_scan_max(current),
- current->numa_scan_period * mm_users * NSEC_PER_MSEC);
- delay += 2 * TICK_NSEC;
- p->node_stamp = delay;
- }
-}
-
static void account_numa_enqueue(struct rq *rq, struct task_struct *p)
{
rq->nr_numa_running += (p->numa_preferred_nid != NUMA_NO_NODE);
@@ -2665,6 +2624,47 @@ out:
}
}
+void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
+{
+ int mm_users = 0;
+ struct mm_struct *mm = p->mm;
+
+ if (mm) {
+ mm_users = atomic_read(&mm->mm_users);
+ if (mm_users == 1) {
+ mm->numa_next_scan = jiffies + msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
+ mm->numa_scan_seq = 0;
+ }
+ }
+ p->node_stamp = 0;
+ p->numa_scan_seq = mm ? mm->numa_scan_seq : 0;
+ p->numa_scan_period = sysctl_numa_balancing_scan_delay;
+ p->numa_work.next = &p->numa_work;
+ p->numa_faults = NULL;
+ RCU_INIT_POINTER(p->numa_group, NULL);
+ p->last_task_numa_placement = 0;
+ p->last_sum_exec_runtime = 0;
+
+ /* New address space, reset the preferred nid */
+ if (!(clone_flags & CLONE_VM)) {
+ p->numa_preferred_nid = NUMA_NO_NODE;
+ return;
+ }
+
+ /*
+ * New thread, keep existing numa_preferred_nid which should be copied
+ * already by arch_dup_task_struct but stagger when scans start.
+ */
+ if (mm) {
+ unsigned int delay;
+
+ delay = min_t(unsigned int, task_scan_max(current),
+ current->numa_scan_period * mm_users * NSEC_PER_MSEC);
+ delay += 2 * TICK_NSEC;
+ p->node_stamp = delay;
+ }
+}
+
/*
* Drive the periodic memory faults..
*/
* [tip:sched/core] sched/fair: Move task_numa_work() init to init_numa_balancing()
@ 2019-07-25 16:12 ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-07-25 16:12 UTC (permalink / raw)
To: linux-tip-commits
Cc: peterz, mingo, hpa, valentin.schneider, torvalds, linux-kernel, tglx
Commit-ID: b34920d4ce6e6fc9424c20a4be98676eb543122f
Gitweb: https://git.kernel.org/tip/b34920d4ce6e6fc9424c20a4be98676eb543122f
Author: Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 15 Jul 2019 11:25:07 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 25 Jul 2019 15:51:51 +0200
sched/fair: Move task_numa_work() init to init_numa_balancing()
We only need to set the callback_head worker function once, so do it
during sched_fork().
While at it, move the comment regarding double task_work addition to
init_numa_balancing(), since the double add sentinel is first set there.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: mgorman@suse.de
Cc: riel@surriel.com
Link: https://lkml.kernel.org/r/20190715102508.32434-3-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/fair.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f0c488015649..fd391fc00ed8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2495,7 +2495,7 @@ void task_numa_work(struct callback_head *work)
SCHED_WARN_ON(p != container_of(work, struct task_struct, numa_work));
- work->next = work; /* protect against double add */
+ work->next = work;
/*
* Who cares about NUMA placement when they're dying.
*
@@ -2639,12 +2639,15 @@ void init_numa_balancing(unsigned long clone_flags, struct task_struct *p)
p->node_stamp = 0;
p->numa_scan_seq = mm ? mm->numa_scan_seq : 0;
p->numa_scan_period = sysctl_numa_balancing_scan_delay;
+ /* Protect against double add, see task_tick_numa and task_numa_work */
p->numa_work.next = &p->numa_work;
p->numa_faults = NULL;
RCU_INIT_POINTER(p->numa_group, NULL);
p->last_task_numa_placement = 0;
p->last_sum_exec_runtime = 0;
+ init_task_work(&p->numa_work, task_numa_work);
+
/* New address space, reset the preferred nid */
if (!(clone_flags & CLONE_VM)) {
p->numa_preferred_nid = NUMA_NO_NODE;
@@ -2693,10 +2696,8 @@ static void task_tick_numa(struct rq *rq, struct task_struct *curr)
curr->numa_scan_period = task_scan_start(curr);
curr->node_stamp += period;
- if (!time_before(jiffies, curr->mm->numa_next_scan)) {
- init_task_work(work, task_numa_work); /* TODO: move this into sched_fork() */
+ if (!time_before(jiffies, curr->mm->numa_next_scan))
task_work_add(curr, work, true);
- }
}
}
* [tip:sched/core] sched/fair: Change task_numa_work() storage to static
@ 2019-07-25 16:13 ` tip-bot for Valentin Schneider
From: tip-bot for Valentin Schneider @ 2019-07-25 16:13 UTC (permalink / raw)
To: linux-tip-commits
Cc: valentin.schneider, mingo, tglx, torvalds, peterz, hpa, linux-kernel
Commit-ID: 9434f9f5d117302cc7ddf038e7879f6871dc7a81
Gitweb: https://git.kernel.org/tip/9434f9f5d117302cc7ddf038e7879f6871dc7a81
Author: Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Mon, 15 Jul 2019 11:25:08 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 25 Jul 2019 15:51:52 +0200
sched/fair: Change task_numa_work() storage to static
There are no callers outside of fair.c.
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: mgorman@suse.de
Cc: riel@surriel.com
Link: https://lkml.kernel.org/r/20190715102508.32434-4-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fd391fc00ed8..b5546a15206c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2482,7 +2482,7 @@ static void reset_ptenuma_scan(struct task_struct *p)
* The expensive part of numa migration is done from task_work context.
* Triggered from task_tick_numa().
*/
-void task_numa_work(struct callback_head *work)
+static void task_numa_work(struct callback_head *work)
{
unsigned long migrate, next_scan, now = jiffies;
struct task_struct *p = current;