From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>, Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>, Andrea Arcangeli <aarcange@redhat.com>, Johannes Weiner <hannes@cmpxchg.org>, Linux-MM <linux-mm@kvack.org>, LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 08/18] sched: Reschedule task on preferred NUMA node once selected
Date: Mon, 15 Jul 2013 16:20:10 +0100	[thread overview]
Message-ID: <1373901620-2021-9-git-send-email-mgorman@suse.de> (raw)
In-Reply-To: <1373901620-2021-1-git-send-email-mgorman@suse.de>

A preferred node is selected based on the node on which the most NUMA
hinting faults were incurred. There is no guarantee that the task is
running on that node at the time, so this patch reschedules the task to
run on the most idle CPU of the preferred node as soon as that node is
selected. This avoids waiting for the load balancer to make a decision.

Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 kernel/sched/core.c  | 17 +++++++++++++++++
 kernel/sched/fair.c  | 46 +++++++++++++++++++++++++++++++++++++++++++++-
 kernel/sched/sched.h |  1 +
 3 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5e02507..b67a102 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4856,6 +4856,23 @@ fail:
 	return ret;
 }
 
+#ifdef CONFIG_NUMA_BALANCING
+/* Migrate current task p to target_cpu */
+int migrate_task_to(struct task_struct *p, int target_cpu)
+{
+	struct migration_arg arg = { p, target_cpu };
+	int curr_cpu = task_cpu(p);
+
+	if (curr_cpu == target_cpu)
+		return 0;
+
+	if (!cpumask_test_cpu(target_cpu, tsk_cpus_allowed(p)))
+		return -EINVAL;
+
+	return stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);
+}
+#endif
+
 /*
  * migration_cpu_stop - this will be executed by a highprio stopper thread
  * and performs thread migration by bumping thread off CPU then
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 49396e1..f68fad5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -800,6 +800,31 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000;
  */
 unsigned int sysctl_numa_balancing_settle_count __read_mostly = 3;
 
+static unsigned long weighted_cpuload(const int cpu);
+
+
+static int
+find_idlest_cpu_node(int this_cpu, int nid)
+{
+	unsigned long load, min_load = ULONG_MAX;
+	int i, idlest_cpu = this_cpu;
+
+	BUG_ON(cpu_to_node(this_cpu) == nid);
+
+	rcu_read_lock();
+	for_each_cpu(i, cpumask_of_node(nid)) {
+		load = weighted_cpuload(i);
+
+		if (load < min_load) {
+			min_load = load;
+			idlest_cpu = i;
+		}
+	}
+	rcu_read_unlock();
+
+	return idlest_cpu;
+}
+
 static void task_numa_placement(struct task_struct *p)
 {
 	int seq, nid, max_nid = -1;
@@ -829,10 +854,29 @@ static void task_numa_placement(struct task_struct *p)
 		}
 	}
 
-	/* Update the tasks preferred node if necessary */
+	/*
+	 * Record the preferred node as the node with the most faults,
+	 * requeue the task to be running on the idlest CPU on the
+	 * preferred node and reset the scanning rate to recheck
+	 * the working set placement.
+	 */
 	if (max_faults && max_nid != p->numa_preferred_nid) {
+		int preferred_cpu;
+
+		/*
+		 * If the task is not on the preferred node then find the most
+		 * idle CPU to migrate to.
+		 */
+		preferred_cpu = task_cpu(p);
+		if (cpu_to_node(preferred_cpu) != max_nid) {
+			preferred_cpu = find_idlest_cpu_node(preferred_cpu,
+							     max_nid);
+		}
+
+		/* Update the preferred nid and migrate task if possible */
 		p->numa_preferred_nid = max_nid;
 		p->numa_migrate_seq = 0;
+		migrate_task_to(p, preferred_cpu);
 	}
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c5f773d..795346d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -504,6 +504,7 @@ DECLARE_PER_CPU(struct rq, runqueues);
 #define raw_rq()		(&__raw_get_cpu_var(runqueues))
 
 #ifdef CONFIG_NUMA_BALANCING
+extern int migrate_task_to(struct task_struct *p, int cpu);
 static inline void task_numa_free(struct task_struct *p)
 {
 	kfree(p->numa_faults);
-- 
1.8.1.4