* [PATCH v7 sched part 0/4] sched: numa: several fixups
@ 2013-12-12 0:12 ` Wanpeng Li
0 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2013-12-12 0:12 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra
Cc: Andrew Morton, Rik van Riel, Mel Gorman, Naoya Horiguchi,
linux-kernel, linux-mm, Wanpeng Li
Wanpeng Li (4):
sched/numa: drop sysctl_numa_balancing_settle_count sysctl
sched/numa: use wrapper function task_node to get node which task is on
sched/numa: use wrapper function task_faults_idx to calculate index in group_faults
sched/numa: fix period_slot recalculation
include/linux/sched/sysctl.h | 1 -
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 17 ++++-------------
kernel/sysctl.c | 7 -------
4 files changed, 5 insertions(+), 22 deletions(-)
--
1.8.3.2
* [PATCH v7 1/4] sched/numa: drop sysctl_numa_balancing_settle_count sysctl
2013-12-12 0:12 ` Wanpeng Li
@ 2013-12-12 0:12 ` Wanpeng Li
0 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2013-12-12 0:12 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra
Cc: Andrew Morton, Rik van Riel, Mel Gorman, Naoya Horiguchi,
linux-kernel, linux-mm, Wanpeng Li
Commit 887c290e ("sched/numa: Decide whether to favour task or group weights
based on swap candidate relationships") dropped the check against
sysctl_numa_balancing_settle_count; this patch removes the now-unused sysctl.
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
include/linux/sched/sysctl.h | 1 -
kernel/sched/fair.c | 9 ---------
kernel/sysctl.c | 7 -------
3 files changed, 17 deletions(-)
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 41467f8..31e0193 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -48,7 +48,6 @@ extern unsigned int sysctl_numa_balancing_scan_delay;
extern unsigned int sysctl_numa_balancing_scan_period_min;
extern unsigned int sysctl_numa_balancing_scan_period_max;
extern unsigned int sysctl_numa_balancing_scan_size;
-extern unsigned int sysctl_numa_balancing_settle_count;
#ifdef CONFIG_SCHED_DEBUG
extern unsigned int sysctl_sched_migration_cost;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 49aa01f..cdceb8e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -886,15 +886,6 @@ static unsigned int task_scan_max(struct task_struct *p)
return max(smin, smax);
}
-/*
- * Once a preferred node is selected the scheduler balancer will prefer moving
- * a task to that node for sysctl_numa_balancing_settle_count number of PTE
- * scans. This will give the process the chance to accumulate more faults on
- * the preferred node but still allow the scheduler to move the task again if
- * the nodes CPUs are overloaded.
- */
-unsigned int sysctl_numa_balancing_settle_count __read_mostly = 4;
-
static void account_numa_enqueue(struct rq *rq, struct task_struct *p)
{
rq->nr_numa_running += (p->numa_preferred_nid != -1);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 34a6047..c8da99f 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -385,13 +385,6 @@ static struct ctl_table kern_table[] = {
.proc_handler = proc_dointvec,
},
{
- .procname = "numa_balancing_settle_count",
- .data = &sysctl_numa_balancing_settle_count,
- .maxlen = sizeof(unsigned int),
- .mode = 0644,
- .proc_handler = proc_dointvec,
- },
- {
.procname = "numa_balancing_migrate_deferred",
.data = &sysctl_numa_balancing_migrate_deferred,
.maxlen = sizeof(unsigned int),
--
1.8.3.2
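For readers outside the kernel tree, the behaviour the removed sysctl used to control can be modelled in a few lines. This is a simplified user-space sketch, not kernel code: the struct and names are invented for illustration, and SETTLE_COUNT stands in for the removed sysctl's default of 4.

```c
#include <assert.h>

#define SETTLE_COUNT 4  /* default value of the removed sysctl */

/* Toy model of a task that has picked a preferred NUMA node. */
struct toy_task {
	int preferred_nid;
	int scans_since_settle;  /* PTE scan passes since settling */
};

/* Before commit 887c290e, the balancer "settled" a task on its preferred
 * node: it would refuse to move the task elsewhere until settle_count
 * scan passes had accumulated faults there. This predicate models that
 * old gate; the commit removed the check entirely, leaving the sysctl
 * with nothing to tune. */
static int may_migrate(const struct toy_task *t)
{
	return t->scans_since_settle >= SETTLE_COUNT;
}
```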
* [PATCH v7 2/4] sched/numa: use wrapper function task_node to get node which task is on
2013-12-12 0:12 ` Wanpeng Li
@ 2013-12-12 0:12 ` Wanpeng Li
0 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2013-12-12 0:12 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra
Cc: Andrew Morton, Rik van Riel, Mel Gorman, Naoya Horiguchi,
linux-kernel, linux-mm, Wanpeng Li
Changelog:
v2 -> v3:
* translate cpu_to_node(task_cpu(p)) to task_node(p) in sched/debug.c
Use the wrapper function task_node() to get the node that a task is on.
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 5c34d18..374fe04 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -139,7 +139,7 @@ print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
0LL, 0LL, 0LL, 0L, 0LL, 0L, 0LL, 0L);
#endif
#ifdef CONFIG_NUMA_BALANCING
- SEQ_printf(m, " %d", cpu_to_node(task_cpu(p)));
+ SEQ_printf(m, " %d", task_node(p));
#endif
#ifdef CONFIG_CGROUP_SCHED
SEQ_printf(m, " %s", task_group_path(task_group(p)));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cdceb8e..178eaac 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1216,7 +1216,7 @@ static int task_numa_migrate(struct task_struct *p)
* elsewhere, so there is no point in (re)trying.
*/
if (unlikely(!sd)) {
- p->numa_preferred_nid = cpu_to_node(task_cpu(p));
+ p->numa_preferred_nid = task_node(p);
return -EINVAL;
}
@@ -1283,7 +1283,7 @@ static void numa_migrate_preferred(struct task_struct *p)
p->numa_migrate_retry = jiffies + HZ;
/* Success if task is already running on preferred CPU */
- if (cpu_to_node(task_cpu(p)) == p->numa_preferred_nid)
+ if (task_node(p) == p->numa_preferred_nid)
return;
/* Otherwise, try migrate to a CPU on the preferred node */
--
1.8.3.2
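The wrapper itself is trivial; the patch is purely about readability at the call sites. A self-contained sketch of what task_node() does, with a toy task struct and a stubbed two-CPUs-per-node topology invented for this example (in the kernel the helper really is just cpu_to_node(task_cpu(p))):

```c
#include <assert.h>

/* Stand-in for struct task_struct; only the field this sketch needs. */
struct toy_task {
	int cpu;  /* CPU the task last ran on */
};

/* Pretend topology: two CPUs per NUMA node. */
static int cpu_to_node(int cpu)
{
	return cpu / 2;
}

static int task_cpu(const struct toy_task *p)
{
	return p->cpu;
}

/* The wrapper: one self-describing name instead of a nested call
 * repeated at every call site. */
static int task_node(const struct toy_task *p)
{
	return cpu_to_node(task_cpu(p));
}
```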
* [PATCH v7 3/4] sched/numa: use wrapper function task_faults_idx to calculate index in group_faults
2013-12-12 0:12 ` Wanpeng Li
@ 2013-12-12 0:12 ` Wanpeng Li
0 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2013-12-12 0:12 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra
Cc: Andrew Morton, Rik van Riel, Mel Gorman, Naoya Horiguchi,
linux-kernel, linux-mm, Wanpeng Li
Use the wrapper function task_faults_idx() to calculate the index into the group_faults array.
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 178eaac..d93c86f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -935,7 +935,8 @@ static inline unsigned long group_faults(struct task_struct *p, int nid)
if (!p->numa_group)
return 0;
- return p->numa_group->faults[2*nid] + p->numa_group->faults[2*nid+1];
+ return p->numa_group->faults[task_faults_idx(nid, 0)] +
+ p->numa_group->faults[task_faults_idx(nid, 1)];
}
/*
--
1.8.3.2
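The indexing the wrapper hides: per-node fault statistics are stored as adjacent pairs, so slot 2*nid holds one fault class and 2*nid+1 the other, which is exactly what the open-coded `faults[2*nid] + faults[2*nid+1]` was computing. A stand-alone sketch of the helper (its exact kernel signature is assumed from the diff):

```c
#include <assert.h>

/* Fault stats are laid out as pairs per NUMA node; priv selects which
 * member of the pair (0 or 1). Replacing the literal 2*nid arithmetic
 * with this wrapper keeps the layout knowledge in one place. */
static inline int task_faults_idx(int nid, int priv)
{
	return 2 * nid + priv;
}
```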
* [PATCH v7 4/4] sched/numa: fix period_slot recalculation
2013-12-12 0:12 ` Wanpeng Li
@ 2013-12-12 0:12 ` Wanpeng Li
0 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2013-12-12 0:12 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra
Cc: Andrew Morton, Rik van Riel, Mel Gorman, Naoya Horiguchi,
linux-kernel, linux-mm, Wanpeng Li
Changelog:
v3 -> v4:
* remove period_slot recalculation
The original code is as intended and was meant to scale the difference
between the NUMA_PERIOD_THRESHOLD and local/remote ratio when adjusting
the scan period. The period_slot recalculation can be dropped.
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
kernel/sched/fair.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d93c86f..c962167 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1356,7 +1356,6 @@ static void update_task_scan_period(struct task_struct *p,
* scanning faster if shared accesses dominate as it may
* simply bounce migrations uselessly
*/
- period_slot = DIV_ROUND_UP(diff, NUMA_PERIOD_SLOTS);
ratio = DIV_ROUND_UP(private * NUMA_PERIOD_SLOTS, (private + shared));
diff = (diff * ratio) / NUMA_PERIOD_SLOTS;
}
--
1.8.3.2
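The surviving arithmetic can be sketched in isolation. This is a user-space model of the scaling that remains after the deleted line, assuming NUMA_PERIOD_SLOTS is 10 as in kernels of this era: the period adjustment `diff` is weighted by the share of private faults, so workloads dominated by shared accesses scale the adjustment toward zero rather than chasing bouncing pages.

```c
#include <assert.h>

#define NUMA_PERIOD_SLOTS 10  /* value assumed from the kernel of this era */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Weight diff by private/(private+shared), expressed in slot units.
 * With all-private faults the ratio is NUMA_PERIOD_SLOTS and diff is
 * unchanged; as shared faults dominate, diff shrinks toward zero. */
static long scale_diff(long diff, long priv, long shared)
{
	long ratio = DIV_ROUND_UP(priv * NUMA_PERIOD_SLOTS, priv + shared);

	return diff * ratio / NUMA_PERIOD_SLOTS;
}
```

The deleted `period_slot = DIV_ROUND_UP(diff, ...)` line clobbered nothing that was used afterwards with the intended meaning, which is why the patch can simply drop it.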
* Re: [PATCH v7 1/4] sched/numa: drop sysctl_numa_balancing_settle_count sysctl
2013-12-12 0:12 ` Wanpeng Li
@ 2013-12-12 6:41 ` David Rientjes
0 siblings, 0 replies; 19+ messages in thread
From: David Rientjes @ 2013-12-12 6:41 UTC (permalink / raw)
To: Wanpeng Li
Cc: Ingo Molnar, Peter Zijlstra, Andrew Morton, Rik van Riel,
Mel Gorman, Naoya Horiguchi, linux-kernel, linux-mm
On Thu, 12 Dec 2013, Wanpeng Li wrote:
> commit 887c290e (sched/numa: Decide whether to favour task or group weights
> based on swap candidate relationships) drop the check against
> sysctl_numa_balancing_settle_count, this patch remove the sysctl.
>
What about the references to it in Documentation/sysctl/kernel.txt?
* Re: [PATCH v7 2/4] sched/numa: use wrapper function task_node to get node which task is on
2013-12-12 0:12 ` Wanpeng Li
@ 2013-12-12 6:44 ` David Rientjes
0 siblings, 0 replies; 19+ messages in thread
From: David Rientjes @ 2013-12-12 6:44 UTC (permalink / raw)
To: Wanpeng Li
Cc: Ingo Molnar, Peter Zijlstra, Andrew Morton, Rik van Riel,
Mel Gorman, Naoya Horiguchi, linux-kernel, linux-mm
On Thu, 12 Dec 2013, Wanpeng Li wrote:
> Changelog:
> v2 -> v3:
> * tranlate cpu_to_node(task_cpu(p)) to task_node(p) in sched/debug.c
>
> Use wrapper function task_node to get node which task is on.
>
> Acked-by: Mel Gorman <mgorman@suse.de>
> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Reviewed-by: Rik van Riel <riel@redhat.com>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Acked-by: David Rientjes <rientjes@google.com>
* Re: [PATCH v7 1/4] sched/numa: drop sysctl_numa_balancing_settle_count sysctl
2013-12-12 6:41 ` David Rientjes
@ 2013-12-12 6:48 ` Wanpeng Li
0 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2013-12-12 6:48 UTC (permalink / raw)
To: David Rientjes
Cc: Ingo Molnar, Peter Zijlstra, Andrew Morton, Rik van Riel,
Mel Gorman, Naoya Horiguchi, linux-kernel, linux-mm
On Wed, Dec 11, 2013 at 10:41:47PM -0800, David Rientjes wrote:
>On Thu, 12 Dec 2013, Wanpeng Li wrote:
>
>> commit 887c290e (sched/numa: Decide whether to favour task or group weights
>> based on swap candidate relationships) drop the check against
>> sysctl_numa_balancing_settle_count, this patch remove the sysctl.
>>
>
>What about the references to it in Documentation/sysctl/kernel.txt?
Ah, ok, I will fix it. Thanks.
Regards,
Wanpeng Li
* Re: [PATCH v7 3/4] sched/numa: use wrapper function task_faults_idx to calculate index in group_faults
2013-12-12 0:12 ` Wanpeng Li
@ 2013-12-12 6:48 ` David Rientjes
-1 siblings, 0 replies; 19+ messages in thread
From: David Rientjes @ 2013-12-12 6:48 UTC (permalink / raw)
To: Wanpeng Li
Cc: Ingo Molnar, Peter Zijlstra, Andrew Morton, Rik van Riel,
Mel Gorman, Naoya Horiguchi, linux-kernel, linux-mm
On Thu, 12 Dec 2013, Wanpeng Li wrote:
> Use wrapper function task_faults_idx to calculate index in group_faults.
>
> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Acked-by: Mel Gorman <mgorman@suse.de>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Acked-by: David Rientjes <rientjes@google.com>
The naming of task_faults_idx() is a little unfortunate since it is now
used to index into both task_faults() and group_faults(), though.
* Re: [PATCH v7 4/4] sched/numa: fix period_slot recalculation
2013-12-12 0:12 ` Wanpeng Li
@ 2013-12-12 6:50 ` David Rientjes
0 siblings, 0 replies; 19+ messages in thread
From: David Rientjes @ 2013-12-12 6:50 UTC (permalink / raw)
To: Wanpeng Li
Cc: Ingo Molnar, Peter Zijlstra, Andrew Morton, Rik van Riel,
Mel Gorman, Naoya Horiguchi, linux-kernel, linux-mm
On Thu, 12 Dec 2013, Wanpeng Li wrote:
> Changelog:
> v3 -> v4:
> * remove period_slot recalculation
>
> The original code is as intended and was meant to scale the difference
> between the NUMA_PERIOD_THRESHOLD and local/remote ratio when adjusting
> the scan period. The period_slot recalculation can be dropped.
>
> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Acked-by: Mel Gorman <mgorman@suse.de>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Acked-by: David Rientjes <rientjes@google.com>
end of thread, other threads:[~2013-12-12 6:50 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-12-12 0:12 [PATCH v7 sched part 0/4] sched: numa: several fixups Wanpeng Li
2013-12-12 0:12 ` [PATCH v7 1/4] sched/numa: drop sysctl_numa_balancing_settle_count sysctl Wanpeng Li
2013-12-12 6:41 ` David Rientjes
2013-12-12 6:48 ` Wanpeng Li
2013-12-12 0:12 ` [PATCH v7 2/4] sched/numa: use wrapper function task_node to get node which task is on Wanpeng Li
2013-12-12 6:44 ` David Rientjes
2013-12-12 0:12 ` [PATCH v7 3/4] sched/numa: use wrapper function task_faults_idx to calculate index in group_faults Wanpeng Li
2013-12-12 6:48 ` David Rientjes
2013-12-12 0:12 ` [PATCH v7 4/4] sched/numa: fix period_slot recalculation Wanpeng Li
2013-12-12 6:50 ` David Rientjes