* [PATCH] sched/fair: Drop the redundant setting of recent_used_cpu
@ 2021-09-30 6:59 Li RongQing
From: Li RongQing @ 2021-09-30 6:59 UTC
To: linux-kernel, lirongqing, mingo, peterz
p->recent_used_cpu has already been set to prev before this check, so setting it again here is redundant.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
kernel/sched/fair.c | 8 +-------
1 files changed, 1 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7b9fe8c..ec42eaa 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6437,14 +6437,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
cpus_share_cache(recent_used_cpu, target) &&
(available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
- asym_fits_capacity(task_util, recent_used_cpu)) {
- /*
- * Replace recent_used_cpu with prev as it is a potential
- * candidate for the next wake:
- */
- p->recent_used_cpu = prev;
+ asym_fits_capacity(task_util, recent_used_cpu))
return recent_used_cpu;
- }
/*
* For asymmetric CPU capacity systems, our domain of interest is
--
1.7.1
* Re: [PATCH] sched/fair: Drop the redundant setting of recent_used_cpu
2021-09-30 6:59 [PATCH] sched/fair: Drop the redundant setting of recent_used_cpu Li RongQing
@ 2021-09-30 8:34 ` Dietmar Eggemann
From: Dietmar Eggemann @ 2021-09-30 8:34 UTC
To: Li RongQing, linux-kernel, mingo, peterz
On 30/09/2021 08:59, Li RongQing wrote:
> p->recent_used_cpu has already been set to prev before this check, so setting it again here is redundant.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
> kernel/sched/fair.c | 8 +-------
> 1 files changed, 1 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 7b9fe8c..ec42eaa 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6437,14 +6437,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> cpus_share_cache(recent_used_cpu, target) &&
> (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
> cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
> - asym_fits_capacity(task_util, recent_used_cpu)) {
> - /*
> - * Replace recent_used_cpu with prev as it is a potential
> - * candidate for the next wake:
> - */
> - p->recent_used_cpu = prev;
> + asym_fits_capacity(task_util, recent_used_cpu))
> return recent_used_cpu;
> - }
>
> /*
> * For asymmetric CPU capacity systems, our domain of interest is
>
Looks like this has already been fixed in:
https://lore.kernel.org/r/20210928103544.27489-1-vincent.guittot@linaro.org