* [RFC PATCH] sched: Queue task on wakelist in the same llc if the wakee cpu is idle
@ 2022-05-13  6:24 Tianchen Ding
  2022-05-13  6:37 ` Peter Zijlstra
  2022-05-17 13:58 ` Mel Gorman
  0 siblings, 2 replies; 5+ messages in thread
From: Tianchen Ding @ 2022-05-13  6:24 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira
  Cc: linux-kernel

We noticed that commit 518cd6234178 ("sched: Only queue remote wakeups
when crossing cache boundaries") disabled queuing tasks on the wakelist
when the cpus share llc. This was because, at that time, the scheduler
had to send IPIs to do ttwu_queue_wakelist(). Nowadays,
ttwu_queue_wakelist() also supports TIF_POLLING, so this is no longer a
problem when the wakee cpu is idle polling.

Benefits:
  Queuing the task on the idle wakee cpu can help improve performance
  on the waker cpu and utilization on the wakee cpu, and further improve
  locality because the wakee cpu can handle its own rq. This patch helps
  improve rt on our real Java workloads where wakeups happen frequently.

Does this patch bring IPI flooding?
  For archs with TIF_POLLING_NRFLAG (e.g., x86), there will be no
  difference if the wakee cpu is idle polling. If the wakee cpu is idle
  but not polling, the later check_preempt_curr() will send an IPI too.

  For archs without TIF_POLLING_NRFLAG (e.g., arm64), the IPI is
  unavoidable, since the later check_preempt_curr() would send an IPI
  anyway when the wakee cpu is idle.
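
For reference, the wakelist notification path already elides the IPI
when the wakee is polling; roughly (condensed from mainline
send_call_function_single_ipi() in kernel/sched/core.c, shown here for
reference only, not part of this patch):

  void send_call_function_single_ipi(int cpu)
  {
          struct rq *rq = cpu_rq(cpu);

          /*
           * If the remote idle task is polling, setting
           * TIF_NEED_RESCHED via set_nr_if_polling() is enough: the
           * polling loop notices the flag, so no IPI is sent.
           */
          if (!set_nr_if_polling(rq->idle))
                  arch_send_call_function_single_ipi(cpu);
          else
                  trace_sched_wake_idle_without_ipi(cpu);
  }

The direct wakeup path makes the analogous set_nr_and_not_polling()
check in resched_curr(), which is why the IPI counts match either way.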

Benchmark:
running schbench -m 2 -t 8 on 8269CY:

without patch:
Latency percentiles (usec)
        50.0000th: 10
        75.0000th: 14
        90.0000th: 16
        95.0000th: 16
        *99.0000th: 17
        99.5000th: 20
        99.9000th: 23
        min=0, max=28

with patch:
Latency percentiles (usec)
        50.0000th: 6
        75.0000th: 8
        90.0000th: 9
        95.0000th: 9
        *99.0000th: 10
        99.5000th: 10
        99.9000th: 14
        min=0, max=16

We've also tested unixbench and saw about a 10% improvement on Pipe-based
Context Switching, with no performance regression on the other test cases.

For arm64, we've tested schbench and unixbench on Kunpeng920; the
results show that the improvement is not as obvious as on x86, and
there's no performance regression.

Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
---
 kernel/sched/core.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 51efaabac3e4..cae5011a8b1f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3820,6 +3820,9 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 	if (!cpu_active(cpu))
 		return false;
 
+	if (cpu == smp_processor_id())
+		return false;
+
 	/*
 	 * If the CPU does not share cache, then queue the task on the
 	 * remote rqs wakelist to avoid accessing remote data.
@@ -3827,6 +3830,12 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 	if (!cpus_share_cache(smp_processor_id(), cpu))
 		return true;
 
+	/*
+	 * If the CPU is idle, let itself do activation to improve utilization.
+	 */
+	if (available_idle_cpu(cpu))
+		return true;
+
 	/*
 	 * If the task is descheduling and the only running task on the
 	 * CPU then use the wakelist to offload the task activation to
@@ -3842,9 +3851,6 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
 {
 	if (sched_feat(TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
-		if (WARN_ON_ONCE(cpu == smp_processor_id()))
-			return false;
-
 		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
 		__ttwu_queue_wakelist(p, cpu, wake_flags);
 		return true;
-- 
2.27.0



* Re: [RFC PATCH] sched: Queue task on wakelist in the same llc if the wakee cpu is idle
  2022-05-13  6:24 [RFC PATCH] sched: Queue task on wakelist in the same llc if the wakee cpu is idle Tianchen Ding
@ 2022-05-13  6:37 ` Peter Zijlstra
  2022-05-13  7:05   ` Tianchen Ding
  2022-05-17 13:58 ` Mel Gorman
  1 sibling, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2022-05-13  6:37 UTC (permalink / raw)
  To: Tianchen Ding
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel

On Fri, May 13, 2022 at 02:24:27PM +0800, Tianchen Ding wrote:
> We noticed that commit 518cd6234178 ("sched: Only queue remote wakeups
> when crossing cache boundaries") disabled queuing tasks on the wakelist
> when the cpus share llc. This was because, at that time, the scheduler
> had to send IPIs to do ttwu_queue_wakelist().

No; this was because of cache bouncing.

> Nowadays, ttwu_queue_wakelist() also
> supports TIF_POLLING, so this is no longer a problem when the wakee cpu
> is idle polling.
> 
> Benefits:
>   Queuing the task on the idle wakee cpu can help improve performance
>   on the waker cpu and utilization on the wakee cpu, and further improve
>   locality because the wakee cpu can handle its own rq. This patch helps
>   improve rt on our real Java workloads where wakeups happen frequently.
> 
> Does this patch bring IPI flooding?
>   For archs with TIF_POLLING_NRFLAG (e.g., x86), there will be no
>   difference if the wakee cpu is idle polling. If the wakee cpu is idle
>   but not polling, the later check_preempt_curr() will send an IPI too.
> 
>   For archs without TIF_POLLING_NRFLAG (e.g., arm64), the IPI is
>   unavoidable, since the later check_preempt_curr() would send an IPI
>   anyway when the wakee cpu is idle.
> 
> Benchmark:
> running schbench -m 2 -t 8 on 8269CY:
> 
> without patch:
> Latency percentiles (usec)
>         50.0000th: 10
>         75.0000th: 14
>         90.0000th: 16
>         95.0000th: 16
>         *99.0000th: 17
>         99.5000th: 20
>         99.9000th: 23
>         min=0, max=28
> 
> with patch:
> Latency percentiles (usec)
>         50.0000th: 6
>         75.0000th: 8
>         90.0000th: 9
>         95.0000th: 9
>         *99.0000th: 10
>         99.5000th: 10
>         99.9000th: 14
>         min=0, max=16
> 
> We've also tested unixbench and saw about a 10% improvement on Pipe-based
> Context Switching, with no performance regression on the other test cases.
> 
> For arm64, we've tested schbench and unixbench on Kunpeng920; the
> results show that

What is a kunpeng and how does its topology look?

> the improvement is not as obvious as on x86, and
> there's no performance regression.

x86 is wide and varied; what x86 did you test?


* Re: [RFC PATCH] sched: Queue task on wakelist in the same llc if the wakee cpu is idle
  2022-05-13  6:37 ` Peter Zijlstra
@ 2022-05-13  7:05   ` Tianchen Ding
  0 siblings, 0 replies; 5+ messages in thread
From: Tianchen Ding @ 2022-05-13  7:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, linux-kernel

On 2022/5/13 14:37, Peter Zijlstra wrote:
> On Fri, May 13, 2022 at 02:24:27PM +0800, Tianchen Ding wrote:
>> We noticed that commit 518cd6234178 ("sched: Only queue remote wakeups
>> when crossing cache boundaries") disabled queuing tasks on the wakelist
>> when the cpus share llc. This was because, at that time, the scheduler
>> had to send IPIs to do ttwu_queue_wakelist().
> 
> No; this was because of cache bouncing.

As I understand it, avoiding cache bouncing is the reason to do 
queue_wakelist across llc boundaries. The same reasoning can explain 
why we try to do queue_wakelist within the same llc now: it should be 
better for the wakee cpu to handle its own rq. Will there be some other 
side effects?

> 
>> Nowadays, ttwu_queue_wakelist() also
>> supports TIF_POLLING, so this is no longer a problem when the wakee cpu
>> is idle polling.
>>
>> Benefits:
>>    Queuing the task on the idle wakee cpu can help improve performance
>>    on the waker cpu and utilization on the wakee cpu, and further improve
>>    locality because the wakee cpu can handle its own rq. This patch helps
>>    improve rt on our real Java workloads where wakeups happen frequently.
>>
>> Does this patch bring IPI flooding?
>>    For archs with TIF_POLLING_NRFLAG (e.g., x86), there will be no
>>    difference if the wakee cpu is idle polling. If the wakee cpu is idle
>>    but not polling, the later check_preempt_curr() will send an IPI too.
>>
>>    For archs without TIF_POLLING_NRFLAG (e.g., arm64), the IPI is
>>    unavoidable, since the later check_preempt_curr() would send an IPI
>>    anyway when the wakee cpu is idle.
>>
>> Benchmark:
>> running schbench -m 2 -t 8 on 8269CY:
>>
>> without patch:
>> Latency percentiles (usec)
>>          50.0000th: 10
>>          75.0000th: 14
>>          90.0000th: 16
>>          95.0000th: 16
>>          *99.0000th: 17
>>          99.5000th: 20
>>          99.9000th: 23
>>          min=0, max=28
>>
>> with patch:
>> Latency percentiles (usec)
>>          50.0000th: 6
>>          75.0000th: 8
>>          90.0000th: 9
>>          95.0000th: 9
>>          *99.0000th: 10
>>          99.5000th: 10
>>          99.9000th: 14
>>          min=0, max=16
>>
>> We've also tested unixbench and saw about a 10% improvement on Pipe-based
>> Context Switching, with no performance regression on the other test cases.
>>
>> For arm64, we've tested schbench and unixbench on Kunpeng920; the
>> results show that
> 
> What is a kunpeng and how does it's topology look?

It's an arm64 processor produced by Huawei. Its topology has NUMA and 
clusters; see the commit log of c5e22feffdd7 ("topology: Represent 
clusters of CPUs within a die") for details.
In fact I also tried to test on Ampere, but there may be something 
wrong with my machine: the kernel only gets up to l2 cache info (which 
means each cpu has a different sd_llc_id, so the patch takes no 
effect). :-(
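
(For reference, cpus_share_cache() just compares the per-cpu sd_llc_id;
condensed from mainline kernel/sched/core.c:)

  bool cpus_share_cache(int this_cpu, int that_cpu)
  {
          if (this_cpu == that_cpu)
                  return true;

          /* Same llc iff both cpus resolved to the same llc domain id. */
          return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
  }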

> 
>> the improvement is not as obvious as on x86, and
>> there's no performance regression.
> 
> x86 is wide and varied; what x86 did you test?

I've tested on an Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz. Do 
you need more info on other machines?


* Re: [RFC PATCH] sched: Queue task on wakelist in the same llc if the wakee cpu is idle
  2022-05-13  6:24 [RFC PATCH] sched: Queue task on wakelist in the same llc if the wakee cpu is idle Tianchen Ding
  2022-05-13  6:37 ` Peter Zijlstra
@ 2022-05-17 13:58 ` Mel Gorman
  2022-05-18  8:05   ` Tianchen Ding
  1 sibling, 1 reply; 5+ messages in thread
From: Mel Gorman @ 2022-05-17 13:58 UTC (permalink / raw)
  To: Tianchen Ding
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, linux-kernel

On Fri, May 13, 2022 at 02:24:27PM +0800, Tianchen Ding wrote:
> We noticed that commit 518cd6234178 ("sched: Only queue remote wakeups
> when crossing cache boundaries") disabled queuing tasks on the wakelist
> when the cpus share llc. This was because, at that time, the scheduler
> had to send IPIs to do ttwu_queue_wakelist(). Nowadays,
> ttwu_queue_wakelist() also supports TIF_POLLING, so this is no longer a
> problem when the wakee cpu is idle polling.
> 
> Benefits:
>   Queuing the task on the idle wakee cpu can help improve performance
>   on the waker cpu and utilization on the wakee cpu, and further improve
>   locality because the wakee cpu can handle its own rq. This patch helps
>   improve rt on our real Java workloads where wakeups happen frequently.
> 
> Does this patch bring IPI flooding?
>   For archs with TIF_POLLING_NRFLAG (e.g., x86), there will be no
>   difference if the wakee cpu is idle polling. If the wakee cpu is idle
>   but not polling, the later check_preempt_curr() will send an IPI too.
> 

That's a big if. Polling does not last very long -- somewhere between 10
and 62 microseconds for HZ=1000 or 250 microseconds for HZ=250. It may
not bring IPI flooding depending on the workload but it will increase
IPI counts.

>   For archs without TIF_POLLING_NRFLAG (e.g., arm64), the IPI is
>   unavoidable, since the later check_preempt_curr() would send an IPI
>   anyway when the wakee cpu is idle.
> 
> Benchmark:
> running schbench -m 2 -t 8 on 8269CY:
> 
> without patch:
> Latency percentiles (usec)
>         50.0000th: 10
>         75.0000th: 14
>         90.0000th: 16
>         95.0000th: 16
>         *99.0000th: 17
>         99.5000th: 20
>         99.9000th: 23
>         min=0, max=28
> 
> with patch:
> Latency percentiles (usec)
>         50.0000th: 6
>         75.0000th: 8
>         90.0000th: 9
>         95.0000th: 9
>         *99.0000th: 10
>         99.5000th: 10
>         99.9000th: 14
>         min=0, max=16
> 
> We've also tested unixbench and saw about a 10% improvement on Pipe-based
> Context Switching, with no performance regression on the other test cases.

It'll show a benefit for any heavily communicating tasks that rapidly
enter/exit idle because the wakee CPU may still be polling due to the
rapid enter/exit pattern.

> Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
> ---
>  kernel/sched/core.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 51efaabac3e4..cae5011a8b1f 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3820,6 +3820,9 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
>  	if (!cpu_active(cpu))
>  		return false;
>  
> +	if (cpu == smp_processor_id())
> +		return false;
> +
>  	/*
>  	 * If the CPU does not share cache, then queue the task on the
>  	 * remote rqs wakelist to avoid accessing remote data.

Is this suggesting that the running CPU should try sending an IPI to
itself?

> @@ -3827,6 +3830,12 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
>  	if (!cpus_share_cache(smp_processor_id(), cpu))
>  		return true;
>  
> +	/*
> +	 * If the CPU is idle, let itself do activation to improve utilization.
> +	 */
> +	if (available_idle_cpu(cpu))
> +		return true;
> +
>  	/*
>  	 * If the task is descheduling and the only running task on the
>  	 * CPU then use the wakelist to offload the task activation to

It is highly likely that the target CPU is idle given that we almost
certainly called select_idle_sibling() before reaching here.

I suspect what you are trying to do is use the wakelist regardless of
locality if the CPU is polling, because polling means an IPI is
avoided, but that's not what the patch does.

> @@ -3842,9 +3851,6 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
>  static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
>  {
>  	if (sched_feat(TTWU_QUEUE) && ttwu_queue_cond(cpu, wake_flags)) {
> -		if (WARN_ON_ONCE(cpu == smp_processor_id()))
> -			return false;
> -
>  		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
>  		__ttwu_queue_wakelist(p, cpu, wake_flags);
>  		return true;




> -- 
> 2.27.0
> 

-- 
Mel Gorman
SUSE Labs


* Re: [RFC PATCH] sched: Queue task on wakelist in the same llc if the wakee cpu is idle
  2022-05-17 13:58 ` Mel Gorman
@ 2022-05-18  8:05   ` Tianchen Ding
  0 siblings, 0 replies; 5+ messages in thread
From: Tianchen Ding @ 2022-05-18  8:05 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, linux-kernel

On 2022/5/17 21:58, Mel Gorman wrote:
> On Fri, May 13, 2022 at 02:24:27PM +0800, Tianchen Ding wrote:
>> We noticed that commit 518cd6234178 ("sched: Only queue remote wakeups
>> when crossing cache boundaries") disabled queuing tasks on the wakelist
>> when the cpus share llc. This was because, at that time, the scheduler
>> had to send IPIs to do ttwu_queue_wakelist(). Nowadays,
>> ttwu_queue_wakelist() also supports TIF_POLLING, so this is no longer a
>> problem when the wakee cpu is idle polling.
>>
>> Benefits:
>>    Queuing the task on the idle wakee cpu can help improve performance
>>    on the waker cpu and utilization on the wakee cpu, and further improve
>>    locality because the wakee cpu can handle its own rq. This patch helps
>>    improve rt on our real Java workloads where wakeups happen frequently.
>>
>> Does this patch bring IPI flooding?
>>    For archs with TIF_POLLING_NRFLAG (e.g., x86), there will be no
>>    difference if the wakee cpu is idle polling. If the wakee cpu is idle
>>    but not polling, the later check_preempt_curr() will send an IPI too.
>>
> 
> That's a big if. Polling does not last very long -- somewhere between 10
> and 62 microseconds for HZ=1000 or 250 microseconds for HZ=250. It may
> not bring IPI flooding depending on the workload but it will increase
> IPI counts.

This patch only takes effect when the wakee cpu is:
1) idle polling
2) idle but not polling

For 1), there will be no IPI with or without this patch.

For 2), there will always be an IPI, with or without this patch.
Without this patch: the waker cpu will enqueue the task and check 
preempt. Since "idle" is sure to be preempted, the waker cpu must send 
a resched IPI.
With this patch: the waker cpu will put the task on the wakelist of the 
wakee cpu, and send an IPI.

So there should be no difference in IPI counts.

> 
>>    For archs without TIF_POLLING_NRFLAG (e.g., arm64), the IPI is
>>    unavoidable, since the later check_preempt_curr() would send an IPI
>>    anyway when the wakee cpu is idle.
>>
>> Benchmark:
>> running schbench -m 2 -t 8 on 8269CY:
>>
>> without patch:
>> Latency percentiles (usec)
>>          50.0000th: 10
>>          75.0000th: 14
>>          90.0000th: 16
>>          95.0000th: 16
>>          *99.0000th: 17
>>          99.5000th: 20
>>          99.9000th: 23
>>          min=0, max=28
>>
>> with patch:
>> Latency percentiles (usec)
>>          50.0000th: 6
>>          75.0000th: 8
>>          90.0000th: 9
>>          95.0000th: 9
>>          *99.0000th: 10
>>          99.5000th: 10
>>          99.9000th: 14
>>          min=0, max=16
>>
>> We've also tested unixbench and saw about a 10% improvement on Pipe-based
>> Context Switching, with no performance regression on the other test cases.
> 
> It'll show a benefit for any heavily communicating tasks that rapidly
> enter/exit idle because the wakee CPU may still be polling due to the
> rapid enter/exit pattern.
> 
>> Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
>> ---
>>   kernel/sched/core.c | 12 +++++++++---
>>   1 file changed, 9 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 51efaabac3e4..cae5011a8b1f 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -3820,6 +3820,9 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
>>   	if (!cpu_active(cpu))
>>   		return false;
>>   
>> +	if (cpu == smp_processor_id())
>> +		return false;
>> +
>>   	/*
>>   	 * If the CPU does not share cache, then queue the task on the
>>   	 * remote rqs wakelist to avoid accessing remote data.
> 
> Is this suggesting that the running CPU should try sending an IPI to
> itself?
> 

No. When the running CPU is the same as the wakee cpu, 
ttwu_queue_cond() will return false, so ttwu_queue_wakelist() will be 
skipped. This logic is unchanged with or without this patch.

We moved this if() forward into ttwu_queue_cond(); it was originally in 
ttwu_queue_wakelist().

The reason we need this check at all is explained in b6e13e85829f0 
("sched/core: Fix ttwu() race").
The reason we moved it forward is that, without this patch, 
!cpus_share_cache() covered the condition, but with this patch we need 
an explicit check.
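
For clarity, here is how ttwu_queue_cond() reads with this patch
applied (assembled from the hunks above; the descheduling check at the
tail of the function is unchanged and elided):

  static inline bool ttwu_queue_cond(int cpu, int wake_flags)
  {
          if (!cpu_active(cpu))
                  return false;

          /* Never queue a wakeup to ourselves (see b6e13e85829f0). */
          if (cpu == smp_processor_id())
                  return false;

          /* Cross-llc: queue remotely to avoid touching remote data. */
          if (!cpus_share_cache(smp_processor_id(), cpu))
                  return true;

          /* New: the wakee is idle, let it do the activation itself. */
          if (available_idle_cpu(cpu))
                  return true;

          /* ... descheduling / nr_running <= 1 check, unchanged ... */
  }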

>> @@ -3827,6 +3830,12 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
>>   	if (!cpus_share_cache(smp_processor_id(), cpu))
>>   		return true;
>>   
>> +	/*
>> +	 * If the CPU is idle, let itself do activation to improve utilization.
>> +	 */
>> +	if (available_idle_cpu(cpu))
>> +		return true;
>> +
>>   	/*
>>   	 * If the task is descheduling and the only running task on the
>>   	 * CPU then use the wakelist to offload the task activation to
> 
> It is highly likely that the target CPU is idle given that we almost
> certainly called select_idle_sibling() before reaching here.
> 
> I suspect what you are trying to do is use the wakelist regardless of
> locality if the CPU is polling, because polling means an IPI is
> avoided, but that's not what the patch does.
> 

As I explained above, the IPI is not my key point. In fact, without my 
patch, if the wakee cpu is polling, there will be no IPI either. See 
resched_curr() -> trace_sched_wake_idle_without_ipi().
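
Roughly (condensed from mainline resched_curr() in kernel/sched/core.c;
the lockdep and already-need-resched checks are elided):

  void resched_curr(struct rq *rq)
  {
          struct task_struct *curr = rq->curr;
          int cpu = cpu_of(rq);

          if (cpu == smp_processor_id()) {
                  set_tsk_need_resched(curr);
                  set_preempt_need_resched();
                  return;
          }

          /* A polling wakee only needs TIF_NEED_RESCHED, not an IPI. */
          if (set_nr_and_not_polling(curr))
                  smp_send_reschedule(cpu);
          else
                  trace_sched_wake_idle_without_ipi(cpu);
  }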

My point is to improve rt and idle cpu utilization.

Consider the normal condition (CPU0 and CPU1 share the same llc) 
without this patch (original path):

               CPU0                            CPU1

             select_task_rq()                 idle
             rq_lock(CPU1->rq)
             enqueue_task(CPU1->rq)
             notify CPU1 (by sending IPI or CPU1 polling)

                                              resched()

With this patch:

               CPU0                            CPU1

             select_task_rq()                 idle
             add to wakelist of CPU1
             notify CPU1 (by sending IPI or CPU1 polling)

                                              rq_lock(CPU1->rq)
                                              enqueue_task(CPU1->rq)
                                              resched()

We see that CPU0 can finish its work earlier: it only needs to put the 
task on the wakelist and return, while CPU1, being idle, handles its 
own runqueue data itself.
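
For reference, the rq_lock/enqueue_task/resched steps on CPU1 in the
second diagram correspond to the wakee draining its wakelist; condensed
from mainline sched_ttwu_pending() in kernel/sched/core.c (the
ttwu_pending and on_cpu handling is elided):

  void sched_ttwu_pending(void *arg)
  {
          struct llist_node *llist = arg;
          struct rq *rq = this_rq();
          struct task_struct *p, *t;
          struct rq_flags rf;

          if (!llist)
                  return;

          /* The wakee takes its own rq lock and activates each task. */
          rq_lock_irqsave(rq, &rf);
          update_rq_clock(rq);

          llist_for_each_entry_safe(p, t, llist, wake_entry.llist)
                  ttwu_do_activate(rq, p,
                                   p->sched_remote_wakeup ? WF_MIGRATED : 0,
                                   &rf);

          rq_unlock_irqrestore(rq, &rf);
  }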


