Linux-Trace-Devel Archive on lore.kernel.org
* Re: [PATCH] sched/fair: Load balance aggressively for SCHED_IDLE CPUs
       [not found]   ` <20200107112518.fqqzldnflqxonptf@vireshk-i7>
@ 2020-01-07 17:31     ` Steven Rostedt
  2020-01-08  6:36       ` Viresh Kumar
  0 siblings, 1 reply; 2+ messages in thread
From: Steven Rostedt @ 2020-01-07 17:31 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Ben Segall, Mel Gorman, linux-kernel,
	Yordan Karadzhov, Linux Trace Devel

On Tue, 7 Jan 2020 16:55:18 +0530
Viresh Kumar <viresh.kumar@linaro.org> wrote:

> Hi Steven,
> 
> On 02-01-20, 12:29, Steven Rostedt wrote:
> > On Tue, 24 Dec 2019 10:43:30 +0530
> > Viresh Kumar <viresh.kumar@linaro.org> wrote:
> >   
> > > This is tested on ARM64 Hikey620 platform (octa-core) with the help of
> > > rt-app and it is verified, using kernel traces, that the newly
> > > SCHED_IDLE CPU does load balancing shortly after it becomes SCHED_IDLE
> > > and pulls tasks from other busy CPUs.  
> > 
> > Can you post the actual steps you used to test this and show the before
> > and after results? Then others can reproduce what you have shown and
> > even run other tests to see if this change has any other side effects.  
> 
> I have attached the json file I used on my octa-core hikey platform along with
> before/after kernelshark screenshots with this email.
> 
> The json file does the following:
> 
> - it first creates 8 always-running sched_idle tasks (thread-idle-X) and lets
>   them spread across all 8 CPUs.
> 
> - it then creates 8 cfs tasks (thread-cfs-X) that run 50ms every 100ms and
>   also spread across the 8 cores.
>   
>   One of these threads (thread-cfs2-7) runs for only 1ms instead of 50ms once
>   every 6 periods. During this 6th period, a 9th task (thread-cfs3-8) wakes up.
> 
> - The 9th cfs task (thread-cfs3-8) is timed in a way that it wakes up only
>   during the 6th period of thread-cfs2-7. This thread runs 50ms every 600ms.
>   
>   Most of the time, thread-cfs3-8 doesn't wake up on the CPU with the short
>   thread-cfs2-7 task, so after 1ms we have one CPU running only sched_idle
>   tasks while on another CPU two CFS tasks compete for 100ms.
>   
>   - the 9th task has to wait a full sched slice (12ms) before its 1st schedule
>   - the 2 cfs tasks that compete for the same CPU need 100ms to complete
>     instead of 50ms (51ms in this case).
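> 
>   For reference, a minimal rt-app json fragment in this spirit could look
>   like the sketch below. The field names follow rt-app's workload grammar,
>   but the task names and values here are illustrative, not the exact file
>   I attached:
> 
>     {
>         "global" : {
>             "duration" : 10,
>             "default_policy" : "SCHED_OTHER"
>         },
>         "tasks" : {
>             "thread-idle" : {
>                 "policy" : "SCHED_IDLE",
>                 "loop" : -1,
>                 "run" : 100000
>             },
>             "thread-cfs" : {
>                 "loop" : -1,
>                 "run" : 50000,
>                 "timer" : { "ref" : "tick", "period" : 100000 }
>             }
>         }
>     }
> 
>   Here "run" and "period" are in microseconds, so thread-cfs runs 50ms out
>   of every 100ms period, while thread-idle busy-loops at SCHED_IDLE priority.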
> 
> The before.jpg image shows what happened before this patch was applied.
> thread-cfs3-8 doesn't migrate to CPU4, which was running only sched-idle tasks
> during the 6th period of thread-cfs2-7. The migration did happen when
> thread-cfs3-8 woke up the next time (after 600 ms), but this isn't shown in
> the picture.
> 
> The after.jpg image shows what happened after this patch was applied. On the
> very first instance when thread-cfs3-8 gets a chance to run, the load balancer
> starts balancing the CPUs. It migrates a lot of sched-idle tasks to CPU7 first
> (CPU7 was running thread-cfs2-7 at the time), and finally migrates the
> thread-cfs3-8 task to CPU7.
> 
> I have done some markings on the jpg files as well to show the tasks and
> migration points.
> 
> Please let me know if anyone needs further clarification. Thanks.
> 

Thanks. I think I was able to reproduce it. Speaking of, I'd
recommend that you download and install the latest KernelShark
(https://www.kernelshark.org), as it looks like you're still using the
pre-1.0 version (which is now deprecated). One nice feature of the
latest is that it has json session files that you can pass to others.
If you install KernelShark 1.0, then you can do:

 1) download http://rostedt.org/private/sched_idle_ks_data.tar.bz2
 2) extract it:
     $ cd /tmp
     $ wget http://rostedt.org/private/sched_idle_ks_data.tar.bz2
     $ tar xvf sched_idle_ks_data.tar.bz2
     $ cd sched_idle_ks_data
 3) Open up each of the data files and it will bring you right to
    where you want to be.
     $ kernelshark -s sched_idle_ks-before.json &
     $ kernelshark -s sched_idle_ks-after.json &

And you can see if I duplicated what you explained ;-)

-- Steve


* Re: [PATCH] sched/fair: Load balance aggressively for SCHED_IDLE CPUs
  2020-01-07 17:31     ` [PATCH] sched/fair: Load balance aggressively for SCHED_IDLE CPUs Steven Rostedt
@ 2020-01-08  6:36       ` Viresh Kumar
  0 siblings, 0 replies; 2+ messages in thread
From: Viresh Kumar @ 2020-01-08  6:36 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Ben Segall, Mel Gorman, linux-kernel,
	Yordan Karadzhov, Linux Trace Devel

On 07-01-20, 12:31, Steven Rostedt wrote:
> Thanks. I think I was able to reproduce it.

Great.

> Speaking of, I'd
> recommend that you download and install the latest KernelShark
> (https://www.kernelshark.org), as it looks like you're still using the
> pre-1.0 version (which is now deprecated).

I've had the latest version of trace-cmd for a long time; I'm not sure how
kernelshark was left out (I must have run install_gui as well). Thanks for
noticing, though.

> One nice feature of the
> latest is that it has json session files that you can pass to others.

Nice.

-- 
viresh

