* Re: scale sysctl_sched_shares_ratelimit with nr_cpus
From: Ingo Molnar @ 2008-08-18 6:52 UTC
To: Zhang, Yanmin; +Cc: a.p.zijlstra, Linux Kernel Mailing List
* Zhang, Yanmin <yanmin.zhang@intel.com> wrote:
> Ingo,
>
> My Linux mailbox is locked, so I am sending from another mailbox.
>
> I tested the "scale sysctl_sched_shares_ratelimit with nr_cpus" patch
> on a 16-core Tigerton with volanoMark, with CONFIG_GROUP_SCHED=y.
> Compared with pure 2.6.27-rc3, the patched kernel shows about a 15%
> improvement, and volanoMark seems to run more smoothly with it.
cool. It's already upstream (post-rc3 commit):
55cd534: sched: scale sysctl_sched_shares_ratelimit with nr_cpus
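( the effect of that commit is visible at runtime: the ratelimit sysctl's
  default ends up scaled by the CPU count. A quick way to compare it
  across kernels, assuming a CONFIG_SCHED_DEBUG build where the
  scheduler sysctls are exposed: )

  # on the patched kernel this value scales with the number of CPUs:
  cat /proc/sys/kernel/sched_shares_ratelimit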
Ingo
* RE: scale sysctl_sched_shares_ratelimit with nr_cpus
From: Zhang, Yanmin @ 2008-08-18 6:54 UTC
To: Ingo Molnar; +Cc: a.p.zijlstra, Linux Kernel Mailing List
>>-----Original Message-----
>>From: Ingo Molnar [mailto:mingo@elte.hu]
>>Sent: Monday, August 18, 2008 2:52 PM
>>To: Zhang, Yanmin
>>Cc: a.p.zijlstra@chello.nl; Linux Kernel Mailing List
>>Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>>
>>* Zhang, Yanmin <yanmin.zhang@intel.com> wrote:
>>
>>> Ingo,
>>>
>>> My Linux mailbox is locked, so I am sending from another mailbox.
>>>
>>> I tested the "scale sysctl_sched_shares_ratelimit with nr_cpus" patch
>>> on a 16-core Tigerton with volanoMark, with CONFIG_GROUP_SCHED=y.
>>> Compared with pure 2.6.27-rc3, the patched kernel shows about a 15%
>>> improvement, and volanoMark seems to run more smoothly with it.
>>
>>cool. It's already upstream (post-rc3 commit):
[YM] But compared with 2.6.26, the volanoMark result still shows about
a 60% regression.
>>
>> 55cd534: sched: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>> Ingo
* Re: scale sysctl_sched_shares_ratelimit with nr_cpus
From: Ingo Molnar @ 2008-08-18 7:01 UTC
To: Zhang, Yanmin; +Cc: a.p.zijlstra, Linux Kernel Mailing List
* Zhang, Yanmin <yanmin.zhang@intel.com> wrote:
> >>cool. It's already upstream (post-rc3 commit):
> [YM] But compared with 2.6.26, the volanoMark result still shows about
> a 60% regression.
Does a scheduler trace show anything about why that drop happens? Do
something like this to trace the scheduler:
assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
echo 1 > /debug/tracing/tracing_cpumask
echo sched_switch > /debug/tracing/current_tracer
cat /debug/tracing/trace_pipe > trace.txt
( regarding tracing_cpumask: trace only one CPU to make sure volanoMark
is not disturbed too much by tracing on many CPUs. )
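( a consolidated, hypothetical version of those steps, assuming
CONFIG_SCHED_TRACER=y and debugfs mounted at /debug as above: )

  mount -t debugfs nodev /debug 2>/dev/null  # in case it is not mounted
  echo 1 > /debug/tracing/tracing_cpumask    # hex mask: CPU 0 only
  echo sched_switch > /debug/tracing/current_tracer
  cat /debug/tracing/trace_pipe > trace.txt &
  # ... run the volanoMark workload here ...
  kill %1                                    # stop the trace_pipe reader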
Ingo
* RE: scale sysctl_sched_shares_ratelimit with nr_cpus
From: Zhang, Yanmin @ 2008-08-18 8:26 UTC
To: Ingo Molnar; +Cc: a.p.zijlstra, Linux Kernel Mailing List
>>-----Original Message-----
>>From: Ingo Molnar [mailto:mingo@elte.hu]
>>Sent: Monday, August 18, 2008 3:02 PM
>>To: Zhang, Yanmin
>>Cc: a.p.zijlstra@chello.nl; Linux Kernel Mailing List
>>Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>>
>>* Zhang, Yanmin <yanmin.zhang@intel.com> wrote:
>>
>>> >>cool. It's already upstream (post-rc3 commit):
>>> [YM] But compared with 2.6.26, the volanoMark result still shows
>>> about a 60% regression.
>>
>>Does a scheduler trace show anything about why that drop happens? Do
>>something like this to trace the scheduler:
>>
>>assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
>>
>> echo 1 > /debug/tracing/tracing_cpumask
>> echo sched_switch > /debug/tracing/current_tracer
>> cat /debug/tracing/trace_pipe > trace.txt
[YM] Thanks for the pointer. I collected the data and didn't find
anything abnormal except the waker pids.
Receiver-197-13665 [00] 1369.966423: 13665:120:R + 13607:120:S
Receiver-197-13665 [00] 1369.966440: 13665:120:R + 13611:120:S
Receiver-197-13665 [00] 1369.966458: 13665:120:R + 13615:120:S
Receiver-197-13665 [00] 1369.966463: 13665:120:R + 13619:120:S
Receiver-197-13665 [00] 1369.966466: 13665:120:R + 13623:120:S
Receiver-197-13665 [00] 1369.966469: 13665:120:R + 13627:120:S
Receiver-197-13665 [00] 1369.966475: 13665:120:R + 13631:120:S
Receiver-197-13665 [00] 1369.966480: 13665:120:R + 13635:120:S
Receiver-197-13665 [00] 1369.966485: 13665:120:R + 13639:120:S
Receiver-197-13665 [00] 1369.966495: 13665:120:R + 13643:120:S
Receiver-197-13665 [00] 1369.966507: 13871:120:R + 13647:120:S
In the last line above, the waker pid is 13871 while the current pid is
13665. I found many such mismatched entries (a sketch for extracting
them follows the next excerpt).
Receiver-197-13665 [00] 1369.966513: 13465:120:R + 13651:120:S
Receiver-197-13665 [00] 1369.966516: 13665:120:R + 13655:120:S
Receiver-197-13665 [00] 1369.966521: 13665:120:R + 13659:120:S
Receiver-197-13665 [00] 1369.966530: 13665:120:R + 13667:120:S
Receiver-197-13665 [00] 1369.966544: 13883:120:R + 13663:120:S
Receiver-197-13665 [00] 1369.966549: 13665:120:R ==> 13667:120:R
Sender-140-13667 [00] 1369.966573: 13351:120:R + 13668:120:S
Sender-140-13667 [00] 1369.966578: 13667:120:R ==> 13659:120:R
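( a hypothetical way to pull the mismatches out of trace.txt, assuming
the sched_switch wakeup format shown above: Comm-tgid-pid in the first
column, the waker's pid:prio:state in the fourth: )

  awk '$5 == "+" {
      n = split($1, a, "-"); cur = a[n]   # trailing pid of Comm-tgid-pid
      split($4, b, ":"); waker = b[1]     # pid from waker pid:prio:state
      if (cur != waker) print
  }' trace.txt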
BTW, I analyzed the schedstat data and found that wake_affine and
load_balance_newidle seem abnormal: 2.6.27-rc does more task pulls.
I set CONFIG_GROUP_SCHED=n for the above testing.
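( for reference, a minimal sketch of capturing such schedstat deltas,
assuming CONFIG_SCHEDSTATS=y; the snapshot file names are arbitrary: )

  cat /proc/schedstat > schedstat.before
  # ... run the volanoMark workload here ...
  cat /proc/schedstat > schedstat.after
  # the per-domain counters include wake-affine and newidle-balance
  # pull statistics; diff the snapshots to get the per-run deltas:
  diff schedstat.before schedstat.after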
>>
>>( regarding tracing_cpumask: trace only one CPU to make sure volanoMark
>> is not disturbed too much by tracing on many CPUs. )
>>
>> Ingo
* Re: scale sysctl_sched_shares_ratelimit with nr_cpus
From: Ingo Molnar @ 2008-08-18 8:42 UTC
To: Zhang, Yanmin; +Cc: a.p.zijlstra, Linux Kernel Mailing List
* Zhang, Yanmin <yanmin.zhang@intel.com> wrote:
> >>Does a scheduler trace show anything about why that drop happens? Do
> >>something like this to trace the scheduler:
> >>
> >>assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
> >>
> >> echo 1 > /debug/tracing/tracing_cpumask
> >> echo sched_switch > /debug/tracing/current_tracer
> >> cat /debug/tracing/trace_pipe > trace.txt
> [YM] Thanks for the pointer. I collected the data and didn't find
> anything abnormal except the waker pids.
>
> Receiver-197-13665 [00] 1369.966423: 13665:120:R + 13607:120:S
> Receiver-197-13665 [00] 1369.966440: 13665:120:R + 13611:120:S
> Receiver-197-13665 [00] 1369.966458: 13665:120:R + 13615:120:S
> Receiver-197-13665 [00] 1369.966463: 13665:120:R + 13619:120:S
> Receiver-197-13665 [00] 1369.966466: 13665:120:R + 13623:120:S
> Receiver-197-13665 [00] 1369.966469: 13665:120:R + 13627:120:S
> Receiver-197-13665 [00] 1369.966475: 13665:120:R + 13631:120:S
> Receiver-197-13665 [00] 1369.966480: 13665:120:R + 13635:120:S
> Receiver-197-13665 [00] 1369.966485: 13665:120:R + 13639:120:S
> Receiver-197-13665 [00] 1369.966495: 13665:120:R + 13643:120:S
> Receiver-197-13665 [00] 1369.966507: 13871:120:R + 13647:120:S
> In the last line above, the waker pid is 13871 while the current pid
> is 13665. I found many such mismatched entries.
>
> Receiver-197-13665 [00] 1369.966513: 13465:120:R + 13651:120:S
> Receiver-197-13665 [00] 1369.966516: 13665:120:R + 13655:120:S
> Receiver-197-13665 [00] 1369.966521: 13665:120:R + 13659:120:S
> Receiver-197-13665 [00] 1369.966530: 13665:120:R + 13667:120:S
> Receiver-197-13665 [00] 1369.966544: 13883:120:R + 13663:120:S
> Receiver-197-13665 [00] 1369.966549: 13665:120:R ==> 13667:120:R
> Sender-140-13667 [00] 1369.966573: 13351:120:R + 13668:120:S
> Sender-140-13667 [00] 1369.966578: 13667:120:R ==> 13659:120:R
>
>
> BTW, I analyzed the schedstat data and found that wake_affine and
> load_balance_newidle seem abnormal: 2.6.27-rc does more task pulls.
> I set CONFIG_GROUP_SCHED=n for the above testing.
hm, does this mean there's too much idle time during the test run
because we don't load-balance aggressively enough?
Ingo
* RE: scale sysctl_sched_shares_ratelimit with nr_cpus
From: Zhang, Yanmin @ 2008-08-18 8:45 UTC
To: Ingo Molnar; +Cc: a.p.zijlstra, Linux Kernel Mailing List
>>-----Original Message-----
>>From: Ingo Molnar [mailto:mingo@elte.hu]
>>Sent: Monday, August 18, 2008 4:42 PM
>>To: Zhang, Yanmin
>>Cc: a.p.zijlstra@chello.nl; Linux Kernel Mailing List
>>Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>>
>>* Zhang, Yanmin <yanmin.zhang@intel.com> wrote:
>>
>>> >>Does a scheduler trace show anything about why that drop happens? Do
>>> >>something like this to trace the scheduler:
>>> >>
>>> >>assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
>>> >>
>>> >> echo 1 > /debug/tracing/tracing_cpumask
>>> >> echo sched_switch > /debug/tracing/current_tracer
>>> >> cat /debug/tracing/trace_pipe > trace.txt
>>> [YM] Thanks for the pointer. I collected the data and didn't find
>>> anything abnormal except the waker pids.
>>>
>>> Receiver-197-13665 [00] 1369.966423: 13665:120:R + 13607:120:S
>>> Receiver-197-13665 [00] 1369.966440: 13665:120:R + 13611:120:S
>>> Receiver-197-13665 [00] 1369.966458: 13665:120:R + 13615:120:S
>>> Receiver-197-13665 [00] 1369.966463: 13665:120:R + 13619:120:S
>>> Receiver-197-13665 [00] 1369.966466: 13665:120:R + 13623:120:S
>>> Receiver-197-13665 [00] 1369.966469: 13665:120:R + 13627:120:S
>>> Receiver-197-13665 [00] 1369.966475: 13665:120:R + 13631:120:S
>>> Receiver-197-13665 [00] 1369.966480: 13665:120:R + 13635:120:S
>>> Receiver-197-13665 [00] 1369.966485: 13665:120:R + 13639:120:S
>>> Receiver-197-13665 [00] 1369.966495: 13665:120:R + 13643:120:S
>>> Receiver-197-13665 [00] 1369.966507: 13871:120:R + 13647:120:S
>>> In the last line above, the waker pid is 13871 while the current pid
>>> is 13665. I found many such mismatched entries.
>>>
>>> Receiver-197-13665 [00] 1369.966513: 13465:120:R + 13651:120:S
>>> Receiver-197-13665 [00] 1369.966516: 13665:120:R + 13655:120:S
>>> Receiver-197-13665 [00] 1369.966521: 13665:120:R + 13659:120:S
>>> Receiver-197-13665 [00] 1369.966530: 13665:120:R + 13667:120:S
>>> Receiver-197-13665 [00] 1369.966544: 13883:120:R + 13663:120:S
>>> Receiver-197-13665 [00] 1369.966549: 13665:120:R ==> 13667:120:R
>>> Sender-140-13667 [00] 1369.966573: 13351:120:R + 13668:120:S
>>> Sender-140-13667 [00] 1369.966578: 13667:120:R ==> 13659:120:R
>>>
>>>
>>> BTW, I analyzed the schedstat data and found that wake_affine and
>>> load_balance_newidle seem abnormal: 2.6.27-rc does more task pulls.
>>> I set CONFIG_GROUP_SCHED=n for the above testing.
>>
>>hm, does this mean there's too much idle time during the test run
>>because we don't load-balance aggressively enough?
[YM] With 2.6.26, CPU idle is about 6%; with 2.6.27-rc, idle is about
0-1%. It seems volanoMark prefers some idle time. I diffed the scheduler
source code but couldn't find why load balancing pulls more tasks
successfully in 2.6.27-rc.
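( one conventional way to get such idle numbers while the benchmark
runs; vmstat is standard procps, and the 1-second interval is
arbitrary: )

  vmstat 1 > vmstat.log &   # the "id" column is the idle percentage
  # ... run the volanoMark workload here ...
  kill %1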