* [LKP] [sched/core] 9edfbfed3f5: +88.2% hackbench.time.involuntary_context_switches
@ 2015-02-09  5:58 ` Huang Ying
  0 siblings, 0 replies; 10+ messages in thread
From: Huang Ying @ 2015-02-09  5:58 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Ingo Molnar, LKML, LKP ML

[-- Attachment #1: Type: text/plain, Size: 4708 bytes --]

FYI, we noticed the below changes on

git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
commit 9edfbfed3f544a7830d99b341f0c175995a02950 ("sched/core: Rework rq->clock update skips")


testbox/testcase/testparams: xps2/hackbench/performance-1600%-process-socket

cebde6d681aa45f9  9edfbfed3f544a7830d99b341f  
----------------  --------------------------  
         %stddev     %change         %stddev
             \          |                \  
   1839273 ±  6%     +88.2%    3462337 ±  4%  hackbench.time.involuntary_context_switches
  41965851 ±  5%      +5.6%   44307403 ±  1%  hackbench.time.voluntary_context_switches
       388 ± 39%     -58.6%        160 ± 10%  sched_debug.cfs_rq[1]:/.tg_load_contrib
     12957 ± 14%     -60.5%       5117 ± 11%  sched_debug.cfs_rq[2]:/.tg_load_avg
     30505 ± 14%     -57.7%      12905 ±  6%  sched_debug.cfs_rq[3]:/.tg_load_avg
      2790 ± 24%     -65.4%        964 ± 32%  sched_debug.cfs_rq[3]:/.blocked_load_avg
      2915 ± 23%     -62.2%       1101 ± 29%  sched_debug.cfs_rq[3]:/.tg_load_contrib
   1839273 ±  6%     +88.2%    3462337 ±  4%  time.involuntary_context_switches
      1474 ± 28%     -61.7%        565 ± 43%  sched_debug.cfs_rq[2]:/.tg_load_contrib
     11830 ± 15%     +63.0%      19285 ± 11%  sched_debug.cpu#4.sched_goidle
     19319 ± 29%     +91.1%      36913 ±  7%  sched_debug.cpu#3.sched_goidle
      5899 ± 31%     -35.6%       3801 ± 11%  sched_debug.cfs_rq[4]:/.blocked_load_avg
      5999 ± 30%     -34.5%       3929 ± 11%  sched_debug.cfs_rq[4]:/.tg_load_contrib
     37884 ± 13%     -33.5%      25207 ±  7%  sched_debug.cfs_rq[4]:/.tg_load_avg
    229547 ±  5%     +47.9%     339519 ±  5%  cpuidle.C1-NHM.usage
     35712 ±  3%     +31.7%      47036 ±  9%  cpuidle.C3-NHM.usage
      5010 ±  9%     -29.0%       3556 ± 20%  sched_debug.cfs_rq[6]:/.blocked_load_avg
      5139 ±  9%     -28.2%       3690 ± 19%  sched_debug.cfs_rq[6]:/.tg_load_contrib
     49568 ±  6%     +24.8%      61867 ±  7%  sched_debug.cpu#1.sched_goidle
     26369 ± 35%     -42.0%      15289 ± 29%  cpuidle.C6-NHM.usage
        18 ± 16%     +36.5%         25 ±  7%  sched_debug.cpu#4.nr_running
      1.41 ± 12%     -19.3%       1.14 ± 13%  perf-profile.cpu-cycles.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
        25 ± 15%     +28.7%         32 ±  9%  sched_debug.cpu#3.nr_running
      1.63 ± 11%     -18.0%       1.34 ± 12%  perf-profile.cpu-cycles.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg
      0.57 ±  8%      +9.6%       0.62 ±  5%  turbostat.CPU%c1
       148 ± 11%     -16.7%        123 ±  7%  sched_debug.cfs_rq[1]:/.load
       109 ±  6%     +17.1%        128 ±  6%  sched_debug.cpu#6.cpu_load[0]
      2.41 ±  8%     -13.3%       2.09 ± 11%  perf-profile.cpu-cycles.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read
       147 ± 12%     -16.4%        123 ±  7%  sched_debug.cpu#1.load
       111 ±  5%     +15.4%        129 ±  5%  sched_debug.cpu#6.cpu_load[2]
       110 ±  5%     +14.9%        127 ±  5%  sched_debug.cfs_rq[6]:/.runnable_load_avg
       112 ±  5%     +14.5%        128 ±  4%  sched_debug.cpu#6.cpu_load[3]
       113 ±  5%     +13.2%        128 ±  3%  sched_debug.cpu#6.cpu_load[4]
    789953 ±  2%     -10.8%     704528 ±  4%  sched_debug.cpu#3.avg_idle
     15471 ±  5%      -7.7%      14278 ±  2%  sched_debug.cpu#5.curr->pid
   2675106 ± 10%     +16.2%    3109411 ±  1%  sched_debug.cpu#4.nr_switches
   2675140 ± 10%     +16.2%    3109440 ±  1%  sched_debug.cpu#4.sched_count
    155201 ±  5%     +14.6%     177901 ±  3%  softirqs.RCU
      8.64 ±  6%      -9.6%       7.82 ±  5%  perf-profile.cpu-cycles.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read.sock_aio_read
   2658351 ± 11%     +13.7%    3021564 ±  2%  sched_debug.cpu#5.sched_count
   2658326 ± 11%     +13.7%    3021539 ±  2%  sched_debug.cpu#5.nr_switches
     71443 ±  5%      +9.9%      78486 ±  0%  vmstat.system.cs
      8209 ±  5%      +7.3%       8805 ±  0%  vmstat.system.in

xps2: Nehalem
Memory: 4G

To reproduce:

        apt-get install ruby ruby-oj
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/setup-local job.yaml # the job file attached in this email
        bin/run-local   job.yaml
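
For reference, the workload itself is plain hackbench in process/socket
mode; a rough hand-written equivalent (group and loop counts below are
illustrative assumptions, not values taken from the attached job.yaml) is:

        # assumed parameters -- the real ones are in the attached job.yaml;
        # process mode and socket transport are the rt-tests hackbench defaults
        hackbench -g 32 -l 10000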


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Huang, Ying


[-- Attachment #2: reproduce --]
[-- Type: text/plain, Size: 51 bytes --]

tbench_srv &
tbench 4 127.0.0.1
killall tbench_srv

[-- Attachment #3: Type: text/plain, Size: 89 bytes --]

_______________________________________________
LKP mailing list
LKP@linux.intel.com

[-- Attachment #4: job.yaml --]
[-- Type: application/x-yaml, Size: 1685 bytes --]

* Re: [LKP] [sched/core] 9edfbfed3f5: +88.2% hackbench.time.involuntary_context_switches
  2015-02-09  5:58 ` Huang Ying
@ 2015-02-09  8:31   ` Peter Zijlstra
  -1 siblings, 0 replies; 10+ messages in thread
From: Peter Zijlstra @ 2015-02-09  8:31 UTC (permalink / raw)
  To: Huang Ying; +Cc: Ingo Molnar, LKML, LKP ML

On Mon, Feb 09, 2015 at 01:58:33PM +0800, Huang Ying wrote:
> FYI, we noticed the below changes on
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
> commit 9edfbfed3f544a7830d99b341f0c175995a02950 ("sched/core: Rework rq->clock update skips")
> 
> 
> testbox/testcase/testparams: xps2/hackbench/performance-1600%-process-socket
> 
> cebde6d681aa45f9  9edfbfed3f544a7830d99b341f  
> ----------------  --------------------------  
>          %stddev     %change         %stddev
>              \          |                \  
>    1839273 ±  6%     +88.2%    3462337 ±  4%  hackbench.time.involuntary_context_switches
>   41965851 ±  5%      +5.6%   44307403 ±  1%  hackbench.time.voluntary_context_switches
>        388 ± 39%     -58.6%        160 ± 10%  sched_debug.cfs_rq[1]:/.tg_load_contrib
>      12957 ± 14%     -60.5%       5117 ± 11%  sched_debug.cfs_rq[2]:/.tg_load_avg
>      30505 ± 14%     -57.7%      12905 ±  6%  sched_debug.cfs_rq[3]:/.tg_load_avg
>       2790 ± 24%     -65.4%        964 ± 32%  sched_debug.cfs_rq[3]:/.blocked_load_avg
>       2915 ± 23%     -62.2%       1101 ± 29%  sched_debug.cfs_rq[3]:/.tg_load_contrib
>    1839273 ±  6%     +88.2%    3462337 ±  4%  time.involuntary_context_switches
>       1474 ± 28%     -61.7%        565 ± 43%  sched_debug.cfs_rq[2]:/.tg_load_contrib
>      11830 ± 15%     +63.0%      19285 ± 11%  sched_debug.cpu#4.sched_goidle
>      19319 ± 29%     +91.1%      36913 ±  7%  sched_debug.cpu#3.sched_goidle
>       5899 ± 31%     -35.6%       3801 ± 11%  sched_debug.cfs_rq[4]:/.blocked_load_avg
>       5999 ± 30%     -34.5%       3929 ± 11%  sched_debug.cfs_rq[4]:/.tg_load_contrib
>      37884 ± 13%     -33.5%      25207 ±  7%  sched_debug.cfs_rq[4]:/.tg_load_avg
>     229547 ±  5%     +47.9%     339519 ±  5%  cpuidle.C1-NHM.usage
>      35712 ±  3%     +31.7%      47036 ±  9%  cpuidle.C3-NHM.usage
>       5010 ±  9%     -29.0%       3556 ± 20%  sched_debug.cfs_rq[6]:/.blocked_load_avg
>       5139 ±  9%     -28.2%       3690 ± 19%  sched_debug.cfs_rq[6]:/.tg_load_contrib
>      49568 ±  6%     +24.8%      61867 ±  7%  sched_debug.cpu#1.sched_goidle
>      26369 ± 35%     -42.0%      15289 ± 29%  cpuidle.C6-NHM.usage
>         18 ± 16%     +36.5%         25 ±  7%  sched_debug.cpu#4.nr_running
>       1.41 ± 12%     -19.3%       1.14 ± 13%  perf-profile.cpu-cycles.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
>         25 ± 15%     +28.7%         32 ±  9%  sched_debug.cpu#3.nr_running
>       1.63 ± 11%     -18.0%       1.34 ± 12%  perf-profile.cpu-cycles.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg
>       0.57 ±  8%      +9.6%       0.62 ±  5%  turbostat.CPU%c1
>        148 ± 11%     -16.7%        123 ±  7%  sched_debug.cfs_rq[1]:/.load
>        109 ±  6%     +17.1%        128 ±  6%  sched_debug.cpu#6.cpu_load[0]
>       2.41 ±  8%     -13.3%       2.09 ± 11%  perf-profile.cpu-cycles.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read
>        147 ± 12%     -16.4%        123 ±  7%  sched_debug.cpu#1.load
>        111 ±  5%     +15.4%        129 ±  5%  sched_debug.cpu#6.cpu_load[2]
>        110 ±  5%     +14.9%        127 ±  5%  sched_debug.cfs_rq[6]:/.runnable_load_avg
>        112 ±  5%     +14.5%        128 ±  4%  sched_debug.cpu#6.cpu_load[3]
>        113 ±  5%     +13.2%        128 ±  3%  sched_debug.cpu#6.cpu_load[4]
>     789953 ±  2%     -10.8%     704528 ±  4%  sched_debug.cpu#3.avg_idle
>      15471 ±  5%      -7.7%      14278 ±  2%  sched_debug.cpu#5.curr->pid
>    2675106 ± 10%     +16.2%    3109411 ±  1%  sched_debug.cpu#4.nr_switches
>    2675140 ± 10%     +16.2%    3109440 ±  1%  sched_debug.cpu#4.sched_count
>     155201 ±  5%     +14.6%     177901 ±  3%  softirqs.RCU
>       8.64 ±  6%      -9.6%       7.82 ±  5%  perf-profile.cpu-cycles.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read.sock_aio_read
>    2658351 ± 11%     +13.7%    3021564 ±  2%  sched_debug.cpu#5.sched_count
>    2658326 ± 11%     +13.7%    3021539 ±  2%  sched_debug.cpu#5.nr_switches
>      71443 ±  5%      +9.9%      78486 ±  0%  vmstat.system.cs
>       8209 ±  5%      +7.3%       8805 ±  0%  vmstat.system.in
> 

OK, so the interesting number is total runtime; I cannot find it.
Therefore I cannot say what if anything changed. This is just a bunch of
random numbers afaict.

> To reproduce:
> 
>         apt-get install ruby ruby-oj

Most likely, your requiring ruby just means I will not be reproducing.


* Re: [sched/core] 9edfbfed3f5: +88.2% hackbench.time.involuntary_context_switches
  2015-02-09  8:31   ` Peter Zijlstra
@ 2015-02-09  8:47   ` huang ying
  2015-02-09  9:27       ` Peter Zijlstra
  -1 siblings, 1 reply; 10+ messages in thread
From: huang ying @ 2015-02-09  8:47 UTC (permalink / raw)
  To: lkp

[-- Attachment #1: Type: text/plain, Size: 5347 bytes --]

On Mon, Feb 9, 2015 at 4:31 PM, Peter Zijlstra <peterz@infradead.org> wrote:

> On Mon, Feb 09, 2015 at 01:58:33PM +0800, Huang Ying wrote:
> > FYI, we noticed the below changes on
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
> > commit 9edfbfed3f544a7830d99b341f0c175995a02950 ("sched/core: Rework rq->clock update skips")
> >
> >
> > testbox/testcase/testparams: xps2/hackbench/performance-1600%-process-socket
> >
> > cebde6d681aa45f9  9edfbfed3f544a7830d99b341f
> > ----------------  --------------------------
> >          %stddev     %change         %stddev
> >              \          |                \
> >    1839273 ±  6%     +88.2%    3462337 ±  4%  hackbench.time.involuntary_context_switches
> >   41965851 ±  5%      +5.6%   44307403 ±  1%  hackbench.time.voluntary_context_switches
> >        388 ± 39%     -58.6%        160 ± 10%  sched_debug.cfs_rq[1]:/.tg_load_contrib
> >      12957 ± 14%     -60.5%       5117 ± 11%  sched_debug.cfs_rq[2]:/.tg_load_avg
> >      30505 ± 14%     -57.7%      12905 ±  6%  sched_debug.cfs_rq[3]:/.tg_load_avg
> >       2790 ± 24%     -65.4%        964 ± 32%  sched_debug.cfs_rq[3]:/.blocked_load_avg
> >       2915 ± 23%     -62.2%       1101 ± 29%  sched_debug.cfs_rq[3]:/.tg_load_contrib
> >    1839273 ±  6%     +88.2%    3462337 ±  4%  time.involuntary_context_switches
> >       1474 ± 28%     -61.7%        565 ± 43%  sched_debug.cfs_rq[2]:/.tg_load_contrib
> >      11830 ± 15%     +63.0%      19285 ± 11%  sched_debug.cpu#4.sched_goidle
> >      19319 ± 29%     +91.1%      36913 ±  7%  sched_debug.cpu#3.sched_goidle
> >       5899 ± 31%     -35.6%       3801 ± 11%  sched_debug.cfs_rq[4]:/.blocked_load_avg
> >       5999 ± 30%     -34.5%       3929 ± 11%  sched_debug.cfs_rq[4]:/.tg_load_contrib
> >      37884 ± 13%     -33.5%      25207 ±  7%  sched_debug.cfs_rq[4]:/.tg_load_avg
> >     229547 ±  5%     +47.9%     339519 ±  5%  cpuidle.C1-NHM.usage
> >      35712 ±  3%     +31.7%      47036 ±  9%  cpuidle.C3-NHM.usage
> >       5010 ±  9%     -29.0%       3556 ± 20%  sched_debug.cfs_rq[6]:/.blocked_load_avg
> >       5139 ±  9%     -28.2%       3690 ± 19%  sched_debug.cfs_rq[6]:/.tg_load_contrib
> >      49568 ±  6%     +24.8%      61867 ±  7%  sched_debug.cpu#1.sched_goidle
> >      26369 ± 35%     -42.0%      15289 ± 29%  cpuidle.C6-NHM.usage
> >         18 ± 16%     +36.5%         25 ±  7%  sched_debug.cpu#4.nr_running
> >       1.41 ± 12%     -19.3%       1.14 ± 13%  perf-profile.cpu-cycles.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
> >         25 ± 15%     +28.7%         32 ±  9%  sched_debug.cpu#3.nr_running
> >       1.63 ± 11%     -18.0%       1.34 ± 12%  perf-profile.cpu-cycles.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg
> >       0.57 ±  8%      +9.6%       0.62 ±  5%  turbostat.CPU%c1
> >        148 ± 11%     -16.7%        123 ±  7%  sched_debug.cfs_rq[1]:/.load
> >        109 ±  6%     +17.1%        128 ±  6%  sched_debug.cpu#6.cpu_load[0]
> >       2.41 ±  8%     -13.3%       2.09 ± 11%  perf-profile.cpu-cycles.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read
> >        147 ± 12%     -16.4%        123 ±  7%  sched_debug.cpu#1.load
> >        111 ±  5%     +15.4%        129 ±  5%  sched_debug.cpu#6.cpu_load[2]
> >        110 ±  5%     +14.9%        127 ±  5%  sched_debug.cfs_rq[6]:/.runnable_load_avg
> >        112 ±  5%     +14.5%        128 ±  4%  sched_debug.cpu#6.cpu_load[3]
> >        113 ±  5%     +13.2%        128 ±  3%  sched_debug.cpu#6.cpu_load[4]
> >     789953 ±  2%     -10.8%     704528 ±  4%  sched_debug.cpu#3.avg_idle
> >      15471 ±  5%      -7.7%      14278 ±  2%  sched_debug.cpu#5.curr->pid
> >    2675106 ± 10%     +16.2%    3109411 ±  1%  sched_debug.cpu#4.nr_switches
> >    2675140 ± 10%     +16.2%    3109440 ±  1%  sched_debug.cpu#4.sched_count
> >     155201 ±  5%     +14.6%     177901 ±  3%  softirqs.RCU
> >       8.64 ±  6%      -9.6%       7.82 ±  5%  perf-profile.cpu-cycles.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read.sock_aio_read
> >    2658351 ± 11%     +13.7%    3021564 ±  2%  sched_debug.cpu#5.sched_count
> >    2658326 ± 11%     +13.7%    3021539 ±  2%  sched_debug.cpu#5.nr_switches
> >      71443 ±  5%      +9.9%      78486 ±  0%  vmstat.system.cs
> >       8209 ±  5%      +7.3%       8805 ±  0%  vmstat.system.in
> >
>
> OK, so the interesting number is total runtime; I cannot find it.
>

There is no distinguishable difference between the parent and the child
commit in the hackbench throughput numbers.

Do you usually not consider statistics such as involuntary context
switches?


> Therefore I cannot say what if anything changed. This is just a bunch of
> random numbers afaict.
>
> > To reproduce:
> >
> >         apt-get install ruby ruby-oj
>
> Most likely, your requiring ruby just means I will not be reproducing.
>

Sorry.  The test tool is developed in ruby.  Maybe we can improve it in
the future so that only bash is required to reproduce.

Best Regards,
Huang, Ying

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 6854 bytes --]


* Re: [LKP] [sched/core] 9edfbfed3f5: +88.2% hackbench.time.involuntary_context_switches
  2015-02-09  8:47   ` huang ying
@ 2015-02-09  9:27       ` Peter Zijlstra
  0 siblings, 0 replies; 10+ messages in thread
From: Peter Zijlstra @ 2015-02-09  9:27 UTC (permalink / raw)
  To: huang ying; +Cc: Huang Ying, Ingo Molnar, LKML, LKP ML, Wu, Fengguang

On Mon, Feb 09, 2015 at 04:47:07PM +0800, huang ying wrote:
> There is no distinguishable difference between the parent and the child
> commit in the hackbench throughput numbers.
> 
> Do you usually not consider statistics such as involuntary context
> switches?

Only if there's a 'problem' with the primary performance metric (total
runtime in case of hackbench).
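
For reference, the primary metric for hackbench is the total runtime it
prints on completion; a sketch of that output (values invented, exact
wording varies between hackbench versions):

        # illustrative transcript only -- the numbers are made up
        $ hackbench -g 10
        Running in process mode with 10 groups using 40 file descriptors each (== 400 tasks)
        Each sender will pass 100 messages of 100 bytes
        Time: 4.821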

Once the primary metric shifts, you go look at what the cause of this
change might be, at that point things like # context switches etc.. are
interesting. As long as the primary performance metric is stable, meh.

As such; I would suggest _always_ reporting the primary metric for each
benchmark, preferably on top and not hidden somewhere in the mass of
numbers.

I now had to very carefully waste a few minutes of my time reading those
numbers to see if there was anything useful in them; there wasn't.

* Re: [sched/core] 9edfbfed3f5: +88.2% hackbench.time.involuntary_context_switches
  2015-02-09  9:27       ` Peter Zijlstra
@ 2015-02-10  0:24       ` huang ying
  -1 siblings, 0 replies; 10+ messages in thread
From: huang ying @ 2015-02-10  0:24 UTC (permalink / raw)
  To: lkp

[-- Attachment #1: Type: text/plain, Size: 1199 bytes --]

On Mon, Feb 9, 2015 at 5:27 PM, Peter Zijlstra <peterz@infradead.org> wrote:

> On Mon, Feb 09, 2015 at 04:47:07PM +0800, huang ying wrote:
> > There is no distinguishable difference between the parent and the child
> > commit in the hackbench throughput numbers.
> >
> > Do you usually not consider statistics such as involuntary context
> > switches?
>
> Only if there's a 'problem' with the primary performance metric (total
> runtime in case of hackbench).
>
> Once the primary metric shifts, you go look at what the cause of this
> change might be, at that point things like # context switches etc.. are
> interesting. As long as the primary performance metric is stable, meh.
>
> As such; I would suggest _always_ reporting the primary metric for each
> benchmark, preferably on top and not hidden somewhere in the mass of
> numbers.
>
> I now had to very carefully waste a few minutes of my time reading those
> numbers to see if there was anything useful in them; there wasn't.
>

I see.  We will at least report the primary metric and some other metrics
in the subject.  Even if there is no change in the primary metric, we will
make that explicit in the subject.

Best Regards,
Huang, Ying

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 1604 bytes --]


* Re: [LKP] [sched/core] 9edfbfed3f5: +88.2% hackbench.time.involuntary_context_switches
  2015-02-09  8:31   ` Peter Zijlstra
@ 2015-02-10  7:26     ` Markus Pargmann
  -1 siblings, 0 replies; 10+ messages in thread
From: Markus Pargmann @ 2015-02-10  7:26 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Huang Ying, Ingo Molnar, LKML, LKP ML

[-- Attachment #1: Type: text/plain, Size: 5296 bytes --]

Hi,

On Mon, Feb 09, 2015 at 09:31:20AM +0100, Peter Zijlstra wrote:
> On Mon, Feb 09, 2015 at 01:58:33PM +0800, Huang Ying wrote:
> > FYI, we noticed the below changes on
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
> > commit 9edfbfed3f544a7830d99b341f0c175995a02950 ("sched/core: Rework rq->clock update skips")
> > 
> > 
> > testbox/testcase/testparams: xps2/hackbench/performance-1600%-process-socket
> > 
> > cebde6d681aa45f9  9edfbfed3f544a7830d99b341f  
> > ----------------  --------------------------  
> >          %stddev     %change         %stddev
> >              \          |                \  
> >    1839273 ±  6%     +88.2%    3462337 ±  4%  hackbench.time.involuntary_context_switches
> >   41965851 ±  5%      +5.6%   44307403 ±  1%  hackbench.time.voluntary_context_switches
> >        388 ± 39%     -58.6%        160 ± 10%  sched_debug.cfs_rq[1]:/.tg_load_contrib
> >      12957 ± 14%     -60.5%       5117 ± 11%  sched_debug.cfs_rq[2]:/.tg_load_avg
> >      30505 ± 14%     -57.7%      12905 ±  6%  sched_debug.cfs_rq[3]:/.tg_load_avg
> >       2790 ± 24%     -65.4%        964 ± 32%  sched_debug.cfs_rq[3]:/.blocked_load_avg
> >       2915 ± 23%     -62.2%       1101 ± 29%  sched_debug.cfs_rq[3]:/.tg_load_contrib
> >    1839273 ±  6%     +88.2%    3462337 ±  4%  time.involuntary_context_switches
> >       1474 ± 28%     -61.7%        565 ± 43%  sched_debug.cfs_rq[2]:/.tg_load_contrib
> >      11830 ± 15%     +63.0%      19285 ± 11%  sched_debug.cpu#4.sched_goidle
> >      19319 ± 29%     +91.1%      36913 ±  7%  sched_debug.cpu#3.sched_goidle
> >       5899 ± 31%     -35.6%       3801 ± 11%  sched_debug.cfs_rq[4]:/.blocked_load_avg
> >       5999 ± 30%     -34.5%       3929 ± 11%  sched_debug.cfs_rq[4]:/.tg_load_contrib
> >      37884 ± 13%     -33.5%      25207 ±  7%  sched_debug.cfs_rq[4]:/.tg_load_avg
> >     229547 ±  5%     +47.9%     339519 ±  5%  cpuidle.C1-NHM.usage
> >      35712 ±  3%     +31.7%      47036 ±  9%  cpuidle.C3-NHM.usage
> >       5010 ±  9%     -29.0%       3556 ± 20%  sched_debug.cfs_rq[6]:/.blocked_load_avg
> >       5139 ±  9%     -28.2%       3690 ± 19%  sched_debug.cfs_rq[6]:/.tg_load_contrib
> >      49568 ±  6%     +24.8%      61867 ±  7%  sched_debug.cpu#1.sched_goidle
> >      26369 ± 35%     -42.0%      15289 ± 29%  cpuidle.C6-NHM.usage
> >         18 ± 16%     +36.5%         25 ±  7%  sched_debug.cpu#4.nr_running
> >       1.41 ± 12%     -19.3%       1.14 ± 13%  perf-profile.cpu-cycles.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
> >         25 ± 15%     +28.7%         32 ±  9%  sched_debug.cpu#3.nr_running
> >       1.63 ± 11%     -18.0%       1.34 ± 12%  perf-profile.cpu-cycles.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg
> >       0.57 ±  8%      +9.6%       0.62 ±  5%  turbostat.CPU%c1
> >        148 ± 11%     -16.7%        123 ±  7%  sched_debug.cfs_rq[1]:/.load
> >        109 ±  6%     +17.1%        128 ±  6%  sched_debug.cpu#6.cpu_load[0]
> >       2.41 ±  8%     -13.3%       2.09 ± 11%  perf-profile.cpu-cycles.skb_release_head_state.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read
> >        147 ± 12%     -16.4%        123 ±  7%  sched_debug.cpu#1.load
> >        111 ±  5%     +15.4%        129 ±  5%  sched_debug.cpu#6.cpu_load[2]
> >        110 ±  5%     +14.9%        127 ±  5%  sched_debug.cfs_rq[6]:/.runnable_load_avg
> >        112 ±  5%     +14.5%        128 ±  4%  sched_debug.cpu#6.cpu_load[3]
> >        113 ±  5%     +13.2%        128 ±  3%  sched_debug.cpu#6.cpu_load[4]
> >     789953 ±  2%     -10.8%     704528 ±  4%  sched_debug.cpu#3.avg_idle
> >      15471 ±  5%      -7.7%      14278 ±  2%  sched_debug.cpu#5.curr->pid
> >    2675106 ± 10%     +16.2%    3109411 ±  1%  sched_debug.cpu#4.nr_switches
> >    2675140 ± 10%     +16.2%    3109440 ±  1%  sched_debug.cpu#4.sched_count
> >     155201 ±  5%     +14.6%     177901 ±  3%  softirqs.RCU
> >       8.64 ±  6%      -9.6%       7.82 ±  5%  perf-profile.cpu-cycles.skb_release_all.consume_skb.unix_stream_recvmsg.sock_aio_read.sock_aio_read
> >    2658351 ± 11%     +13.7%    3021564 ±  2%  sched_debug.cpu#5.sched_count
> >    2658326 ± 11%     +13.7%    3021539 ±  2%  sched_debug.cpu#5.nr_switches
> >      71443 ±  5%      +9.9%      78486 ±  0%  vmstat.system.cs
> >       8209 ±  5%      +7.3%       8805 ±  0%  vmstat.system.in
> > 
> 
> OK, so the interesting number is total runtime; I cannot find it.
> Therefore I cannot say what if anything changed. This is just a bunch of
> random numbers afaict.

The total runtime of hackbench in v3.19 compared to v3.15 through v3.18 is
shown here [1] (Group 5 -> linux_perf.hackbench). It is not the same
project, so the profiling data is not related and was not recorded. The
results should be reproducible by simply running hackbench with the same
options as cbenchsuite [2] did. The options are always listed in the
result browser [1] below the plots.

Best Regards,

Markus

[1] http://results.cbenchsuite.org/plots/2015-02-09__v3.15-v3.19-quad/detailed/
[2] http://cbenchsuite.org


[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]
