FYI, we noticed a -6.3% regression of unixbench.score due to commit:

commit 5c0a85fad949212b3e059692deecdeed74ae7ec7 ("mm: make faultaround produce old ptes")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

in testcase: unixbench
on test machine: lituya: 16 threads Haswell High-end Desktop (i7-5960X 3.0G) with 16G memory
with following parameters:

	cpufreq_governor=performance
	nr_task=1
	test=shell8

Details are as below:
-------------------------------------------------------------------------------------------------->

=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/1/debian-x86_64-2015-02-07.cgz/lituya/shell8/unixbench

commit:
  4b50bcc7eda4d3cc9e3f2a0aa60e590fedf728c5
  5c0a85fad949212b3e059692deecdeed74ae7ec7

4b50bcc7eda4d3cc 5c0a85fad949212b3e059692de
---------------- --------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
           3:4          -75%            :4     kmsg.DHCP/BOOTP:Reply_not_for_us,op[#]xid[#]
         %stddev      %change         %stddev
             \           |                \
     14321 ±  0%      -6.3%      13425 ±  0%  unixbench.score
   1996897 ±  0%      -6.1%    1874635 ±  0%  unixbench.time.involuntary_context_switches
 1.721e+08 ±  0%      -6.2%  1.613e+08 ±  0%  unixbench.time.minor_page_faults
    758.65 ±  0%      -3.0%     735.86 ±  0%  unixbench.time.system_time
    387.66 ±  0%      +5.4%     408.49 ±  0%  unixbench.time.user_time
   5950278 ±  0%      -6.2%    5583456 ±  0%  unixbench.time.voluntary_context_switches
   1960642 ±  0%     -11.4%    1737753 ±  0%  cpuidle.C1-HSW.usage
      5851 ±  0%     -43.8%       3286 ±  1%  proc-vmstat.nr_active_file
     46185 ±  0%     -21.2%      36385 ±  2%  meminfo.Active
     23404 ±  0%     -43.8%      13147 ±  1%  meminfo.Active(file)
      4109 ±  5%     -19.6%       3302 ±  4%  slabinfo.pid.active_objs
      4109 ±  5%     -19.6%       3302 ±  4%  slabinfo.pid.num_objs
     94603 ±  0%      -5.7%      89247 ±  0%  vmstat.system.cs
      8976 ±  0%      -2.5%       8754 ±  0%  vmstat.system.in
      3.38 ±  2%     +11.8%       3.77 ±  0%  turbostat.CPU%c3
      0.24 ±101%     -86.3%       0.03 ± 54%  turbostat.Pkg%pc3
     66.53 ±  0%      -1.7%      65.41 ±  0%  turbostat.PkgWatt
      2061 ±  1%      -8.5%       1886 ±  0%  sched_debug.cfs_rq:/.exec_clock.stddev
    737154 ±  5%     +10.8%     817107 ±  3%  sched_debug.cpu.avg_idle.max
    133057 ±  5%     -33.2%      88864 ± 11%  sched_debug.cpu.avg_idle.min
    181562 ±  8%     +15.9%     210434 ±  3%  sched_debug.cpu.avg_idle.stddev
      0.97 ±  7%     +19.0%       1.16 ±  8%  sched_debug.cpu.clock.stddev
      0.97 ±  7%     +19.0%       1.16 ±  8%  sched_debug.cpu.clock_task.stddev
    248.06 ± 11%     +31.0%     324.94 ±  8%  sched_debug.cpu.cpu_load[1].max
     55.65 ± 14%     +28.1%      71.30 ±  8%  sched_debug.cpu.cpu_load[1].stddev
    233.38 ± 10%     +34.4%     313.56 ±  8%  sched_debug.cpu.cpu_load[2].max
     49.79 ± 15%     +35.6%      67.50 ±  9%  sched_debug.cpu.cpu_load[2].stddev
    233.25 ± 12%     +29.9%     302.94 ±  6%  sched_debug.cpu.cpu_load[3].max
     46.56 ±  8%     +12.2%      52.25 ±  6%  sched_debug.cpu.cpu_load[3].min
     48.51 ± 15%     +31.4%      63.76 ±  7%  sched_debug.cpu.cpu_load[3].stddev
    238.44 ± 12%     +19.0%     283.69 ±  3%  sched_debug.cpu.cpu_load[4].max
     49.56 ±  9%     +13.4%      56.19 ±  4%  sched_debug.cpu.cpu_load[4].min
     48.22 ± 13%     +20.1%      57.93 ±  5%  sched_debug.cpu.cpu_load[4].stddev
     14792 ± 30%     +71.9%      25424 ± 17%  sched_debug.cpu.curr->pid.avg
     42862 ±  1%     +42.6%      61121 ±  0%  sched_debug.cpu.curr->pid.max
     19466 ± 10%     +35.4%      26351 ±  9%  sched_debug.cpu.curr->pid.stddev
      1067 ±  6%     -14.9%     909.35 ±  4%  sched_debug.cpu.ttwu_local.stddev

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Xiaolong