* [lkp] [mm] 39a1aa8e19: will-it-scale.per_process_ops +5.2% improvement
@ 2016-03-30  5:46 kernel test robot
From: kernel test robot @ 2016-03-30  5:46 UTC
  To: Andrey Ryabinin; +Cc: LKML, lkp

[-- Attachment #1: Type: text/plain, Size: 5968 bytes --]

FYI, we noticed a +5.2% improvement of will-it-scale.per_process_ops due to your commit.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 39a1aa8e194ab67983de3b9d0b204ccee12e689a ("mm: deduplicate memory overcommitment code")
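For context, the malloc1 workload measures per-process throughput of anonymous
memory allocation.  Below is a minimal sketch of the kind of loop each worker
process runs (a hedged illustration, not the actual will-it-scale source; the
128 MiB size and the structure are assumptions, see tests/malloc1.c in the
upstream will-it-scale repository for the real test).  An allocation of this
size is normally served by mmap()/munmap() of anonymous memory, so every
iteration passes through the kernel's overcommit accounting
(__vm_enough_memory), the path consolidated by the commit above.

    /*
     * Hedged sketch of a malloc1-style worker loop; the real test may
     * differ in allocation size and structure.
     */
    #include <stdio.h>
    #include <stdlib.h>

    /* Assumed size; large enough that glibc backs it with mmap(). */
    #define ALLOC_SIZE (128UL * 1024 * 1024)

    int main(void)
    {
        unsigned long long iterations = 0;

        for (;;) {
            /* mmap() of anonymous memory -> overcommit check in the kernel */
            void *p = malloc(ALLOC_SIZE);

            if (!p)
                break;          /* allocation failed; stop instead of spinning */
            free(p);            /* munmap() of the same region */

            if (++iterations % 100000 == 0)
                printf("%llu iterations\n", iterations);
        }
        return 0;
    }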


=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/ivb42/malloc1/will-it-scale

commit: 
  ea606cf5d8df370e7932460dfd960b21f20e7c6d
  39a1aa8e194ab67983de3b9d0b204ccee12e689a

ea606cf5d8df370e 39a1aa8e194ab67983de3b9d0b 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    101461 ±  0%      +5.2%     106703 ±  0%  will-it-scale.per_process_ops
      0.10 ±  0%     +31.8%       0.13 ±  0%  will-it-scale.scalability
      6966 ±  8%      -9.9%       6278 ± 10%  meminfo.AnonHugePages
  62767556 ± 10%     -24.3%   47486848 ±  7%  cpuidle.C3-IVT.time
    232686 ±  5%     -18.4%     189919 ±  9%  cpuidle.C3-IVT.usage
   6823970 ±  3%      -8.9%    6214309 ±  4%  cpuidle.C6-IVT.usage
     66703 ±  0%     +12.2%      74872 ±  2%  numa-vmstat.node0.numa_other
  37441585 ±  0%     +26.4%   47319887 ±  0%  numa-vmstat.node1.numa_hit
  37417000 ±  0%     +26.4%   47303041 ±  0%  numa-vmstat.node1.numa_local
     24584 ±  0%     -31.5%      16845 ± 11%  numa-vmstat.node1.numa_other
      6.15 ± 31%     +36.2%       8.37 ± 15%  sched_debug.cpu.cpu_load[4].stddev
     20260 ±  6%     +13.4%      22965 ±  5%  sched_debug.cpu.nr_switches.min
      4450 ±  9%     +16.5%       5183 ±  8%  sched_debug.cpu.ttwu_local.max
    920.59 ± 10%     +15.0%       1058 ±  7%  sched_debug.cpu.ttwu_local.stddev
  2.59e+08 ±  0%     +13.9%   2.95e+08 ±  0%  numa-numastat.node0.local_node
  2.59e+08 ±  0%     +13.9%   2.95e+08 ±  0%  numa-numastat.node0.numa_hit
     10.25 ±119%  +75378.0%       7736 ± 19%  numa-numastat.node0.other_node
 1.097e+08 ±  0%     +28.0%  1.404e+08 ±  0%  numa-numastat.node1.local_node
 1.097e+08 ±  0%     +28.0%  1.404e+08 ±  0%  numa-numastat.node1.numa_hit
      9285 ±  0%     -83.1%       1568 ± 98%  numa-numastat.node1.other_node
 3.687e+08 ±  0%     +18.1%  4.354e+08 ±  0%  proc-vmstat.numa_hit
 3.687e+08 ±  0%     +18.1%  4.354e+08 ±  0%  proc-vmstat.numa_local
  52281716 ±  0%     +11.1%   58060959 ±  1%  proc-vmstat.pgalloc_dma32
 3.943e+08 ±  0%     +16.7%  4.603e+08 ±  0%  proc-vmstat.pgalloc_normal
 1.854e+08 ±  0%     +18.0%  2.187e+08 ±  0%  proc-vmstat.pgfault
 4.465e+08 ±  0%     +16.1%  5.183e+08 ±  0%  proc-vmstat.pgfree
      0.00 ± -1%      +Inf%       2.36 ± 12%  perf-profile.cycles-pp.__split_vma.isra.36.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
      2.38 ±  8%    -100.0%       0.00 ± -1%  perf-profile.cycles-pp.__split_vma.isra.37.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
      4.37 ± 31%     +32.9%       5.80 ± 15%  perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
      4.37 ± 31%     +32.9%       5.80 ± 15%  perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init.start_kernel
      4.08 ± 34%     +39.8%       5.70 ± 15%  perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
      0.89 ±  6%     +16.3%       1.04 ±  5%  perf-profile.cycles-pp.perf_event_aux.part.46.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.start_kernel.x86_64_start_reservations.x86_64_start_kernel
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.x86_64_start_kernel
      4.52 ± 28%     +29.4%       5.85 ± 14%  perf-profile.cycles-pp.x86_64_start_reservations.x86_64_start_kernel


ivb42: Ivytown Ivy Bridge-EP
Memory: 64G



                              will-it-scale.scalability

   0.14 ++-----------OOOO-O-------------------------------------------------+
  0.135 ++               O      OOO                                         |
        |          OO      O O                                              |
   0.13 OOOOOO OOOO         O OO   OOOOOOOO                                 |
  0.125 ++    O                                                             |
        |                                                                   |
   0.12 ++                                                                  |
  0.115 ++                                                                  |
   0.11 ++                                                                  |
        |                                       *                           |
  0.105 ++        *                        ***** :                          |
    0.1 ********** **     *****************      *****************          |
        |          * *   *                                        ** *  *****
  0.095 ++            ***                                           * **    |
   0.09 ++------------------------------------------------------------------+


	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong Ye

[-- Attachment #2: job.yaml --]
[-- Type: text/plain, Size: 3464 bytes --]

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: will-it-scale
default-monitors:
  wait: activate-monitor
  kmsg: 
  uptime: 
  iostat: 
  heartbeat: 
  vmstat: 
  numa-numastat: 
  numa-vmstat: 
  numa-meminfo: 
  proc-vmstat: 
  proc-stat:
    interval: 10
  meminfo: 
  slabinfo: 
  interrupts: 
  lock_stat: 
  latency_stats: 
  softirqs: 
  bdi_dev_mapping: 
  diskstats: 
  nfsstat: 
  cpuidle: 
  cpufreq-stats: 
  turbostat: 
  pmeter: 
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer: 
  watchdog: 
commit: 39a1aa8e194ab67983de3b9d0b204ccee12e689a
model: Ivytown Ivy Bridge-EP
nr_cpu: 48
memory: 64G
swap_partitions: LABEL=SWAP
rootfs_partition: LABEL=LKP-ROOTFS
category: benchmark
perf-profile:
  freq: 800
will-it-scale:
  test: malloc1
queue: bisect
testbox: ivb42
tbox_group: ivb42
kconfig: x86_64-rhel
enqueue_time: 2016-03-27 01:28:50.274043932 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: cbb727cf0abbc1788ae7a2be13107a8cd059a3ca
user: lkp
head_commit: 85060f056f13635ada31751734ceaa417fafa477
base_commit: b562e44f507e863c6792946e4e1b1449fbbac85d
branch: linux-devel/devel-hourly-2016032608
result_root: "/result/will-it-scale/performance-malloc1/ivb42/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/39a1aa8e194ab67983de3b9d0b204ccee12e689a/0"
job_file: "/lkp/scheduled/ivb42/bisect_will-it-scale-performance-malloc1-debian-x86_64-2015-02-07.cgz-x86_64-rhel-39a1aa8e194ab67983de3b9d0b204ccee12e689a-20160327-65598-ul4ews-0.yaml"
max_uptime: 1500
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/ivb42/bisect_will-it-scale-performance-malloc1-debian-x86_64-2015-02-07.cgz-x86_64-rhel-39a1aa8e194ab67983de3b9d0b204ccee12e689a-20160327-65598-ul4ews-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2016032608
- commit=39a1aa8e194ab67983de3b9d0b204ccee12e689a
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/39a1aa8e194ab67983de3b9d0b204ccee12e689a/vmlinuz-4.5.0-02567-g39a1aa8
- max_uptime=1500
- RESULT_ROOT=/result/will-it-scale/performance-malloc1/ivb42/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/39a1aa8e194ab67983de3b9d0b204ccee12e689a/0
- LKP_SERVER=inn
- |2-


  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/39a1aa8e194ab67983de3b9d0b204ccee12e689a/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/will-it-scale.cgz,/lkp/benchmarks/will-it-scale.cgz,/lkp/benchmarks/will-it-scale-x86_64.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/39a1aa8e194ab67983de3b9d0b204ccee12e689a/linux-headers.cgz"
repeat_to: 2
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/39a1aa8e194ab67983de3b9d0b204ccee12e689a/vmlinuz-4.5.0-02567-g39a1aa8"
dequeue_time: 2016-03-27 01:38:55.386064237 +08:00
job_state: finished
loadavg: 37.77 18.15 7.19 1/544 9381
start_time: '1459013979'
end_time: '1459014290'
version: "/lkp/lkp/.src-20160325-205817"

[-- Attachment #3: reproduce --]
[-- Type: text/plain, Size: 4565 bytes --]

2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-03-27 01:39:38 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-03-27 01:39:39 ./runtest.py malloc1 25 both 1 12 24 36 48
