* [mm] 23047a96d7: vm-scalability.throughput -23.8% regression
@ 2016-05-17  4:58 kernel test robot
  2016-05-23 20:46 ` Johannes Weiner
  0 siblings, 1 reply; 4+ messages in thread
From: kernel test robot @ 2016-05-17  4:58 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Sergey Senozhatsky, Vladimir Davydov, Michal Hocko,
	David Rientjes, LKML, lkp

[-- Attachment #1: Type: text/plain, Size: 13523 bytes --]

FYI, we noticed a -23.8% regression in vm-scalability.throughput due to commit:

commit 23047a96d7cfcfca1a6d026ecaec526ea4803e9e ("mm: workingset: per-cgroup cache thrash detection")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

in testcase: vm-scalability
on test machine: lkp-hsw01: 56 threads Grantley Haswell-EP with 64G memory
with the following conditions: cpufreq_governor=performance/runtime=300s/test=lru-file-readtwice


Details are as follows:
-------------------------------------------------------------------------------------------------->


=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/lkp-hsw01/lru-file-readtwice/vm-scalability

commit: 
  612e44939c3c77245ac80843c0c7876c8cf97282
  23047a96d7cfcfca1a6d026ecaec526ea4803e9e

612e44939c3c7724 23047a96d7cfcfca1a6d026eca 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
  28384711 ±  0%     -23.8%   21621405 ±  0%  vm-scalability.throughput
   1854112 ±  0%      -7.7%    1711141 ±  0%  vm-scalability.time.involuntary_context_switches
    176.03 ±  0%     -22.2%     136.95 ±  1%  vm-scalability.time.user_time
    302905 ±  2%     -31.2%     208386 ±  0%  vm-scalability.time.voluntary_context_switches
      0.92 ±  2%     +51.0%       1.38 ±  2%  perf-profile.cycles-pp.kswapd
    754212 ±  1%     -29.2%     533832 ±  2%  softirqs.RCU
     20518 ±  2%      -8.1%      18866 ±  2%  vmstat.system.cs
     10574 ± 19%     +29.9%      13737 ±  8%  numa-meminfo.node0.Mapped
     13490 ± 13%     -36.6%       8549 ± 17%  numa-meminfo.node1.Mapped
    583.00 ±  8%     +18.8%     692.50 ±  5%  slabinfo.avc_xperms_node.active_objs
    583.00 ±  8%     +18.8%     692.50 ±  5%  slabinfo.avc_xperms_node.num_objs
    176.03 ±  0%     -22.2%     136.95 ±  1%  time.user_time
    302905 ±  2%     -31.2%     208386 ±  0%  time.voluntary_context_switches
    263.42 ±  0%      -3.0%     255.52 ±  0%  turbostat.PkgWatt
     61.05 ±  0%     -12.7%      53.26 ±  0%  turbostat.RAMWatt
      1868 ± 16%     -43.7%       1052 ± 13%  cpuidle.C1-HSW.usage
      1499 ±  9%     -30.3%       1045 ± 12%  cpuidle.C3-HSW.usage
     16071 ±  4%     -15.0%      13664 ±  3%  cpuidle.C6-HSW.usage
     17572 ± 27%     -59.1%       7179 ±  5%  cpuidle.POLL.usage
 4.896e+08 ±  0%     -20.7%  3.884e+08 ±  0%  numa-numastat.node0.local_node
  71305376 ±  2%     -19.7%   57223573 ±  4%  numa-numastat.node0.numa_foreign
 4.896e+08 ±  0%     -20.7%  3.884e+08 ±  0%  numa-numastat.node0.numa_hit
  43760475 ±  3%     -22.1%   34074417 ±  5%  numa-numastat.node0.numa_miss
  43765010 ±  3%     -22.1%   34078937 ±  5%  numa-numastat.node0.other_node
 4.586e+08 ±  0%     -25.7%  3.408e+08 ±  1%  numa-numastat.node1.local_node
  43760472 ±  3%     -22.1%   34074417 ±  5%  numa-numastat.node1.numa_foreign
 4.586e+08 ±  0%     -25.7%  3.408e+08 ±  1%  numa-numastat.node1.numa_hit
  71305376 ±  2%     -19.7%   57223573 ±  4%  numa-numastat.node1.numa_miss
  71311721 ±  2%     -19.7%   57229904 ±  4%  numa-numastat.node1.other_node
    543.25 ±  3%     -15.0%     461.50 ±  3%  numa-vmstat.node0.nr_isolated_file
      2651 ± 19%     +30.2%       3451 ±  8%  numa-vmstat.node0.nr_mapped
      1226 ±  6%     -31.7%     837.25 ±  9%  numa-vmstat.node0.nr_pages_scanned
  37111278 ±  1%     -20.6%   29474561 ±  3%  numa-vmstat.node0.numa_foreign
 2.568e+08 ±  0%     -21.0%  2.028e+08 ±  0%  numa-vmstat.node0.numa_hit
 2.567e+08 ±  0%     -21.0%  2.027e+08 ±  0%  numa-vmstat.node0.numa_local
  22595209 ±  2%     -22.9%   17420980 ±  4%  numa-vmstat.node0.numa_miss
  22665391 ±  2%     -22.8%   17490378 ±  4%  numa-vmstat.node0.numa_other
     88.25 ±173%   +1029.7%     997.00 ± 63%  numa-vmstat.node0.workingset_activate
   3965715 ±  0%     -24.9%    2977998 ±  0%  numa-vmstat.node0.workingset_nodereclaim
     90.25 ±170%   +1006.4%     998.50 ± 63%  numa-vmstat.node0.workingset_refault
    612.50 ±  3%      -9.4%     554.75 ±  4%  numa-vmstat.node1.nr_alloc_batch
      3279 ± 14%     -34.1%       2161 ± 17%  numa-vmstat.node1.nr_mapped
  22597658 ±  2%     -22.9%   17423271 ±  4%  numa-vmstat.node1.numa_foreign
 2.403e+08 ±  0%     -25.9%  1.781e+08 ±  1%  numa-vmstat.node1.numa_hit
 2.403e+08 ±  0%     -25.9%  1.781e+08 ±  1%  numa-vmstat.node1.numa_local
  37115261 ±  1%     -20.6%   29478460 ±  3%  numa-vmstat.node1.numa_miss
  37136533 ±  1%     -20.6%   29500409 ±  3%  numa-vmstat.node1.numa_other
      6137 ±173%    +257.3%      21927 ± 60%  numa-vmstat.node1.workingset_activate
   3237162 ±  0%     -30.6%    2246385 ±  1%  numa-vmstat.node1.workingset_nodereclaim
      6139 ±173%    +257.2%      21930 ± 60%  numa-vmstat.node1.workingset_refault
    501243 ±  0%     -26.9%     366510 ±  1%  proc-vmstat.allocstall
     28483 ±  0%     -50.7%      14047 ±  3%  proc-vmstat.kswapd_low_wmark_hit_quickly
 1.151e+08 ±  0%     -20.7%   91297990 ±  0%  proc-vmstat.numa_foreign
 9.482e+08 ±  0%     -23.1%  7.293e+08 ±  0%  proc-vmstat.numa_hit
 9.482e+08 ±  0%     -23.1%  7.293e+08 ±  0%  proc-vmstat.numa_local
 1.151e+08 ±  0%     -20.7%   91297990 ±  0%  proc-vmstat.numa_miss
 1.151e+08 ±  0%     -20.7%   91308842 ±  0%  proc-vmstat.numa_other
     31562 ±  0%     -47.1%      16687 ±  2%  proc-vmstat.pageoutrun
 1.048e+09 ±  0%     -22.8%  8.088e+08 ±  0%  proc-vmstat.pgactivate
  28481000 ±  0%     -21.3%   22422907 ±  0%  proc-vmstat.pgalloc_dma32
 1.035e+09 ±  0%     -22.9%  7.984e+08 ±  0%  proc-vmstat.pgalloc_normal
 1.041e+09 ±  0%     -23.0%  8.024e+08 ±  0%  proc-vmstat.pgdeactivate
 1.063e+09 ±  0%     -22.8%    8.2e+08 ±  0%  proc-vmstat.pgfree
      2458 ± 91%     -93.5%     160.75 ± 29%  proc-vmstat.pgmigrate_success
  27571690 ±  0%     -20.6%   21889554 ±  0%  proc-vmstat.pgrefill_dma32
 1.014e+09 ±  0%     -23.0%  7.805e+08 ±  0%  proc-vmstat.pgrefill_normal
  25263166 ±  0%     -27.4%   18337251 ±  1%  proc-vmstat.pgscan_direct_dma32
 9.377e+08 ±  0%     -26.9%  6.852e+08 ±  1%  proc-vmstat.pgscan_direct_normal
   2134103 ±  1%     +57.6%    3363418 ±  6%  proc-vmstat.pgscan_kswapd_dma32
  69594167 ±  0%     +26.7%   88192786 ±  2%  proc-vmstat.pgscan_kswapd_normal
  25260851 ±  0%     -27.4%   18335464 ±  1%  proc-vmstat.pgsteal_direct_dma32
 9.376e+08 ±  0%     -26.9%  6.852e+08 ±  1%  proc-vmstat.pgsteal_direct_normal
   2133563 ±  1%     +57.6%    3362346 ±  6%  proc-vmstat.pgsteal_kswapd_dma32
  69585316 ±  0%     +26.7%   88176045 ±  2%  proc-vmstat.pgsteal_kswapd_normal
  17530080 ±  0%     -23.3%   13440416 ±  0%  proc-vmstat.slabs_scanned
      6226 ±173%    +268.2%      22924 ± 58%  proc-vmstat.workingset_activate
   7202139 ±  0%     -27.5%    5223203 ±  0%  proc-vmstat.workingset_nodereclaim
      6230 ±173%    +268.0%      22929 ± 58%  proc-vmstat.workingset_refault
    123.70 ± 12%     +26.7%     156.79 ± 11%  sched_debug.cfs_rq:/.load.stddev
     42.08 ±  1%     +23.3%      51.90 ±  8%  sched_debug.cfs_rq:/.load_avg.avg
    779.50 ±  2%     +20.7%     940.83 ±  5%  sched_debug.cfs_rq:/.load_avg.max
      9.46 ±  8%     -13.7%       8.17 ±  1%  sched_debug.cfs_rq:/.load_avg.min
    123.38 ±  2%     +31.4%     162.10 ±  6%  sched_debug.cfs_rq:/.load_avg.stddev
    304497 ± 22%     +65.6%     504169 ±  7%  sched_debug.cfs_rq:/.min_vruntime.stddev
     25.74 ±  8%     +33.9%      34.46 ±  8%  sched_debug.cfs_rq:/.runnable_load_avg.avg
    481.33 ± 11%     +50.5%     724.54 ± 11%  sched_debug.cfs_rq:/.runnable_load_avg.max
     69.65 ± 15%     +62.2%     112.95 ± 12%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
  -1363122 ±-14%     +52.6%   -2080627 ±-10%  sched_debug.cfs_rq:/.spread0.min
    304448 ± 22%     +65.6%     504111 ±  7%  sched_debug.cfs_rq:/.spread0.stddev
    733220 ±  5%     +13.0%     828548 ±  1%  sched_debug.cpu.avg_idle.avg
    123344 ± 11%     +73.4%     213827 ± 27%  sched_debug.cpu.avg_idle.min
    233732 ±  5%     -13.5%     202264 ±  6%  sched_debug.cpu.avg_idle.stddev
     26.93 ±  9%     +27.8%      34.42 ±  8%  sched_debug.cpu.cpu_load[0].avg
     78.79 ± 19%     +43.7%     113.20 ± 12%  sched_debug.cpu.cpu_load[0].stddev
     26.23 ±  8%     +30.5%      34.23 ±  7%  sched_debug.cpu.cpu_load[1].avg
    513.17 ± 12%     +38.6%     711.12 ± 11%  sched_debug.cpu.cpu_load[1].max
     73.34 ± 15%     +50.7%     110.55 ± 11%  sched_debug.cpu.cpu_load[1].stddev
     25.93 ±  6%     +32.6%      34.40 ±  6%  sched_debug.cpu.cpu_load[2].avg
    488.38 ±  8%     +44.8%     706.96 ± 10%  sched_debug.cpu.cpu_load[2].max
     69.79 ± 10%     +56.9%     109.52 ± 10%  sched_debug.cpu.cpu_load[2].stddev
     25.89 ±  4%     +35.1%      34.97 ±  4%  sched_debug.cpu.cpu_load[3].avg
    467.83 ±  7%     +50.2%     702.71 ±  9%  sched_debug.cpu.cpu_load[3].max
     67.27 ±  9%     +63.6%     110.03 ±  8%  sched_debug.cpu.cpu_load[3].stddev
     25.83 ±  4%     +37.2%      35.44 ±  3%  sched_debug.cpu.cpu_load[4].avg
    445.29 ±  9%     +56.7%     697.88 ±  8%  sched_debug.cpu.cpu_load[4].max
     64.41 ±  9%     +72.4%     111.02 ±  6%  sched_debug.cpu.cpu_load[4].stddev
    123.66 ± 12%     +28.2%     158.54 ± 11%  sched_debug.cpu.load.stddev
      1.56 ±  1%      +9.8%       1.71 ±  0%  sched_debug.cpu.nr_running.avg
      0.46 ± 12%     +28.4%       0.59 ±  6%  sched_debug.cpu.nr_running.stddev
     57967 ±  3%      -9.8%      52290 ±  2%  sched_debug.cpu.nr_switches.avg
    270099 ±  9%     -16.4%     225748 ±  7%  sched_debug.cpu.nr_switches.max
     27370 ±  1%     -13.3%      23723 ±  0%  sched_debug.cpu.nr_switches.min
     55749 ±  7%     -14.3%      47767 ±  5%  sched_debug.cpu.nr_switches.stddev
    -55.33 ±-19%     -40.4%     -32.96 ± -2%  sched_debug.cpu.nr_uninterruptible.min
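
A note on reading these tables: each %change column is relative to the mean
of the first commit listed (here the parent commit 612e44939c3c), and the
±N% columns are the per-metric %stddev across runs. As a quick sanity check
of the headline number, the throughput delta can be recomputed from the two
means reported above:

        awk 'BEGIN { old = 28384711; new = 21621405;
                     printf "%+.1f%%\n", (new - old) / old * 100 }'
        # prints -23.8%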

=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
  gcc-5/x86_64-randconfig-a0-04240012/yocto-minimal-i386.cgz/1/vm-kbuild-yocto-ia32/boot

commit: 
  612e44939c3c77245ac80843c0c7876c8cf97282
  23047a96d7cfcfca1a6d026ecaec526ea4803e9e

612e44939c3c7724 23047a96d7cfcfca1a6d026eca 
---------------- -------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
           :50           2%           1:180   kmsg.augmented_rbtree_testing
           :50         216%         108:180   last_state.is_incomplete_run



                           vm-scalability.time.user_time

  180 *+**-*-----*-*-*--*-*---*----*-*-**------**-*-**-**------*-**---------+
  175 ++     **.*     *    *.*  *.*       **.*            *.**     +  .*.**.*
      |                                                             **      |
  170 ++                                                                    |
  165 ++                                                                    |
      |                                                                     |
  160 ++                                                                    |
  155 ++                                                                    |
  150 ++                                                                    |
      | O                                                                   |
  145 O+ O O OO    O                                                        |
  140 ++        OO                                     O                    |
      |                   OO O  O  O O     O   OO O O                       |
  135 ++              O O     O   O    OO O  O       O                      |
  130 ++-------------O------------------------------------------------------+


                               vm-scalability.throughput

  2.9e+07 ++------*--------------------------*---------------*--------------+
          |.*   *  *.* .**.**.* .*.**. *.  .*  **. *. .*   *  *.* .* .**.**.*
  2.8e+07 *+ *.*      *        *      *  **       *  *  *.*      *  *       |
  2.7e+07 ++                                                                |
          |                                                                 |
  2.6e+07 ++                                                                |
          |                                                                 |
  2.5e+07 ++                                                                |
          |                                                                 |
  2.4e+07 ++                                                                |
  2.3e+07 ++O  OO     O                                                     |
          O  O    OO O                                                      |
  2.2e+07 ++                                                                |
          |             OO OO OO O OO OO OO OO OO OO O OO                   |
  2.1e+07 ++----------------------------------------------------------------+



	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
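
The attached reproduce script (attachment #3) pins every CPU to the
performance cpufreq governor before starting the benchmark; a minimal
equivalent of that step, if you are setting up a machine by hand, is:

        # requires root and the cpufreq sysfs interface
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
                echo performance > "$g"
        done

The script then creates a sparse image on tmpfs, formats it as XFS,
loop-mounts it, and runs case-lru-file-readtwice against 56 sparse test
files, one per hardware thread.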


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong

[-- Attachment #2: job.yaml --]
[-- Type: text/plain, Size: 3531 bytes --]

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: vm-scalability
default-monitors:
  wait: activate-monitor
  kmsg: 
  uptime: 
  iostat: 
  heartbeat: 
  vmstat: 
  numa-numastat: 
  numa-vmstat: 
  numa-meminfo: 
  proc-vmstat: 
  proc-stat:
    interval: 10
  meminfo: 
  slabinfo: 
  interrupts: 
  lock_stat: 
  latency_stats: 
  softirqs: 
  bdi_dev_mapping: 
  diskstats: 
  nfsstat: 
  cpuidle: 
  cpufreq-stats: 
  turbostat: 
  pmeter: 
  sched_debug:
    interval: 60
cpufreq_governor: performance
NFS_HANG_DF_TIMEOUT: 200
NFS_HANG_CHECK_INTERVAL: 900
default-watchdogs:
  oom-killer: 
  watchdog: 
  nfs-hang: 
commit: 23047a96d7cfcfca1a6d026ecaec526ea4803e9e
model: Grantley Haswell-EP
nr_cpu: 56
memory: 64G
hdd_partitions: 
swap_partitions: 
rootfs_partition: 
category: benchmark
perf-profile: 
runtime: 300s
size: 
vm-scalability:
  test: lru-file-readtwice
queue: bisect
testbox: lkp-hsw01
tbox_group: lkp-hsw01
kconfig: x86_64-rhel
enqueue_time: 2016-05-17 00:13:15.109289824 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: 7f627bf0745c19a8c6ee6f267ee1d3176e422ef4
user: lkp
head_commit: 2dcd0af568b0cf583645c8a317dd12e344b1c72a
base_commit: 145bdaa1501bf1c8a6cfa8ea5e347b9a46aad1b7
branch: linus/master
result_root: "/result/vm-scalability/performance-300s-lru-file-readtwice/lkp-hsw01/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/23047a96d7cfcfca1a6d026ecaec526ea4803e9e/0"
job_file: "/lkp/scheduled/lkp-hsw01/bisect_vm-scalability-performance-300s-lru-file-readtwice-debian-x86_64-2015-02-07.cgz-x86_64-rhel-23047a96d7cfcfca1a6d026ecaec526ea4803e9e-20160517-25208-1epp6aw-0.yaml"
max_uptime: 1500
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/lkp-hsw01/bisect_vm-scalability-performance-300s-lru-file-readtwice-debian-x86_64-2015-02-07.cgz-x86_64-rhel-23047a96d7cfcfca1a6d026ecaec526ea4803e9e-20160517-25208-1epp6aw-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linus/master
- commit=23047a96d7cfcfca1a6d026ecaec526ea4803e9e
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/23047a96d7cfcfca1a6d026ecaec526ea4803e9e/vmlinuz-4.5.0-00570-g23047a9
- max_uptime=1500
- RESULT_ROOT=/result/vm-scalability/performance-300s-lru-file-readtwice/lkp-hsw01/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/23047a96d7cfcfca1a6d026ecaec526ea4803e9e/0
- LKP_SERVER=inn
- |2-


  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/23047a96d7cfcfca1a6d026ecaec526ea4803e9e/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/perf-profile-x86_64.cgz,/lkp/benchmarks/vm-scalability.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/23047a96d7cfcfca1a6d026ecaec526ea4803e9e/linux-headers.cgz"
repeat_to: 2
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/23047a96d7cfcfca1a6d026ecaec526ea4803e9e/vmlinuz-4.5.0-00570-g23047a9"
dequeue_time: 2016-05-17 00:25:26.281743408 +08:00
job_state: finished
loadavg: 39.10 58.24 29.31 1/548 9271
start_time: '1463415975'
end_time: '1463416286'
version: "/lkp/lkp/.src-20160516-224742"

[-- Attachment #3: reproduce --]
[-- Type: text/plain, Size: 12115 bytes --]

2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
2016-05-17 00:26:13 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu48/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu49/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu50/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu51/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu52/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu53/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu54/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu55/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-05-17 00:26:14 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-05-17 00:26:15 mount -t tmpfs -o size=100% vm-scalability-tmp /tmp/vm-scalability-tmp
2016-05-17 00:26:15 truncate -s 67404406784 /tmp/vm-scalability-tmp/vm-scalability.img
2016-05-17 00:26:15 mkfs.xfs -q /tmp/vm-scalability-tmp/vm-scalability.img
2016-05-17 00:26:15 mount -o loop /tmp/vm-scalability-tmp/vm-scalability.img /tmp/vm-scalability-tmp/vm-scalability
2016-05-17 00:26:15 ./case-lru-file-readtwice
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-1 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-2 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-3 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-4 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-5 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-6 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-7 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-8 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-9 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-10 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-11 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-12 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-13 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-14 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-15 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-16 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-17 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-18 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-19 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-20 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-21 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-22 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-23 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-24 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-25 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-26 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-27 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-28 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-29 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-30 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-31 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-32 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-33 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-34 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-35 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-36 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-37 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-38 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-39 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-40 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-41 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-42 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-43 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-44 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-45 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-46 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-47 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-48 -s 78536544841
2016-05-17 00:26:15 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-49 -s 78536544841
2016-05-17 00:26:16 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-50 -s 78536544841
2016-05-17 00:26:16 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-51 -s 78536544841
2016-05-17 00:26:16 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-52 -s 78536544841
2016-05-17 00:26:16 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-53 -s 78536544841
2016-05-17 00:26:16 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-54 -s 78536544841
2016-05-17 00:26:16 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-55 -s 78536544841
2016-05-17 00:26:16 truncate /tmp/vm-scalability-tmp/vm-scalability/sparse-lru-file-readtwice-56 -s 78536544841
2016-05-17 00:31:26 umount /tmp/vm-scalability-tmp/vm-scalability
2016-05-17 00:31:26 rm /tmp/vm-scalability-tmp/vm-scalability.img
2016-05-17 00:31:26 umount /tmp/vm-scalability-tmp


* Re: [mm] 23047a96d7: vm-scalability.throughput -23.8% regression
  2016-05-17  4:58 [mm] 23047a96d7: vm-scalability.throughput -23.8% regression kernel test robot
@ 2016-05-23 20:46 ` Johannes Weiner
  2016-05-25  6:06   ` Ye Xiaolong
  0 siblings, 1 reply; 4+ messages in thread
From: Johannes Weiner @ 2016-05-23 20:46 UTC (permalink / raw)
  To: kernel test robot
  Cc: Sergey Senozhatsky, Vladimir Davydov, Michal Hocko,
	David Rientjes, LKML, lkp

Hi,

thanks for your report.

On Tue, May 17, 2016 at 12:58:05PM +0800, kernel test robot wrote:
> FYI, we noticed a -23.8% regression in vm-scalability.throughput due to commit:
> 
> commit 23047a96d7cfcfca1a6d026ecaec526ea4803e9e ("mm: workingset: per-cgroup cache thrash detection")
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> 
> in testcase: vm-scalability
> on test machine: lkp-hsw01: 56 threads Grantley Haswell-EP with 64G memory
> with the following conditions: cpufreq_governor=performance/runtime=300s/test=lru-file-readtwice

That test hammers the LRU activation path, to which this patch added
the cgroup lookup and pinning code. Does the following patch help?

From b535c630fd8954865b7536c915c3916beb3b4830 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Mon, 23 May 2016 16:14:24 -0400
Subject: [PATCH] mm: fix vm-scalability regression in workingset_activation()

23047a96d7cf ("mm: workingset: per-cgroup cache thrash detection")
added cgroup lookup and pinning overhead to the LRU activation path,
which the vm-scalability benchmark is particularly sensitive to.

Inline the lookup functions to eliminate calls. Furthermore, since
activations are not moved when pages are moved between memcgs, we
don't need the full page->mem_cgroup locking; holding the RCU lock is
enough to prevent the memcg from being freed.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/memcontrol.h | 43 ++++++++++++++++++++++++++++++++++++++++++-
 include/linux/mm.h         |  8 ++++++++
 mm/memcontrol.c            | 42 ------------------------------------------
 mm/workingset.c            | 10 ++++++----
 4 files changed, 56 insertions(+), 47 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a805474df4ab..0bb36cf89bf6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -306,7 +306,48 @@ void mem_cgroup_uncharge_list(struct list_head *page_list);
 
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
 
-struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *);
+static inline struct mem_cgroup_per_zone *
+mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone)
+{
+	int nid = zone_to_nid(zone);
+	int zid = zone_idx(zone);
+
+	return &memcg->nodeinfo[nid]->zoneinfo[zid];
+}
+
+/**
+ * mem_cgroup_zone_lruvec - get the lru list vector for a zone and memcg
+ * @zone: zone of the wanted lruvec
+ * @memcg: memcg of the wanted lruvec
+ *
+ * Returns the lru list vector holding pages for the given @zone and
+ * @mem.  This can be the global zone lruvec, if the memory controller
+ * is disabled.
+ */
+static inline struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
+						    struct mem_cgroup *memcg)
+{
+	struct mem_cgroup_per_zone *mz;
+	struct lruvec *lruvec;
+
+	if (mem_cgroup_disabled()) {
+		lruvec = &zone->lruvec;
+		goto out;
+	}
+
+	mz = mem_cgroup_zone_zoneinfo(memcg, zone);
+	lruvec = &mz->lruvec;
+out:
+	/*
+	 * Since a node can be onlined after the mem_cgroup was created,
+	 * we have to be prepared to initialize lruvec->zone here;
+	 * and if offlined then reonlined, we need to reinitialize it.
+	 */
+	if (unlikely(lruvec->zone != zone))
+		lruvec->zone = zone;
+	return lruvec;
+}
+
 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct zone *);
 
 bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b530c99e8e81..a9dd54e196a7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -943,11 +943,19 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 {
 	return page->mem_cgroup;
 }
+static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
+{
+	return READ_ONCE(page->mem_cgroup);
+}
 #else
 static inline struct mem_cgroup *page_memcg(struct page *page)
 {
 	return NULL;
 }
+static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
+{
+	return NULL;
+}
 #endif
 
 /*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b3f16ab4b431..f65e5e527864 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -323,15 +323,6 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
 
 #endif /* !CONFIG_SLOB */
 
-static struct mem_cgroup_per_zone *
-mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone)
-{
-	int nid = zone_to_nid(zone);
-	int zid = zone_idx(zone);
-
-	return &memcg->nodeinfo[nid]->zoneinfo[zid];
-}
-
 /**
  * mem_cgroup_css_from_page - css of the memcg associated with a page
  * @page: page of interest
@@ -944,39 +935,6 @@ static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
 	     iter = mem_cgroup_iter(NULL, iter, NULL))
 
 /**
- * mem_cgroup_zone_lruvec - get the lru list vector for a zone and memcg
- * @zone: zone of the wanted lruvec
- * @memcg: memcg of the wanted lruvec
- *
- * Returns the lru list vector holding pages for the given @zone and
- * @mem.  This can be the global zone lruvec, if the memory controller
- * is disabled.
- */
-struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
-				      struct mem_cgroup *memcg)
-{
-	struct mem_cgroup_per_zone *mz;
-	struct lruvec *lruvec;
-
-	if (mem_cgroup_disabled()) {
-		lruvec = &zone->lruvec;
-		goto out;
-	}
-
-	mz = mem_cgroup_zone_zoneinfo(memcg, zone);
-	lruvec = &mz->lruvec;
-out:
-	/*
-	 * Since a node can be onlined after the mem_cgroup was created,
-	 * we have to be prepared to initialize lruvec->zone here;
-	 * and if offlined then reonlined, we need to reinitialize it.
-	 */
-	if (unlikely(lruvec->zone != zone))
-		lruvec->zone = zone;
-	return lruvec;
-}
-
-/**
  * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page
  * @page: the page
  * @zone: zone of the page
diff --git a/mm/workingset.c b/mm/workingset.c
index 8a75f8d2916a..8252de4566e9 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -305,9 +305,10 @@ bool workingset_refault(void *shadow)
  */
 void workingset_activation(struct page *page)
 {
+	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
-	lock_page_memcg(page);
+	rcu_read_lock();
 	/*
 	 * Filter non-memcg pages here, e.g. unmap can call
 	 * mark_page_accessed() on VDSO pages.
@@ -315,12 +316,13 @@ void workingset_activation(struct page *page)
 	 * XXX: See workingset_refault() - this should return
 	 * root_mem_cgroup even for !CONFIG_MEMCG.
 	 */
-	if (!mem_cgroup_disabled() && !page_memcg(page))
+	memcg = page_memcg_rcu(page);
+	if (!mem_cgroup_disabled() && !memcg)
 		goto out;
-	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
+	lruvec = mem_cgroup_zone_lruvec(page_zone(page), memcg);
 	atomic_long_inc(&lruvec->inactive_age);
 out:
-	unlock_page_memcg(page);
+	rcu_read_unlock();
 }
 
 /*
-- 
2.8.2


* Re: [mm] 23047a96d7: vm-scalability.throughput -23.8% regression
  2016-05-23 20:46 ` Johannes Weiner
@ 2016-05-25  6:06   ` Ye Xiaolong
  2016-05-27  8:11     ` [LKP] " Ye Xiaolong
  0 siblings, 1 reply; 4+ messages in thread
From: Ye Xiaolong @ 2016-05-25  6:06 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Sergey Senozhatsky, Vladimir Davydov, Michal Hocko,
	David Rientjes, LKML, lkp

On Mon, May 23, 2016 at 04:46:05PM -0400, Johannes Weiner wrote:
>Hi,
>
>thanks for your report.
>
>On Tue, May 17, 2016 at 12:58:05PM +0800, kernel test robot wrote:
>> FYI, we noticed a -23.8% regression in vm-scalability.throughput due to commit:
>> 
>> commit 23047a96d7cfcfca1a6d026ecaec526ea4803e9e ("mm: workingset: per-cgroup cache thrash detection")
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>> 
>> in testcase: vm-scalability
>> on test machine: lkp-hsw01: 56 threads Grantley Haswell-EP with 64G memory
>> with the following conditions: cpufreq_governor=performance/runtime=300s/test=lru-file-readtwice
>
>That test hammers the LRU activation path, to which this patch added
>the cgroup lookup and pinning code. Does the following patch help?
>

Hi,

Here is the comparison of the original first bad commit (23047a96d) and your new patch (063f6715e):
vm-scalability.throughput improved by 11.3%.


compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/lkp-hsw01/lru-file-readtwice/vm-scalability

commit: 
  23047a96d7cfcfca1a6d026ecaec526ea4803e9e
  063f6715e77a7be5770d6081fe6d7ca2437ac9f2

23047a96d7cfcfca 063f6715e77a7be5770d6081fe
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
  21621405 ±  0%     +11.3%   24069657 ±  2%  vm-scalability.throughput
   1711141 ±  0%     +40.9%    2411083 ±  2%  vm-scalability.time.involuntary_context_switches
      2747 ±  0%      +2.4%       2812 ±  0%  vm-scalability.time.maximum_resident_set_size
      5243 ±  0%      -1.2%       5180 ±  0%  vm-scalability.time.percent_of_cpu_this_job_got
    136.95 ±  1%     +13.6%     155.55 ±  0%  vm-scalability.time.user_time
    208386 ±  0%     -71.5%      59394 ± 16%  vm-scalability.time.voluntary_context_switches
      1.38 ±  2%     +21.7%       1.69 ±  2%  perf-profile.cycles-pp.kswapd
    160522 ±  5%     -30.0%     112342 ±  2%  softirqs.SCHED
      2536 ±  0%      +7.3%       2722 ±  2%  uptime.idle
   1711141 ±  0%     +40.9%    2411083 ±  2%  time.involuntary_context_switches
    136.95 ±  1%     +13.6%     155.55 ±  0%  time.user_time
    208386 ±  0%     -71.5%      59394 ± 16%  time.voluntary_context_switches
      1052 ± 13%   +1453.8%      16346 ± 39%  cpuidle.C1-HSW.usage
      1045 ± 12%     -54.3%     477.50 ± 25%  cpuidle.C3-HSW.usage
 5.719e+08 ±  1%     +17.9%  6.743e+08 ±  0%  cpuidle.C6-HSW.time
  40424411 ±  2%     -97.3%    1076732 ± 99%  cpuidle.POLL.time
      7179 ±  5%     -99.9%       6.50 ± 53%  cpuidle.POLL.usage
      0.51 ±  8%     -40.6%       0.30 ± 13%  turbostat.CPU%c1
      2.83 ±  2%     +30.5%       3.70 ±  0%  turbostat.CPU%c6
      0.23 ± 79%    +493.4%       1.35 ±  2%  turbostat.Pkg%pc2
    255.52 ±  0%      +3.3%     263.95 ±  0%  turbostat.PkgWatt
     53.26 ±  0%     +14.9%      61.22 ±  0%  turbostat.RAMWatt
   1836104 ±  0%     +13.3%    2079934 ±  4%  vmstat.memory.free
      5.00 ±  0%     -70.0%       1.50 ± 33%  vmstat.procs.b
    107.00 ±  0%      +8.4%     116.00 ±  2%  vmstat.procs.r
     18866 ±  2%     +40.1%      26436 ± 13%  vmstat.system.cs
     69056 ±  0%     +11.8%      77219 ±  1%  vmstat.system.in
  31628132 ±  0%     +80.9%   57224963 ±  0%  meminfo.Active
  31294504 ±  0%     +81.7%   56876042 ±  0%  meminfo.Active(file)
    142271 ±  6%     +11.2%     158138 ±  5%  meminfo.DirectMap4k
  30612825 ±  0%     -87.2%    3915695 ±  0%  meminfo.Inactive
  30562772 ±  0%     -87.4%    3862631 ±  0%  meminfo.Inactive(file)
     15635 ±  1%     +38.0%      21572 ±  8%  meminfo.KernelStack
     22575 ±  2%      +7.7%      24316 ±  4%  meminfo.Mapped
   1762372 ±  3%     +12.2%    1976873 ±  3%  meminfo.MemFree
    847557 ±  0%    +105.5%    1741958 ±  8%  meminfo.SReclaimable
    946378 ±  0%     +95.1%    1846370 ±  8%  meminfo.Slab

Thanks,
Xiaolong

>From b535c630fd8954865b7536c915c3916beb3b4830 Mon Sep 17 00:00:00 2001
>From: Johannes Weiner <hannes@cmpxchg.org>
>Date: Mon, 23 May 2016 16:14:24 -0400
>Subject: [PATCH] mm: fix vm-scalability regression in workingset_activation()
>
>23047a96d7cf ("mm: workingset: per-cgroup cache thrash detection")
>added cgroup lookup and pinning overhead to the LRU activation path,
>which the vm-scalability benchmark is particularly sensitive to.
>
>Inline the lookup functions to eliminate calls. Furthermore, since
>activations are not moved when pages are moved between memcgs, we
>don't need the full page->mem_cgroup locking; holding the RCU lock is
>enough to prevent the memcg from being freed.
>
>Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
>---
> include/linux/memcontrol.h | 43 ++++++++++++++++++++++++++++++++++++++++++-
> include/linux/mm.h         |  8 ++++++++
> mm/memcontrol.c            | 42 ------------------------------------------
> mm/workingset.c            | 10 ++++++----
> 4 files changed, 56 insertions(+), 47 deletions(-)
>
>diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>index a805474df4ab..0bb36cf89bf6 100644
>--- a/include/linux/memcontrol.h
>+++ b/include/linux/memcontrol.h
>@@ -306,7 +306,48 @@ void mem_cgroup_uncharge_list(struct list_head *page_list);
> 
> void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
> 
>-struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *);
>+static inline struct mem_cgroup_per_zone *
>+mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone)
>+{
>+	int nid = zone_to_nid(zone);
>+	int zid = zone_idx(zone);
>+
>+	return &memcg->nodeinfo[nid]->zoneinfo[zid];
>+}
>+
>+/**
>+ * mem_cgroup_zone_lruvec - get the lru list vector for a zone and memcg
>+ * @zone: zone of the wanted lruvec
>+ * @memcg: memcg of the wanted lruvec
>+ *
>+ * Returns the lru list vector holding pages for the given @zone and
>+ * @mem.  This can be the global zone lruvec, if the memory controller
>+ * is disabled.
>+ */
>+static inline struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
>+						    struct mem_cgroup *memcg)
>+{
>+	struct mem_cgroup_per_zone *mz;
>+	struct lruvec *lruvec;
>+
>+	if (mem_cgroup_disabled()) {
>+		lruvec = &zone->lruvec;
>+		goto out;
>+	}
>+
>+	mz = mem_cgroup_zone_zoneinfo(memcg, zone);
>+	lruvec = &mz->lruvec;
>+out:
>+	/*
>+	 * Since a node can be onlined after the mem_cgroup was created,
>+	 * we have to be prepared to initialize lruvec->zone here;
>+	 * and if offlined then reonlined, we need to reinitialize it.
>+	 */
>+	if (unlikely(lruvec->zone != zone))
>+		lruvec->zone = zone;
>+	return lruvec;
>+}
>+
> struct lruvec *mem_cgroup_page_lruvec(struct page *, struct zone *);
> 
> bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
>diff --git a/include/linux/mm.h b/include/linux/mm.h
>index b530c99e8e81..a9dd54e196a7 100644
>--- a/include/linux/mm.h
>+++ b/include/linux/mm.h
>@@ -943,11 +943,19 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
> {
> 	return page->mem_cgroup;
> }
>+static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
>+{
>+	return READ_ONCE(page->mem_cgroup);
>+}
> #else
> static inline struct mem_cgroup *page_memcg(struct page *page)
> {
> 	return NULL;
> }
>+static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
>+{
>+	return NULL;
>+}
> #endif
> 
> /*
>diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>index b3f16ab4b431..f65e5e527864 100644
>--- a/mm/memcontrol.c
>+++ b/mm/memcontrol.c
>@@ -323,15 +323,6 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
> 
> #endif /* !CONFIG_SLOB */
> 
>-static struct mem_cgroup_per_zone *
>-mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone)
>-{
>-	int nid = zone_to_nid(zone);
>-	int zid = zone_idx(zone);
>-
>-	return &memcg->nodeinfo[nid]->zoneinfo[zid];
>-}
>-
> /**
>  * mem_cgroup_css_from_page - css of the memcg associated with a page
>  * @page: page of interest
>@@ -944,39 +935,6 @@ static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
> 	     iter = mem_cgroup_iter(NULL, iter, NULL))
> 
> /**
>- * mem_cgroup_zone_lruvec - get the lru list vector for a zone and memcg
>- * @zone: zone of the wanted lruvec
>- * @memcg: memcg of the wanted lruvec
>- *
>- * Returns the lru list vector holding pages for the given @zone and
>- * @mem.  This can be the global zone lruvec, if the memory controller
>- * is disabled.
>- */
>-struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
>-				      struct mem_cgroup *memcg)
>-{
>-	struct mem_cgroup_per_zone *mz;
>-	struct lruvec *lruvec;
>-
>-	if (mem_cgroup_disabled()) {
>-		lruvec = &zone->lruvec;
>-		goto out;
>-	}
>-
>-	mz = mem_cgroup_zone_zoneinfo(memcg, zone);
>-	lruvec = &mz->lruvec;
>-out:
>-	/*
>-	 * Since a node can be onlined after the mem_cgroup was created,
>-	 * we have to be prepared to initialize lruvec->zone here;
>-	 * and if offlined then reonlined, we need to reinitialize it.
>-	 */
>-	if (unlikely(lruvec->zone != zone))
>-		lruvec->zone = zone;
>-	return lruvec;
>-}
>-
>-/**
>  * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page
>  * @page: the page
>  * @zone: zone of the page
>diff --git a/mm/workingset.c b/mm/workingset.c
>index 8a75f8d2916a..8252de4566e9 100644
>--- a/mm/workingset.c
>+++ b/mm/workingset.c
>@@ -305,9 +305,10 @@ bool workingset_refault(void *shadow)
>  */
> void workingset_activation(struct page *page)
> {
>+	struct mem_cgroup *memcg;
> 	struct lruvec *lruvec;
> 
>-	lock_page_memcg(page);
>+	rcu_read_lock();
> 	/*
> 	 * Filter non-memcg pages here, e.g. unmap can call
> 	 * mark_page_accessed() on VDSO pages.
>@@ -315,12 +316,13 @@ void workingset_activation(struct page *page)
> 	 * XXX: See workingset_refault() - this should return
> 	 * root_mem_cgroup even for !CONFIG_MEMCG.
> 	 */
>-	if (!mem_cgroup_disabled() && !page_memcg(page))
>+	memcg = page_memcg_rcu(page);
>+	if (!mem_cgroup_disabled() && !memcg)
> 		goto out;
>-	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
>+	lruvec = mem_cgroup_zone_lruvec(page_zone(page), memcg);
> 	atomic_long_inc(&lruvec->inactive_age);
> out:
>-	unlock_page_memcg(page);
>+	rcu_read_unlock();
> }
> 
> /*
>-- 
>2.8.2
>


* Re: [LKP] [mm] 23047a96d7: vm-scalability.throughput -23.8% regression
  2016-05-25  6:06   ` Ye Xiaolong
@ 2016-05-27  8:11     ` Ye Xiaolong
  0 siblings, 0 replies; 4+ messages in thread
From: Ye Xiaolong @ 2016-05-27  8:11 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Sergey Senozhatsky, Vladimir Davydov, Michal Hocko,
	David Rientjes, LKML, lkp

On Wed, May 25, 2016 at 02:06:17PM +0800, Ye Xiaolong wrote:
>On Mon, May 23, 2016 at 04:46:05PM -0400, Johannes Weiner wrote:
>>Hi,
>>
>>thanks for your report.
>>
>>On Tue, May 17, 2016 at 12:58:05PM +0800, kernel test robot wrote:
>>> FYI, we noticed a -23.8% regression in vm-scalability.throughput due to commit:
>>> 
>>> commit 23047a96d7cfcfca1a6d026ecaec526ea4803e9e ("mm: workingset: per-cgroup cache thrash detection")
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>>> 
>>> in testcase: vm-scalability
>>> on test machine: lkp-hsw01: 56 threads Grantley Haswell-EP with 64G memory
>>> with the following conditions: cpufreq_governor=performance/runtime=300s/test=lru-file-readtwice
>>
>>That test hammers the LRU activation path, to which this patch added
>>the cgroup lookup and pinning code. Does the following patch help?
>>

Hi, Johannes

FYI, I have done more tests with your fix patch.

1) Applied it on top of the latest kernel (head commit: 478a1469 ("Merge tag 'dax-locking-for-4.7' of
git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm"))

The following is a comparison among the first bad commit's parent, the first
bad commit, the head commit of Linus' master branch, and your fix commit (a7abed95):

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/lkp-hsw01/lru-file-readtwice/vm-scalability

commit: 
  612e44939c3c77245ac80843c0c7876c8cf97282
  23047a96d7cfcfca1a6d026ecaec526ea4803e9e
  478a1469a7d27fe6b2f85fc801ecdeb8afc836e6
  a7abed950afdc1186d4eaf442b7eb296ff04c947

612e44939c3c7724 23047a96d7cfcfca1a6d026eca 478a1469a7d27fe6b2f85fc801 a7abed950afdc1186d4eaf442b
---------------- -------------------------- -------------------------- --------------------------
         %stddev     %change         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \          |                \
  28384711 ±  0%     -23.8%   21621405 ±  0%     -12.4%   24865101 ±  4%      -8.1%   26076417 ±  3%  vm-scalability.throughput
   1854112 ±  0%      -7.7%    1711141 ±  0%      +6.4%    1973257 ±  4%      +9.2%    2025214 ±  3%  vm-scalability.time.involuntary_context_switches
      5279 ±  0%      -0.7%       5243 ±  0%      -2.6%       5143 ±  0%      -2.4%       5153 ±  0%  vm-scalability.time.percent_of_cpu_this_job_got
     16267 ±  0%      -0.6%      16173 ±  0%      -2.0%      15934 ±  0%      -1.8%      15978 ±  0%  vm-scalability.time.system_time
    176.03 ±  0%     -22.2%     136.95 ±  1%     -10.4%     157.66 ±  1%     -11.2%     156.32 ±  0%  vm-scalability.time.user_time
    302905 ±  2%     -31.2%     208386 ±  0%      +5.8%     320618 ± 47%     -36.0%     193991 ± 22%  vm-scalability.time.voluntary_context_switches
      0.92 ±  2%     +51.0%       1.38 ±  2%     +96.5%       1.80 ±  0%     +97.3%       1.81 ±  0%  perf-profile.cycles-pp.kswapd
      2585 ±  1%      -1.9%       2536 ±  0%      +9.6%       2834 ±  1%     +10.7%       2862 ±  1%  uptime.idle
    754212 ±  1%     -29.2%     533832 ±  2%     -34.8%     491397 ±  2%     -27.5%     546666 ±  8%  softirqs.RCU
    151918 ±  8%      +5.7%     160522 ±  5%     -17.4%     125419 ± 18%     -22.7%     117409 ±  7%  softirqs.SCHED
    176.03 ±  0%     -22.2%     136.95 ±  1%     -10.4%     157.66 ±  1%     -11.2%     156.32 ±  0%  time.user_time
    302905 ±  2%     -31.2%     208386 ±  0%      +5.8%     320618 ± 47%     -36.0%     193991 ± 22%  time.voluntary_context_switches


2) Applied it on top of v4.6 (head commit: 2dcd0af5 ("Linux 4.6"))

The following is a comparison among the first bad commit's parent, the first
bad commit, v4.6, and your fix commit (c05f8814):

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/lkp-hsw01/lru-file-readtwice/vm-scalability

commit: 
  612e44939c3c77245ac80843c0c7876c8cf97282
  23047a96d7cfcfca1a6d026ecaec526ea4803e9e
  v4.6
  c05f8814641ceabbc628cd4edc7f64ff58498d5a

612e44939c3c7724 23047a96d7cfcfca1a6d026eca                       v4.6 c05f8814641ceabbc628cd4edc
---------------- -------------------------- -------------------------- --------------------------
         %stddev     %change         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \          |                \
  28384711 ±  0%     -23.8%   21621405 ±  0%     -18.9%   23013011 ±  0%     -19.2%   22937943 ±  0%  vm-scalability.throughput
   1854112 ±  0%      -7.7%    1711141 ±  0%      -5.2%    1757124 ±  0%      -4.9%    1762398 ±  0%  vm-scalability.time.involuntary_context_switches
     66021 ±  0%      -0.4%      65745 ±  0%      +1.8%      67231 ±  1%      +3.4%      68291 ±  0%  vm-scalability.time.minor_page_faults
      5279 ±  0%      -0.7%       5243 ±  0%      -1.6%       5197 ±  0%      -1.5%       5198 ±  0%  vm-scalability.time.percent_of_cpu_this_job_got
     16267 ±  0%      -0.6%      16173 ±  0%      -1.5%      16030 ±  0%      -1.4%      16032 ±  0%  vm-scalability.time.system_time
    176.03 ±  0%     -22.2%     136.95 ±  1%     -17.8%     144.66 ±  1%     -17.0%     146.15 ±  1%  vm-scalability.time.user_time
    302905 ±  2%     -31.2%     208386 ±  0%     -19.4%     244167 ±  1%     -19.3%     244584 ±  1%  vm-scalability.time.voluntary_context_switches
     23999 ±  2%      -5.9%      22575 ±  2%      -7.1%      22291 ±  3%      -7.0%      22319 ±  1%  meminfo.Mapped
      0.92 ±  2%     +51.0%       1.38 ±  2%     +96.8%       1.81 ±  0%     +95.5%       1.79 ±  0%  perf-profile.cycles-pp.kswapd
    754212 ±  1%     -29.2%     533832 ±  2%     -41.1%     444019 ±  4%     -42.8%     431345 ±  0%  softirqs.RCU
     20518 ±  2%      -8.1%      18866 ±  2%      -2.6%      19980 ±  3%      -2.9%      19926 ±  7%  vmstat.system.cs
     10574 ± 19%     +29.9%      13737 ±  8%      +1.6%      10740 ± 33%     +16.9%      12359 ± 17%  numa-meminfo.node0.Mapped
     13490 ± 13%     -36.6%       8549 ± 17%     -13.3%      11689 ± 29%     -26.5%       9912 ± 22%  numa-meminfo.node1.Mapped
    176.03 ±  0%     -22.2%     136.95 ±  1%     -17.8%     144.66 ±  1%     -17.0%     146.15 ±  1%  time.user_time
    302905 ±  2%     -31.2%     208386 ±  0%     -19.4%     244167 ±  1%     -19.3%     244584 ±  1%  time.voluntary_context_switches


Thanks,
Xiaolong
>
>Hi,
>
>Here is the comparison of the original first bad commit (23047a96d) and your new patch (063f6715e):
>vm-scalability.throughput improved by 11.3%.
>
>
>compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
>  gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/lkp-hsw01/lru-file-readtwice/vm-scalability
>
>commit: 
>  23047a96d7cfcfca1a6d026ecaec526ea4803e9e
>  063f6715e77a7be5770d6081fe6d7ca2437ac9f2
>
>23047a96d7cfcfca 063f6715e77a7be5770d6081fe
>---------------- --------------------------
>         %stddev     %change         %stddev
>             \          |                \
>  21621405 ±  0%     +11.3%   24069657 ±  2%  vm-scalability.throughput
>   1711141 ±  0%     +40.9%    2411083 ±  2%  vm-scalability.time.involuntary_context_switches
>      2747 ±  0%      +2.4%       2812 ±  0%  vm-scalability.time.maximum_resident_set_size
>      5243 ±  0%      -1.2%       5180 ±  0%  vm-scalability.time.percent_of_cpu_this_job_got
>    136.95 ±  1%     +13.6%     155.55 ±  0%  vm-scalability.time.user_time
>    208386 ±  0%     -71.5%      59394 ± 16%  vm-scalability.time.voluntary_context_switches
>      1.38 ±  2%     +21.7%       1.69 ±  2%  perf-profile.cycles-pp.kswapd
>    160522 ±  5%     -30.0%     112342 ±  2%  softirqs.SCHED
>      2536 ±  0%      +7.3%       2722 ±  2%  uptime.idle
>   1711141 ±  0%     +40.9%    2411083 ±  2%  time.involuntary_context_switches
>    136.95 ±  1%     +13.6%     155.55 ±  0%  time.user_time
>    208386 ±  0%     -71.5%      59394 ± 16%  time.voluntary_context_switches
>      1052 ± 13%   +1453.8%      16346 ± 39%  cpuidle.C1-HSW.usage
>      1045 ± 12%     -54.3%     477.50 ± 25%  cpuidle.C3-HSW.usage
> 5.719e+08 ±  1%     +17.9%  6.743e+08 ±  0%  cpuidle.C6-HSW.time
>  40424411 ±  2%     -97.3%    1076732 ± 99%  cpuidle.POLL.time
>      7179 ±  5%     -99.9%       6.50 ± 53%  cpuidle.POLL.usage
>      0.51 ±  8%     -40.6%       0.30 ± 13%  turbostat.CPU%c1
>      2.83 ±  2%     +30.5%       3.70 ±  0%  turbostat.CPU%c6
>      0.23 ± 79%    +493.4%       1.35 ±  2%  turbostat.Pkg%pc2
>    255.52 ±  0%      +3.3%     263.95 ±  0%  turbostat.PkgWatt
>     53.26 ±  0%     +14.9%      61.22 ±  0%  turbostat.RAMWatt
>   1836104 ±  0%     +13.3%    2079934 ±  4%  vmstat.memory.free
>      5.00 ±  0%     -70.0%       1.50 ± 33%  vmstat.procs.b
>    107.00 ±  0%      +8.4%     116.00 ±  2%  vmstat.procs.r
>     18866 ±  2%     +40.1%      26436 ± 13%  vmstat.system.cs
>     69056 ±  0%     +11.8%      77219 ±  1%  vmstat.system.in
>  31628132 ±  0%     +80.9%   57224963 ±  0%  meminfo.Active
>  31294504 ±  0%     +81.7%   56876042 ±  0%  meminfo.Active(file)
>    142271 ±  6%     +11.2%     158138 ±  5%  meminfo.DirectMap4k
>  30612825 ±  0%     -87.2%    3915695 ±  0%  meminfo.Inactive
>  30562772 ±  0%     -87.4%    3862631 ±  0%  meminfo.Inactive(file)
>     15635 ±  1%     +38.0%      21572 ±  8%  meminfo.KernelStack
>     22575 ±  2%      +7.7%      24316 ±  4%  meminfo.Mapped
>   1762372 ±  3%     +12.2%    1976873 ±  3%  meminfo.MemFree
>    847557 ±  0%    +105.5%    1741958 ±  8%  meminfo.SReclaimable
>    946378 ±  0%     +95.1%    1846370 ±  8%  meminfo.Slab
>
>Thanks,
>Xiaolong
>
>>From b535c630fd8954865b7536c915c3916beb3b4830 Mon Sep 17 00:00:00 2001
>>From: Johannes Weiner <hannes@cmpxchg.org>
>>Date: Mon, 23 May 2016 16:14:24 -0400
>>Subject: [PATCH] mm: fix vm-scalability regression in workingset_activation()
>>
>>23047a96d7cf ("mm: workingset: per-cgroup cache thrash detection")
>>added cgroup lookup and pinning overhead to the LRU activation path,
>>which the vm-scalability benchmark is particularly sensitive to.
>>
>>Inline the lookup functions to eliminate calls. Furthermore, since
>>activations are not moved when pages are moved between memcgs, we
>>don't need the full page->mem_cgroup locking; holding the RCU lock is
>>enough to prevent the memcg from being freed.
>>
>>Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
>>---
>> include/linux/memcontrol.h | 43 ++++++++++++++++++++++++++++++++++++++++++-
>> include/linux/mm.h         |  8 ++++++++
>> mm/memcontrol.c            | 42 ------------------------------------------
>> mm/workingset.c            | 10 ++++++----
>> 4 files changed, 56 insertions(+), 47 deletions(-)
>>
>>diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
>>index a805474df4ab..0bb36cf89bf6 100644
>>--- a/include/linux/memcontrol.h
>>+++ b/include/linux/memcontrol.h
>>@@ -306,7 +306,48 @@ void mem_cgroup_uncharge_list(struct list_head *page_list);
>> 
>> void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
>> 
>>-struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *);
>>+static inline struct mem_cgroup_per_zone *
>>+mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone)
>>+{
>>+	int nid = zone_to_nid(zone);
>>+	int zid = zone_idx(zone);
>>+
>>+	return &memcg->nodeinfo[nid]->zoneinfo[zid];
>>+}
>>+
>>+/**
>>+ * mem_cgroup_zone_lruvec - get the lru list vector for a zone and memcg
>>+ * @zone: zone of the wanted lruvec
>>+ * @memcg: memcg of the wanted lruvec
>>+ *
>>+ * Returns the lru list vector holding pages for the given @zone and
>>+ * @mem.  This can be the global zone lruvec, if the memory controller
>>+ * is disabled.
>>+ */
>>+static inline struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
>>+						    struct mem_cgroup *memcg)
>>+{
>>+	struct mem_cgroup_per_zone *mz;
>>+	struct lruvec *lruvec;
>>+
>>+	if (mem_cgroup_disabled()) {
>>+		lruvec = &zone->lruvec;
>>+		goto out;
>>+	}
>>+
>>+	mz = mem_cgroup_zone_zoneinfo(memcg, zone);
>>+	lruvec = &mz->lruvec;
>>+out:
>>+	/*
>>+	 * Since a node can be onlined after the mem_cgroup was created,
>>+	 * we have to be prepared to initialize lruvec->zone here;
>>+	 * and if offlined then reonlined, we need to reinitialize it.
>>+	 */
>>+	if (unlikely(lruvec->zone != zone))
>>+		lruvec->zone = zone;
>>+	return lruvec;
>>+}
>>+
>> struct lruvec *mem_cgroup_page_lruvec(struct page *, struct zone *);
>> 
>> bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
>>diff --git a/include/linux/mm.h b/include/linux/mm.h
>>index b530c99e8e81..a9dd54e196a7 100644
>>--- a/include/linux/mm.h
>>+++ b/include/linux/mm.h
>>@@ -943,11 +943,19 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
>> {
>> 	return page->mem_cgroup;
>> }
>>+static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
>>+{
>>+	return READ_ONCE(page->mem_cgroup);
>>+}
>> #else
>> static inline struct mem_cgroup *page_memcg(struct page *page)
>> {
>> 	return NULL;
>> }
>>+static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
>>+{
>>+	return NULL;
>>+}
>> #endif
>> 
>> /*
>>diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>index b3f16ab4b431..f65e5e527864 100644
>>--- a/mm/memcontrol.c
>>+++ b/mm/memcontrol.c
>>@@ -323,15 +323,6 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
>> 
>> #endif /* !CONFIG_SLOB */
>> 
>>-static struct mem_cgroup_per_zone *
>>-mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone)
>>-{
>>-	int nid = zone_to_nid(zone);
>>-	int zid = zone_idx(zone);
>>-
>>-	return &memcg->nodeinfo[nid]->zoneinfo[zid];
>>-}
>>-
>> /**
>>  * mem_cgroup_css_from_page - css of the memcg associated with a page
>>  * @page: page of interest
>>@@ -944,39 +935,6 @@ static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
>> 	     iter = mem_cgroup_iter(NULL, iter, NULL))
>> 
>> /**
>>- * mem_cgroup_zone_lruvec - get the lru list vector for a zone and memcg
>>- * @zone: zone of the wanted lruvec
>>- * @memcg: memcg of the wanted lruvec
>>- *
>>- * Returns the lru list vector holding pages for the given @zone and
>>- * @mem.  This can be the global zone lruvec, if the memory controller
>>- * is disabled.
>>- */
>>-struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
>>-				      struct mem_cgroup *memcg)
>>-{
>>-	struct mem_cgroup_per_zone *mz;
>>-	struct lruvec *lruvec;
>>-
>>-	if (mem_cgroup_disabled()) {
>>-		lruvec = &zone->lruvec;
>>-		goto out;
>>-	}
>>-
>>-	mz = mem_cgroup_zone_zoneinfo(memcg, zone);
>>-	lruvec = &mz->lruvec;
>>-out:
>>-	/*
>>-	 * Since a node can be onlined after the mem_cgroup was created,
>>-	 * we have to be prepared to initialize lruvec->zone here;
>>-	 * and if offlined then reonlined, we need to reinitialize it.
>>-	 */
>>-	if (unlikely(lruvec->zone != zone))
>>-		lruvec->zone = zone;
>>-	return lruvec;
>>-}
>>-
>>-/**
>>  * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page
>>  * @page: the page
>>  * @zone: zone of the page
>>diff --git a/mm/workingset.c b/mm/workingset.c
>>index 8a75f8d2916a..8252de4566e9 100644
>>--- a/mm/workingset.c
>>+++ b/mm/workingset.c
>>@@ -305,9 +305,10 @@ bool workingset_refault(void *shadow)
>>  */
>> void workingset_activation(struct page *page)
>> {
>>+	struct mem_cgroup *memcg;
>> 	struct lruvec *lruvec;
>> 
>>-	lock_page_memcg(page);
>>+	rcu_read_lock();
>> 	/*
>> 	 * Filter non-memcg pages here, e.g. unmap can call
>> 	 * mark_page_accessed() on VDSO pages.
>>@@ -315,12 +316,13 @@ void workingset_activation(struct page *page)
>> 	 * XXX: See workingset_refault() - this should return
>> 	 * root_mem_cgroup even for !CONFIG_MEMCG.
>> 	 */
>>-	if (!mem_cgroup_disabled() && !page_memcg(page))
>>+	memcg = page_memcg_rcu(page);
>>+	if (!mem_cgroup_disabled() && !memcg)
>> 		goto out;
>>-	lruvec = mem_cgroup_zone_lruvec(page_zone(page), page_memcg(page));
>>+	lruvec = mem_cgroup_zone_lruvec(page_zone(page), memcg);
>> 	atomic_long_inc(&lruvec->inactive_age);
>> out:
>>-	unlock_page_memcg(page);
>>+	rcu_read_unlock();
>> }
>> 
>> /*
>>-- 
>>2.8.2
>>
>_______________________________________________
>LKP mailing list
>LKP@lists.01.org
>https://lists.01.org/mailman/listinfo/lkp
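
For readers following along outside the kernel tree, below is a minimal stand-alone C sketch of the locking change the quoted patch makes in workingset_activation(): lock_page_memcg()/unlock_page_memcg() is replaced by an RCU read-side critical section around a READ_ONCE() load of page->mem_cgroup. The RCU primitives and the struct page / struct mem_cgroup types are stubbed so the sketch compiles on its own, and the lruvec lookup is collapsed into the memcg stub; this illustrates the pattern only and is not the kernel code itself.

/* Stand-alone illustration of the RCU-based pattern from the patch above.
 * The kernel's real primitives come from <linux/rcupdate.h>, <linux/mm.h>
 * and <linux/memcontrol.h>; everything below is a stub for compilation.
 */
#include <stdio.h>

#define READ_ONCE(x) (*(volatile __typeof__(x) *)&(x))   /* stand-in for the kernel macro */

static void rcu_read_lock(void)   { /* no-op stub */ }
static void rcu_read_unlock(void) { /* no-op stub */ }

struct mem_cgroup { long inactive_age; };             /* stub; the real struct lives in memcontrol.h */
struct page       { struct mem_cgroup *mem_cgroup; }; /* stub; the real struct lives in mm_types.h   */

/* Pattern after the patch: no lock_page_memcg(), just an RCU read-side
 * section. Activations never follow a page across memcgs, so pinning the
 * memcg for the duration of the increment is all that is needed.
 * (The real code bumps lruvec->inactive_age found via
 * mem_cgroup_zone_lruvec(); that lookup is collapsed into the stub here.)
 */
static void workingset_activation_sketch(struct page *page)
{
	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = READ_ONCE(page->mem_cgroup);   /* page_memcg_rcu() in the patch */
	if (memcg)
		memcg->inactive_age++;
	rcu_read_unlock();
}

int main(void)
{
	struct mem_cgroup cg = { .inactive_age = 0 };
	struct page pg = { .mem_cgroup = &cg };

	workingset_activation_sketch(&pg);
	printf("inactive_age = %ld\n", cg.inactive_age);   /* prints 1 */
	return 0;
}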


Thread overview: 4+ messages
2016-05-17  4:58 [mm] 23047a96d7: vm-scalability.throughput -23.8% regression kernel test robot
2016-05-23 20:46 ` Johannes Weiner
2016-05-25  6:06   ` Ye Xiaolong
2016-05-27  8:11     ` [LKP] " Ye Xiaolong
