* [slub shrink] 0f6934bf16: +191.9% vmstat.system.cs
@ 2014-01-16  3:07 kernel test robot
  2014-01-16 19:12 ` Dave Hansen
  0 siblings, 1 reply; 5+ messages in thread
From: kernel test robot @ 2014-01-16  3:07 UTC (permalink / raw)
  To: Dave Hansen; +Cc: LKML, lkp

Hi Dave,

We noticed increased context switches in the will-it-scale/read2
test case. The test box is a 4S IVB-EX server. The other notable
change is an increased oltp.request_latency_max_ms on an NHM desktop.

commit 0f6934bf1695682e7ced973f67d57ab9e124c325
Author:     Dave Hansen <dave.hansen@intel.com>
AuthorDate: Mon Jan 13 07:40:46 2014 -0800

    for Fenguang to test

git tree branch is

        https://github.com/hansendc/linux.git  slub-reshrink-for-Fengguang-20140113

% compare -ab 0f6934bf1695682e7ced973f67d57ab9e124c325~ 0f6934bf1695682e7ced973f67d57ab9e124c325
9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
     15.26 ~ 3%     +19.7%      18.26 ~ 4%  nhm-white/sysbench/oltp/600s-100%-1000000
     15.26          +19.7%      18.26       TOTAL oltp.request_latency_max_ms

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
   8235933 ~ 2%     +80.6%   14872911 ~ 3%  lkp-sbx04/micro/will-it-scale/read2
   8235933          +80.6%   14872911       TOTAL interrupts.RES

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
    161531 ~ 7%    +191.9%     471544 ~ 9%  lkp-sbx04/micro/will-it-scale/read2
    161531         +191.9%     471544       TOTAL vmstat.system.cs

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
     32943 ~ 1%     +71.8%      56599 ~ 3%  lkp-sbx04/micro/will-it-scale/read2
     32943          +71.8%      56599       TOTAL vmstat.system.in

                                   vmstat.system.cs

   600000 ++---------O------------------------------------------------------+
   550000 ++            O                                                   |
          |        O       O  O             O  O           O  O             |
   500000 O+ O  O                                                           |
   450000 ++                     O  O  O          O  O                      |
   400000 ++                              O             O                   |
   350000 ++                                                                |
          |                                                                 |
   300000 ++                                                                |
   250000 ++                                                                |
   200000 ++      .*                          .*..                          |
   150000 *+.  .*.  :   *..*..*..*..*..*.. .*.      .*..  .*..*..*.*..  .*..*
          |  *.     : ..                  *       *.    *.            *.    |
   100000 ++         *                                                      |
    50000 ++----------------------------------------------------------------+

Thanks,
Fengguang


* Re: [slub shrink] 0f6934bf16: +191.9% vmstat.system.cs
  2014-01-16  3:07 [slub shrink] 0f6934bf16: +191.9% vmstat.system.cs kernel test robot
@ 2014-01-16 19:12 ` Dave Hansen
  2014-01-17  0:26   ` Fengguang Wu
                     ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Dave Hansen @ 2014-01-16 19:12 UTC (permalink / raw)
  To: kernel test robot; +Cc: LKML, lkp

On 01/15/2014 07:07 PM, kernel test robot wrote:
> 9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
> ---------------  -------------------------  
>    8235933 ~ 2%     +80.6%   14872911 ~ 3%  lkp-sbx04/micro/will-it-scale/read2
>    8235933          +80.6%   14872911       TOTAL interrupts.RES
> 
> 9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
> ---------------  -------------------------  
>     161531 ~ 7%    +191.9%     471544 ~ 9%  lkp-sbx04/micro/will-it-scale/read2
>     161531         +191.9%     471544       TOTAL vmstat.system.cs
> 
> 9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
> ---------------  -------------------------  
>      32943 ~ 1%     +71.8%      56599 ~ 3%  lkp-sbx04/micro/will-it-scale/read2
>      32943          +71.8%      56599       TOTAL vmstat.system.in

I suspect that something is wrong with that system.  My 160-cpu system
does about 40,000 interrupts/sec and ~4300 context switches/sec when
running 160 read2_processes.  I wonder if you're hitting swap or the
dirty limits or something.  Are you running it with way more threads
than it has CPUs?

Also, are those will-it-scale tests the threaded or process versions?
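
(For reference, a rough sketch of how those per-second rates can be
sampled outside the test harness, assuming only the standard "ctxt" and
"intr" counters in /proc/stat; the 5-second window is arbitrary:)

    import time

    def read_counters():
        # /proc/stat exposes cumulative context switches ("ctxt") and
        # interrupts ("intr") since boot.
        ctxt = intr = 0
        with open('/proc/stat') as f:
            for line in f:
                if line.startswith('ctxt '):
                    ctxt = int(line.split()[1])
                elif line.startswith('intr '):
                    intr = int(line.split()[1])
        return ctxt, intr

    c0, i0 = read_counters()
    time.sleep(5)                      # arbitrary sampling window
    c1, i1 = read_counters()
    print('cs/sec: %d  in/sec: %d' % ((c1 - c0) / 5, (i1 - i0) / 5))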


* Re: [slub shrink] 0f6934bf16: +191.9% vmstat.system.cs
  2014-01-16 19:12 ` Dave Hansen
@ 2014-01-17  0:26   ` Fengguang Wu
  2014-01-17 13:00   ` Fengguang Wu
  2014-01-29  8:26   ` Fengguang Wu
  2 siblings, 0 replies; 5+ messages in thread
From: Fengguang Wu @ 2014-01-17  0:26 UTC (permalink / raw)
  To: Dave Hansen; +Cc: LKML, lkp


On Thu, Jan 16, 2014 at 11:12:19AM -0800, Dave Hansen wrote:
> On 01/15/2014 07:07 PM, kernel test robot wrote:
> > 9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
> > ---------------  -------------------------  
> >    8235933 ~ 2%     +80.6%   14872911 ~ 3%  lkp-sbx04/micro/will-it-scale/read2
> >    8235933          +80.6%   14872911       TOTAL interrupts.RES
> > 
> > 9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
> > ---------------  -------------------------  
> >     161531 ~ 7%    +191.9%     471544 ~ 9%  lkp-sbx04/micro/will-it-scale/read2
> >     161531         +191.9%     471544       TOTAL vmstat.system.cs
> > 
> > 9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
> > ---------------  -------------------------  
> >      32943 ~ 1%     +71.8%      56599 ~ 3%  lkp-sbx04/micro/will-it-scale/read2
> >      32943          +71.8%      56599       TOTAL vmstat.system.in
> 
> I suspect that something is wrong with that system.  My 160-cpu system
> does about 40,000 interrupts/sec and ~4300 context switches/sec when
> running 160 read2_processes.  I wonder if you're hitting swap or the
> dirty limits or something.  Are you running it with way more threads
> than it has CPUs?

lkp-sbx04 has 64 CPU threads, and I'm running will-it-scale with
thread counts 1 16 24 32 48 64 8.

> Also, are those will-it-scale tests the threaded or process versions?

Hansen, I'm running will-it-scale with parameters

./runtest.py read2 16 1 16 24 32 48 64 8

This runs both the threaded and process tests. runtest.py is modified
to accept a custom list of thread counts to run; the patch is attached.

The duration and thread counts passed to runtest.py are computed per
machine and differ for machines with different numbers of CPUs. The
goal of the computation is to keep the test wall time at roughly 5
minutes on every machine.

On a system with 120 CPUs, the numbers would be:

./runtest.py brk1 16 1 120 15 30 45 60 90
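
(For illustration, a minimal sketch of how such an argument list could
be derived. It is a plausible reconstruction from the two examples above
(one single-task run plus fixed fractions of the CPU count, with a fixed
16-second duration), not the exact script we run:)

    def will_it_scale_args(nr_cpu, duration=16):
        # One single-task run plus fixed fractions of the CPU count;
        # this reproduces the 64-CPU and 120-CPU examples above
        # (modulo ordering).
        fractions = (1.0/8, 1.0/4, 3.0/8, 1.0/2, 3.0/4, 1.0)
        threads = sorted(set([1] + [int(nr_cpu * f) for f in fractions]))
        # 7 thread counts x 2 modes (process/thread) x 16s of measurement
        # is roughly 4 minutes, close to the ~5 minute wall-time target
        # once per-run setup is included.
        return [duration] + threads

    # e.g. "./runtest.py read2 16 1 8 16 24 32 48 64" on a 64-CPU box
    print(' '.join(str(n) for n in will_it_scale_args(64)))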

Thanks,
Fengguang

[-- Attachment #2: 0001-accept-custom-list-of-threads-to-run.patch --]
[-- Type: text/x-diff, Size: 1323 bytes --]

From 882d7cdc4387912e1fe6a3c9e4c42cdb0ce78c23 Mon Sep 17 00:00:00 2001
From: Fengguang Wu <fengguang.wu@intel.com>
Date: Fri, 17 Jan 2014 08:11:52 +0800
Subject: [PATCH] accept custom list of threads to run

Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
---
 runtest.py |   13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/runtest.py b/runtest.py
index 14d2467..8d4a2cf 100755
--- a/runtest.py
+++ b/runtest.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python2
+#!/usr/bin/python
 
 import time
 import subprocess
@@ -48,12 +48,12 @@ class linux_stat():
 		return 1.0 * idle / (idle + busy)
 
 
-duration=5
-
-if len(sys.argv) != 2:
-	print >> sys.stderr, 'Usage: runtest.py <testcase>'
+if len(sys.argv) < 4:
+	print >> sys.stderr, 'Usage: runtest.py <testcase> <duration> <threads...>'
 	sys.exit(1)
 cmd = sys.argv[1]
+duration = int(sys.argv[2])
+threads  = sys.argv[3:]
 
 nr_cores=0
 r = re.compile('^processor')
@@ -87,7 +87,8 @@ if arch == 'ppc64':
 print 'tasks,processes,processes_idle,threads,threads_idle,linear'
 print '0,0,100,0,100,0'
 
-for i in range(1, nr_cores+1):
+for i in threads:
+	i = int(i)
 	c = './%s_processes -t %d -s %d' % (cmd, i, duration)
 	before = linux_stat()
 	pipe = subprocess.Popen(setarch + ' ' + c, shell=True, stdout=subprocess.PIPE).stdout
-- 
1.7.10.4



* Re: [slub shrink] 0f6934bf16: +191.9% vmstat.system.cs
  2014-01-16 19:12 ` Dave Hansen
  2014-01-17  0:26   ` Fengguang Wu
@ 2014-01-17 13:00   ` Fengguang Wu
  2014-01-29  8:26   ` Fengguang Wu
  2 siblings, 0 replies; 5+ messages in thread
From: Fengguang Wu @ 2014-01-17 13:00 UTC (permalink / raw)
  To: Dave Hansen; +Cc: LKML, lkp

Hi Dave,

I retested the will-it-scale/read2 case with the perf profile enabled,
and here are the new comparison results. They show increased overhead
in shmem_getpage_gfp(). If you'd like me to collect more data, feel
free to tell me.

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
     26460 ~95%    +136.3%      62514 ~ 1%   numa-vmstat.node2.numa_other
     62927 ~ 0%     -85.9%       8885 ~ 2%   numa-vmstat.node1.numa_other
   8363465 ~ 4%     +81.9%   15210930 ~ 2%   interrupts.RES
      3.96 ~ 6%     +42.8%       5.66 ~ 4%   perf-profile.cpu-cycles.find_lock_page.shmem_getpage_gfp.shmem_file_aio_read.do_sync_read.vfs_read
    209881 ~11%     +35.2%     283704 ~ 9%   numa-vmstat.node1.numa_local
   1795727 ~ 7%     +52.1%    2730750 ~17%   interrupts.LOC
         7 ~ 0%     -33.3%          4 ~10%   vmstat.procs.b
     18461 ~12%     -21.1%      14569 ~ 2%   numa-meminfo.node1.SUnreclaim
      4614 ~12%     -21.1%       3641 ~ 2%   numa-vmstat.node1.nr_slab_unreclaimable
       491 ~ 2%     -25.9%        363 ~ 6%   proc-vmstat.nr_tlb_remote_flush
     14595 ~ 8%     -17.1%      12093 ~16%   numa-meminfo.node2.AnonPages
      3648 ~ 8%     -17.1%       3025 ~16%   numa-vmstat.node2.nr_anon_pages
       277 ~12%     -14.4%        237 ~ 8%   numa-vmstat.node2.nr_page_table_pages
    202594 ~ 8%     -20.5%     161033 ~12%   softirqs.SCHED
      1104 ~11%     -14.0%        950 ~ 8%   numa-meminfo.node2.PageTables
      5201 ~ 7%     +21.0%       6292 ~ 3%   numa-vmstat.node0.nr_slab_unreclaimable
     20807 ~ 7%     +21.0%      25171 ~ 3%   numa-meminfo.node0.SUnreclaim
       975 ~ 8%     +16.7%       1138 ~ 5%   numa-meminfo.node1.PageTables
       245 ~ 7%     +16.5%        285 ~ 5%   numa-vmstat.node1.nr_page_table_pages
    109964 ~ 4%     -16.7%      91589 ~ 1%   numa-numastat.node0.local_node
     20433 ~ 4%     -16.3%      17104 ~ 2%   proc-vmstat.pgalloc_dma32
    112051 ~ 4%     -16.4%      93676 ~ 1%   numa-numastat.node0.numa_hit
    273320 ~ 8%     -14.4%     234064 ~ 3%   numa-vmstat.node2.numa_local
     31480 ~ 4%     +13.9%      35852 ~ 5%   numa-meminfo.node0.Slab
    917358 ~ 2%     +12.5%    1031687 ~ 2%   softirqs.TIMER
       513 ~ 0%     +37.7%        706 ~33%   numa-meminfo.node2.Mlocked
   8404395 ~13%    +256.9%   29992039 ~ 9%   time.voluntary_context_switches
    157154 ~17%    +201.7%     474102 ~ 8%   vmstat.system.cs
     36948 ~ 3%     +67.7%      61963 ~ 2%   vmstat.system.in
      2274 ~ 0%     +13.7%       2584 ~ 1%   time.system_time
       769 ~ 0%     +13.5%        873 ~ 1%   time.percent_of_cpu_this_job_got
      4359 ~ 2%     +13.6%       4951 ~ 3%   time.involuntary_context_switches
       104 ~ 3%     +10.2%        115 ~ 2%   time.user_time
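
(In case it helps with reproducing the shmem_getpage_gfp numbers above,
here is a rough sketch of the kind of sampling involved; the actual LKP
profiling setup differs, and only the standard perf record/report
interface is assumed:)

    import subprocess

    # Sample the whole system with call graphs for 10 seconds while
    # will-it-scale/read2 is running, then grep the cycles profile for
    # shmem_getpage_gfp.
    subprocess.check_call(['perf', 'record', '-a', '-g', '--', 'sleep', '10'])
    report = subprocess.check_output(['perf', 'report', '--stdio'],
                                     universal_newlines=True)
    for line in report.splitlines():
        if 'shmem_getpage_gfp' in line:
            print(line)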

Thanks,
Fengguang


* Re: [slub shrink] 0f6934bf16: +191.9% vmstat.system.cs
  2014-01-16 19:12 ` Dave Hansen
  2014-01-17  0:26   ` Fengguang Wu
  2014-01-17 13:00   ` Fengguang Wu
@ 2014-01-29  8:26   ` Fengguang Wu
  2 siblings, 0 replies; 5+ messages in thread
From: Fengguang Wu @ 2014-01-29  8:26 UTC (permalink / raw)
  To: Dave Hansen; +Cc: LKML, lkp

Hi Dave,

I got more complete results for

        https://github.com/hansendc/linux.git slub-reshrink-for-Fengguang-20140113

The results look mostly good. There are regressions in

- netperf.Throughput_Mbps
- dbench.throughput-MB/sec
- xfstests.generic.256.seconds

However, the others are all improvements.


9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
      5766 ~42%     -79.3%       1196 ~ 8%  TOTAL fileio.request_latency_max_ms
        11 ~44%     -73.5%          3 ~27%  TOTAL xfstests.generic.275.seconds
       132 ~37%     -56.8%         57 ~10%  TOTAL xfstests.xfs.229.seconds
        13 ~47%     -60.0%          5 ~ 8%  TOTAL xfstests.xfs.206.seconds
    513.33 ~ 9%      +8.2%     555.35 ~ 0%  TOTAL pigz.throughput
        44 ~ 9%     +29.3%         57 ~16%  TOTAL xfstests.generic.256.seconds
     32321 ~ 7%     +14.4%      36987 ~ 9%  TOTAL fileio.requests_per_sec
     15.26 ~ 3%     +19.7%      18.26 ~ 4%  TOTAL oltp.request_latency_max_ms
     64742 ~ 6%     -10.1%      58179 ~ 4%  TOTAL netperf.Throughput_Mbps
        98 ~ 0%      +2.7%        101 ~ 0%  TOTAL ebizzy.throughput.per_thread.max
    167137 ~ 4%      +5.7%     176639 ~ 1%  TOTAL netperf.Throughput_tps
   2276278 ~ 1%      +0.2%    2279724 ~ 1%  TOTAL aim7.2000.jobs-per-min
     11213 ~ 0%      +2.1%      11454 ~ 0%  TOTAL ebizzy.throughput
    908470 ~ 0%      +0.3%     911314 ~ 0%  TOTAL hackbench.throughput
      8228 ~ 0%      -1.1%       8142 ~ 0%  TOTAL dbench.throughput-MB/sec
    161604 ~ 5%     +11.0%     179405 ~ 4%  TOTAL iostat.md0.wkB/s

Per test case numbers:

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
      5766 ~42%     -79.3%       1196 ~ 8%  kbuildx/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrw-sync
      5766 ~42%     -79.3%       1196 ~ 8%  TOTAL fileio.request_latency_max_ms

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
        11 ~44%     -73.5%          3 ~27%  vpx/micro/xfstests/4HDD-btrfs-generic-mid
        11 ~44%     -73.5%          3 ~27%  TOTAL xfstests.generic.275.seconds

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
       132 ~37%     -56.8%         57 ~10%  vpx/micro/xfstests/4HDD-xfs-xfs
       132 ~37%     -56.8%         57 ~10%  TOTAL xfstests.xfs.229.seconds

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
        13 ~47%     -60.0%          5 ~ 8%  vpx/micro/xfstests/4HDD-xfs-xfs
        13 ~47%     -60.0%          5 ~ 8%  TOTAL xfstests.xfs.206.seconds

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
    464.80 ~ 7%      +5.6%     490.73 ~ 0%  grantley/micro/pigz/100%
     48.53 ~24%     +33.1%      64.62 ~ 6%  kbuildx/micro/pigz/100%
    513.33 ~ 9%      +8.2%     555.35 ~ 0%  TOTAL pigz.throughput

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
        18 ~10%     +41.8%         26 ~28%  vpx/micro/xfstests/4HDD-ext4-generic-mid
        26 ~ 9%     +20.5%         31 ~ 5%  vpx/micro/xfstests/4HDD-xfs-generic-mid
        44 ~ 9%     +29.3%         57 ~16%  TOTAL xfstests.generic.256.seconds

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
       156 ~ 0%      -1.7%        153 ~ 0%  kbuildx/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-rndrw-sync
     10766 ~ 5%      +8.5%      11677 ~ 6%  kbuildx/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqrewr-sync
     10368 ~15%     +30.8%      13560 ~13%  kbuildx/sysbench/fileio/600s-100%-1HDD-btrfs-64G-1024-seqwr-sync
       124 ~ 3%     +36.2%        169 ~11%  kbuildx/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndrd-sync
       943 ~ 2%     +84.0%       1736 ~40%  kbuildx/sysbench/fileio/600s-100%-1HDD-ext4-64G-1024-rndwr-sync
      9962 ~ 0%      -2.7%       9689 ~ 1%  kbuildx/sysbench/fileio/600s-100%-1HDD-xfs-64G-1024-seqrd-sync
     32321 ~ 7%     +14.4%      36987 ~ 9%  TOTAL fileio.requests_per_sec

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
     15.26 ~ 3%     +19.7%      18.26 ~ 4%  nhm-white/sysbench/oltp/600s-100%-1000000
     15.26 ~ 3%     +19.7%      18.26 ~ 4%  TOTAL oltp.request_latency_max_ms

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
      7715 ~ 9%     -22.2%       6002 ~16%  kbuildx/micro/netperf/120s-200%-TCP_MAERTS
     17374 ~11%     -16.0%      14591 ~ 6%  kbuildx/micro/netperf/120s-200%-TCP_SENDFILE
      1650 ~ 0%      -0.7%       1639 ~ 0%  lkp-a04/micro/netperf/120s-200%-TCP_SENDFILE
       559 ~ 0%      -1.3%        552 ~ 0%  lkp-a04/micro/netperf/120s-200%-TCP_STREAM
      1803 ~18%     -45.9%        975 ~ 4%  lkp-nex04/micro/netperf/120s-200%-TCP_MAERTS
      5789 ~ 1%      -4.1%       5551 ~ 0%  lkp-nex04/micro/netperf/120s-200%-TCP_SENDFILE
      3779 ~ 4%     -16.2%       3168 ~ 2%  lkp-sb03/micro/netperf/120s-200%-TCP_MAERTS
      9067 ~ 1%      +2.6%       9299 ~ 0%  lkp-sb03/micro/netperf/120s-200%-TCP_SENDFILE
      3815 ~14%     -29.2%       2699 ~ 6%  lkp-sbx04/micro/netperf/120s-200%-TCP_MAERTS
      7911 ~ 1%      +7.5%       8507 ~ 0%  lkp-sbx04/micro/netperf/120s-200%-TCP_SENDFILE
      3215 ~ 0%      -2.1%       3149 ~ 1%  lkp-t410/micro/netperf/120s-200%-TCP_SENDFILE
      2061 ~ 0%      -0.8%       2044 ~ 0%  lkp-t410/micro/netperf/120s-200%-TCP_STREAM
     64742 ~ 6%     -10.1%      58179 ~ 4%  TOTAL netperf.Throughput_Mbps

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
        98 ~ 0%      +2.7%        101 ~ 0%  lkp-nex04/micro/ebizzy/200%-100-10
        98 ~ 0%      +2.7%        101 ~ 0%  TOTAL ebizzy.throughput.per_thread.max

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
     26638 ~27%     +27.2%      33873 ~ 7%  kbuildx/micro/netperf/120s-200%-TCP_RR
       945 ~ 0%      -0.8%        937 ~ 0%  lkp-a04/micro/netperf/120s-200%-TCP_CRR
      5622 ~ 0%      +1.8%       5723 ~ 0%  lkp-a04/micro/netperf/120s-200%-TCP_RR
      6984 ~ 0%      -1.4%       6887 ~ 0%  lkp-a04/micro/netperf/120s-200%-UDP_RR
     20449 ~ 0%      -3.4%      19746 ~ 0%  lkp-nex04/micro/netperf/120s-200%-TCP_RR
     10584 ~ 1%      +3.9%      10998 ~ 1%  lkp-nex04/micro/netperf/120s-200%-UDP_RR
      5161 ~ 0%      +1.0%       5212 ~ 0%  lkp-sb03/micro/netperf/120s-200%-TCP_CRR
     30302 ~ 0%      +5.0%      31831 ~ 0%  lkp-sb03/micro/netperf/120s-200%-TCP_RR
      4425 ~ 0%      +0.8%       4462 ~ 0%  lkp-sbx04/micro/netperf/120s-200%-TCP_CRR
     28254 ~ 0%      +4.9%      29631 ~ 0%  lkp-sbx04/micro/netperf/120s-200%-TCP_RR
      2060 ~ 0%      -0.8%       2044 ~ 0%  lkp-t410/micro/netperf/120s-200%-TCP_CRR
     11493 ~ 0%      -2.7%      11181 ~ 0%  lkp-t410/micro/netperf/120s-200%-TCP_RR
     14217 ~ 0%      -0.8%      14110 ~ 0%  lkp-t410/micro/netperf/120s-200%-UDP_RR
    167137 ~ 4%      +5.7%     176639 ~ 1%  TOTAL netperf.Throughput_tps

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
    116376 ~ 0%      +2.4%     119126 ~ 0%  lkp-ne04/micro/aim7/brk_test
     22916 ~ 0%      -0.9%      22700 ~ 0%  lkp-ne04/micro/aim7/fork_test
    615267 ~ 3%      +3.2%     634727 ~ 3%  lkp-ne04/micro/aim7/misc_rtns_1
    293454 ~ 0%      -0.6%     291796 ~ 0%  lkp-snb01/micro/aim7/dbase
    309821 ~ 0%      -0.4%     308453 ~ 0%  lkp-snb01/micro/aim7/shared
     69669 ~ 0%      +1.5%      70711 ~ 0%  nhm-white/micro/aim7/brk_test
    120211 ~ 2%      -8.7%     109778 ~ 4%  nhm-white/micro/aim7/creat-clo
     77083 ~ 0%      +0.8%      77709 ~ 0%  nhm-white/micro/aim7/exec_test
    371798 ~ 0%      -2.9%     360987 ~ 1%  nhm-white/micro/aim7/link_test
     90481 ~ 0%      +0.4%      90805 ~ 0%  nhm-white/micro/aim7/shell_rtns_1
    189196 ~ 0%      +2.0%     192926 ~ 0%  nhm-white/micro/aim7/signal_test
   2276278 ~ 1%      +0.2%    2279724 ~ 1%  TOTAL aim7.2000.jobs-per-min

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
     11213 ~ 0%      +2.1%      11454 ~ 0%  lkp-nex04/micro/ebizzy/200%-100-10
     11213 ~ 0%      +2.1%      11454 ~ 0%  TOTAL ebizzy.throughput

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
    172715 ~ 0%      +0.9%     174317 ~ 0%  lkp-snb01/micro/hackbench/0-___nr_node-1__-0-___nr_cpu-1__-1600%-threads-pipe
    157456 ~ 0%      -1.3%     155387 ~ 0%  lkp-snb01/micro/hackbench/1600%-process-socket
    172416 ~ 0%      +1.5%     175030 ~ 0%  lkp-snb01/micro/hackbench/1600%-threads-pipe
    161696 ~ 0%      -0.2%     161303 ~ 0%  lkp-snb01/micro/hackbench/1600%-threads-socket
     83983 ~ 0%      +1.8%      85523 ~ 1%  xps2/micro/hackbench/1600%-process-pipe
     41791 ~ 0%      -0.7%      41520 ~ 0%  xps2/micro/hackbench/1600%-process-socket
     77821 ~ 0%      +1.1%      78657 ~ 0%  xps2/micro/hackbench/1600%-threads-pipe
     40588 ~ 0%      -2.5%      39575 ~ 0%  xps2/micro/hackbench/1600%-threads-socket
    908470 ~ 0%      +0.3%     911314 ~ 0%  TOTAL hackbench.throughput

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
      8228 ~ 0%      -1.1%       8142 ~ 0%  nhm8/micro/dbench/100%
      8228 ~ 0%      -1.1%       8142 ~ 0%  TOTAL dbench.throughput-MB/sec

9a0bb2966efbf30  0f6934bf1695682e7ced973f6  
---------------  -------------------------  
    161604 ~ 5%     +11.0%     179405 ~ 4%  kbuildx/micro/dd-write/4HDD-RAID0-cfq-ext4-1dd
    161604 ~ 5%     +11.0%     179405 ~ 4%  TOTAL iostat.md0.wkB/s

Thanks,
Fengguang

