From: Ingo Molnar <mingo@elte.hu>
To: "Andrew Morton" <akpm@linux-foundation.org>,
	"Pekka Enberg" <penberg@cs.helsinki.fi>,
	"Peter Zijlstra" <a.p.zijlstra@chello.nl>,
	"Frédéric Weisbecker" <fweisbec@gmail.com>,
	"Steven Rostedt" <rostedt@goodmis.org>
Cc: Mel Gorman <mel@csn.ul.ie>, Larry Woodman <lwoodman@redhat.com>,
	riel@redhat.com, Peter Zijlstra <peterz@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-mm@kvack.org
Subject: Re: [PATCH 4/4] tracing, page-allocator: Add a postprocessing script for page-allocator-related ftrace events
Date: Tue, 4 Aug 2009 21:57:17 +0200	[thread overview]
Message-ID: <20090804195717.GA5998@elte.hu> (raw)
In-Reply-To: <20090804112246.4e6d0ab1.akpm@linux-foundation.org>

* Andrew Morton <akpm@linux-foundation.org> wrote:

> > This patch adds a simple post-processing script for the 
> > page-allocator-related trace events. It can be used to give an 
> > indication of who the most allocator-intensive processes are and 
> > how often the zone lock was taken during the tracing period. 
> > Example output looks like
> > 
> > find-2840
> >  o pages allocd            = 1877
> >  o pages allocd under lock = 1817
> >  o pages freed directly    = 9
> >  o pcpu refills            = 1078
> >  o migrate fallbacks       = 48
> >    - fragmentation causing = 48
> >      - severe              = 46
> >      - moderate            = 2
> >    - changed migratetype   = 7
> 
> The usual way of accumulating and presenting such measurements is 
> via /proc/vmstat.  How do we justify adding a completely new and 
> different way of doing something which we already do?

/proc/vmstat has several technical and usage disadvantages (the small
sketch after this list illustrates the first three):

 - it is pretty coarse - all-of-system counters, nothing else

 - it is expensive to read (the full file with all of its fields has
   to be read and parsed every time)

 - it has to be polled - it has no notion of events

 - it does not offer sampling of workloads

 - it does not allow the separation of workloads: you cannot measure
   just a single workload, a single process or a single CPU.
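
To illustrate the first three points, here is a minimal illustrative
userspace sketch (not from any of the patches - 'pgalloc_normal' is
just one example field) of what a /proc/vmstat consumer has to do:
read and parse the whole file for every sample and poll for changes
by hand:

 /*
  * Illustrative sketch only: polling a single /proc/vmstat counter.
  * Every sample means reading and parsing the entire file, and
  * deltas have to be computed by polling - there is no event
  * notification and no per-task/per-CPU breakdown.
  */
 #include <stdio.h>
 #include <string.h>
 #include <unistd.h>

 static unsigned long long vmstat_read(const char *field)
 {
 	char name[64];
 	unsigned long long val = 0, found = 0;
 	FILE *f = fopen("/proc/vmstat", "r");

 	if (!f)
 		return 0;
 	/* no way to read a single counter - scan all of them */
 	while (fscanf(f, "%63s %llu", name, &val) == 2) {
 		if (!strcmp(name, field)) {
 			found = val;
 			break;
 		}
 	}
 	fclose(f);
 	return found;
 }

 int main(void)
 {
 	unsigned long long prev = vmstat_read("pgalloc_normal");

 	for (;;) {		/* poll - no poll()/event support */
 		unsigned long long cur;

 		sleep(1);
 		cur = vmstat_read("pgalloc_normal");
 		/* system-wide delta only */
 		printf("pgalloc_normal: +%llu/s\n", cur - prev);
 		prev = cur;
 	}
 	return 0;
 }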

Incidentally, there's an upstream kernel instrumentation and
statistics framework - tracepoints plus perf - that addresses all the
above disadvantages of /proc/vmstat:

 - it is fine-grained: per task, per workload, per CPU or full-system

 - it is cheap to read - the counts can be accessed individually

 - it is event based and can be poll()ed

 - it offers sampling of workloads, of any subset of these values

 - it allows easy separation of workloads

All that is needed are the patches from Mel and Rik and it's 
plug-and-play.

Let me demonstrate these features in action (I've applied the 
patches for testing to -tip):

First, discovery/enumeration of available counters can be done via 
'perf list':

titan:~> perf list
  [...]
  kmem:kmalloc                             [Tracepoint event]
  kmem:kmem_cache_alloc                    [Tracepoint event]
  kmem:kmalloc_node                        [Tracepoint event]
  kmem:kmem_cache_alloc_node               [Tracepoint event]
  kmem:kfree                               [Tracepoint event]
  kmem:kmem_cache_free                     [Tracepoint event]
  kmem:mm_page_free_direct                 [Tracepoint event]
  kmem:mm_pagevec_free                     [Tracepoint event]
  kmem:mm_page_alloc                       [Tracepoint event]
  kmem:mm_page_alloc_zone_locked           [Tracepoint event]
  kmem:mm_page_pcpu_drain                  [Tracepoint event]
  kmem:mm_page_alloc_extfrag               [Tracepoint event]

Then any (or all) of the above event sources can be activated and 
measured. For example, the page alloc/free properties of a 'hackbench 
run' are:

 titan:~> perf stat -e kmem:mm_page_pcpu_drain -e kmem:mm_page_alloc 
 -e kmem:mm_pagevec_free -e kmem:mm_page_free_direct ./hackbench 10
 Time: 0.575

 Performance counter stats for './hackbench 10':

          13857  kmem:mm_page_pcpu_drain 
          27576  kmem:mm_page_alloc      
           6025  kmem:mm_pagevec_free    
          20934  kmem:mm_page_free_direct

    0.613972165  seconds time elapsed

You can observe the statistical properties as well, by using the 
'repeat the workload N times' feature of perf stat:

 titan:~> perf stat --repeat 5 -e kmem:mm_page_pcpu_drain -e 
   kmem:mm_page_alloc -e kmem:mm_pagevec_free -e 
   kmem:mm_page_free_direct ./hackbench 10
 Time: 0.627
 Time: 0.644
 Time: 0.564
 Time: 0.559
 Time: 0.626

 Performance counter stats for './hackbench 10' (5 runs):

          12920  kmem:mm_page_pcpu_drain    ( +-   3.359% )
          25035  kmem:mm_page_alloc         ( +-   3.783% )
           6104  kmem:mm_pagevec_free       ( +-   0.934% )
          18376  kmem:mm_page_free_direct   ( +-   4.941% )

    0.643954516  seconds time elapsed   ( +-   2.363% )

Furthermore, these tracepoints can be used to sample the workload as 
well. For example, the page allocations done by a 'git gc' can be 
captured the following way:

 titan:~/git> perf record -f -e kmem:mm_page_alloc -c 1 ./git gc
 Counting objects: 1148, done.
 Delta compression using up to 2 threads.
 Compressing objects: 100% (450/450), done.
 Writing objects: 100% (1148/1148), done.
 Total 1148 (delta 690), reused 1148 (delta 690)
 [ perf record: Captured and wrote 0.267 MB perf.data (~11679 samples) ]

To check which functions generated page allocations:

 titan:~/git> perf report
 # Samples: 10646
 #
 # Overhead          Command               Shared Object
 # ........  ...............  ..........................
 #
    23.57%       git-repack  /lib64/libc-2.5.so        
    21.81%              git  /lib64/libc-2.5.so        
    14.59%              git  ./git                     
    11.79%       git-repack  ./git                     
     7.12%              git  /lib64/ld-2.5.so          
     3.16%       git-repack  /lib64/libpthread-2.5.so  
     2.09%       git-repack  /bin/bash                 
     1.97%               rm  /lib64/libc-2.5.so        
     1.39%               mv  /lib64/ld-2.5.so          
     1.37%               mv  /lib64/libc-2.5.so        
     1.12%       git-repack  /lib64/ld-2.5.so          
     0.95%               rm  /lib64/ld-2.5.so          
     0.90%  git-update-serv  /lib64/libc-2.5.so        
     0.73%  git-update-serv  /lib64/ld-2.5.so          
     0.68%             perf  /lib64/libpthread-2.5.so  
     0.64%       git-repack  /usr/lib64/libz.so.1.2.3  

Or to see it at a more fine-grained level:

titan:~/git> perf report --sort comm,dso,symbol
# Samples: 10646
#
# Overhead          Command               Shared Object  Symbol
# ........  ...............  ..........................  ......
#
     9.35%       git-repack  ./git                       [.] insert_obj_hash
     9.12%              git  ./git                       [.] insert_obj_hash
     7.31%              git  /lib64/libc-2.5.so          [.] memcpy
     6.34%       git-repack  /lib64/libc-2.5.so          [.] _int_malloc
     6.24%       git-repack  /lib64/libc-2.5.so          [.] memcpy
     5.82%       git-repack  /lib64/libc-2.5.so          [.] __GI___fork
     5.47%              git  /lib64/libc-2.5.so          [.] _int_malloc
     2.99%              git  /lib64/libc-2.5.so          [.] memset

Furthermore, call-graph sampling of the page allocations can be done 
as well - to see precisely which call-chains the page allocations 
come from:

 titan:~/git> perf record -f -g -e kmem:mm_page_alloc -c 1 ./git gc
 Counting objects: 1148, done.
 Delta compression using up to 2 threads.
 Compressing objects: 100% (450/450), done.
 Writing objects: 100% (1148/1148), done.
 Total 1148 (delta 690), reused 1148 (delta 690)
 [ perf record: Captured and wrote 0.963 MB perf.data (~42069 samples) ]

 titan:~/git> perf report -g
 # Samples: 10686
 #
 # Overhead          Command               Shared Object
 # ........  ...............  ..........................
 #
    23.25%       git-repack  /lib64/libc-2.5.so        
                |          
                |--50.00%-- _int_free
                |          
                |--37.50%-- __GI___fork
                |          make_child
                |          
                |--12.50%-- ptmalloc_unlock_all2
                |          make_child
                |          
                 --6.25%-- __GI_strcpy
    21.61%              git  /lib64/libc-2.5.so        
                |          
                |--30.00%-- __GI_read
                |          |          
                |           --83.33%-- git_config_from_file
                |                     git_config
                |                     |          
   [...]

Or you can observe the whole system's page allocations for 10 
seconds:

titan:~/git> perf stat -a -e kmem:mm_page_pcpu_drain -e 
kmem:mm_page_alloc -e kmem:mm_pagevec_free -e 
kmem:mm_page_free_direct sleep 10

 Performance counter stats for 'sleep 10':

         171585  kmem:mm_page_pcpu_drain 
         322114  kmem:mm_page_alloc      
          73623  kmem:mm_pagevec_free    
         254115  kmem:mm_page_free_direct

   10.000591410  seconds time elapsed

Or observe how much the page allocations fluctuate, via statistical 
analysis done over ten 1-second intervals:

 titan:~/git> perf stat --repeat 10 -a -e kmem:mm_page_pcpu_drain -e 
   kmem:mm_page_alloc -e kmem:mm_pagevec_free -e 
   kmem:mm_page_free_direct sleep 1

 Performance counter stats for 'sleep 1' (10 runs):

          17254  kmem:mm_page_pcpu_drain    ( +-   3.709% )
          34394  kmem:mm_page_alloc         ( +-   4.617% )
           7509  kmem:mm_pagevec_free       ( +-   4.820% )
          25653  kmem:mm_page_free_direct   ( +-   3.672% )

    1.058135029  seconds time elapsed   ( +-   3.089% )

Or you can annotate the recorded 'git gc' run on a per-symbol basis 
and check which instructions / source code generated page allocations:

 titan:~/git> perf annotate __GI___fork
 ------------------------------------------------
  Percent |      Source code & Disassembly of libc-2.5.so
 ------------------------------------------------
          :
          :
          :      Disassembly of section .plt:
          :      Disassembly of section .text:
          :
          :      00000031a2e95560 <__fork>:
 [...]
     0.00 :        31a2e95602:   b8 38 00 00 00          mov    $0x38,%eax
     0.00 :        31a2e95607:   0f 05                   syscall 
    83.42 :        31a2e95609:   48 3d 00 f0 ff ff       cmp    $0xfffffffffffff000,%rax
     0.00 :        31a2e9560f:   0f 87 4d 01 00 00       ja     31a2e95762 <__fork+0x202>
     0.00 :        31a2e95615:   85 c0                   test   %eax,%eax

( this shows that 83.42% of __GI___fork's page allocations come from
  the 0x38 system call it performs - syscall 0x38 being clone on
  x86-64, i.e. the fork itself. )

etc. etc. - a lot more is possible. I could list a dozen other 
use-cases straight away - none of which is possible via 
/proc/vmstat.

/proc/vmstat is really not in the same league in terms of the 
expressive power it offers for system analysis and performance 
analysis.

All that the above results needed were those new tracepoints 
in include/trace/events/kmem.h.
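
( For reference, each such tracepoint is declared via the
  TRACE_EVENT() macro. The fragment below is only an abbreviated
  sketch in the style of the mm_page_alloc event - the field list and
  printk format are illustrative, the authoritative definitions are
  the ones in Mel's patches: )

 /*
  * Abbreviated sketch of a kmem tracepoint declaration. The allocator
  * calls trace_mm_page_alloc(page, order, gfp_flags, migratetype) at
  * the appropriate place, and perf/ftrace can then count or sample
  * the event; when disabled it is effectively a no-op.
  */
 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM kmem

 #if !defined(_TRACE_KMEM_H) || defined(TRACE_HEADER_MULTI_READ)
 #define _TRACE_KMEM_H

 #include <linux/types.h>
 #include <linux/tracepoint.h>

 TRACE_EVENT(mm_page_alloc,

 	TP_PROTO(struct page *page, unsigned int order,
 		 gfp_t gfp_flags, int migratetype),

 	TP_ARGS(page, order, gfp_flags, migratetype),

 	TP_STRUCT__entry(
 		__field(struct page *,	page)
 		__field(unsigned int,	order)
 		__field(gfp_t,		gfp_flags)
 		__field(int,		migratetype)
 	),

 	TP_fast_assign(
 		__entry->page		= page;
 		__entry->order		= order;
 		__entry->gfp_flags	= gfp_flags;
 		__entry->migratetype	= migratetype;
 	),

 	TP_printk("page=%p order=%u gfp_flags=0x%x migratetype=%d",
 		__entry->page, __entry->order,
 		(unsigned int)__entry->gfp_flags, __entry->migratetype)
 );

 #endif /* _TRACE_KMEM_H */

 /* This part must be outside protection */
 #include <trace/define_trace.h>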

	Ingo

Thread overview: 45+ messages
2009-08-04 18:12 [PATCH 0/4] Add some trace events for the page allocator v3 Mel Gorman
2009-08-04 18:12 ` [PATCH 1/4] tracing, page-allocator: Add trace events for page allocation and page freeing Mel Gorman
2009-08-05  9:13   ` KOSAKI Motohiro
2009-08-05  9:40     ` Mel Gorman
2009-08-07  1:17       ` KOSAKI Motohiro
2009-08-07 17:31         ` Mel Gorman
2009-08-08  5:44           ` KOSAKI Motohiro
2009-08-04 18:12 ` [PATCH 2/4] tracing, mm: Add trace events for anti-fragmentation falling back to other migratetypes Mel Gorman
2009-08-05  9:26   ` KOSAKI Motohiro
2009-08-04 18:12 ` [PATCH 3/4] tracing, page-allocator: Add trace event for page traffic related to the buddy lists Mel Gorman
2009-08-05  9:24   ` KOSAKI Motohiro
2009-08-05  9:43     ` Mel Gorman
2009-08-07  1:03       ` KOSAKI Motohiro
2009-08-04 18:12 ` [PATCH 4/4] tracing, page-allocator: Add a postprocessing script for page-allocator-related ftrace events Mel Gorman
2009-08-04 18:22   ` Andrew Morton
2009-08-04 18:27     ` Rik van Riel
2009-08-04 19:13       ` Andrew Morton
2009-08-04 20:48         ` Mel Gorman
2009-08-05  7:41           ` Ingo Molnar
2009-08-05  9:07             ` Mel Gorman
2009-08-05  9:16               ` Ingo Molnar
2009-08-05 10:27               ` Johannes Weiner
2009-08-06 15:48                 ` Mel Gorman
2009-08-05 14:53           ` Larry Woodman
2009-08-06 15:54             ` Mel Gorman
2009-08-04 19:57     ` Ingo Molnar [this message]
2009-08-04 20:18       ` Andrew Morton
2009-08-04 20:35         ` Ingo Molnar
2009-08-04 20:53           ` Andrew Morton
2009-08-05  7:53             ` Ingo Molnar
2009-08-05 13:04           ` Peter Zijlstra
2009-08-05 15:07         ` Valdis.Kletnieks
2009-08-05 14:53       ` Valdis.Kletnieks
2009-08-05 18:53         ` perf: "Longum est iter per praecepta, breve et efficax per exempla" Carlos R. Mafra
2009-08-06  7:08           ` Pekka Enberg
2009-08-06  7:35             ` Ingo Molnar
2009-08-06  8:38               ` Carlos R. Mafra
2009-08-06  8:32             ` Carlos R. Mafra
2009-08-06  9:10               ` Ingo Molnar
2009-08-08 12:37           ` [tip:perfcounters/urgent] " tip-bot for Carlos R. Mafra
2009-08-09 11:11           ` [tip:perfcounters/core] " tip-bot for Carlos R. Mafra
2009-08-06 15:50       ` [PATCH 4/4] tracing, page-allocator: Add a postprocessing script for page-allocator-related ftrace events Mel Gorman
2009-08-05  3:07     ` KOSAKI Motohiro
  -- strict thread matches above, loose matches on Subject: below --
2009-07-29 21:05 [RFC PATCH 0/4] Add some trace events for the page allocator v2 Mel Gorman
2009-07-29 21:05 ` [PATCH 4/4] tracing, page-allocator: Add a postprocessing script for page-allocator-related ftrace events Mel Gorman
2009-07-30 13:45   ` Rik van Riel
