* How to sample memory usage cheaply?
@ 2017-03-30 20:04 Benjamin King
  2017-03-31 12:06 ` Milian Wolff
  2017-04-03 19:09 ` Benjamin King
  0 siblings, 2 replies; 6+ messages in thread
From: Benjamin King @ 2017-03-30 20:04 UTC (permalink / raw)
  To: linux-perf-users

Hi,

I'd like to get a big picture of where a memory-hogging process uses physical
memory. I'm interested in call graphs, but in terms of Brendan's treatise
(http://www.brendangregg.com/FlameGraphs/memoryflamegraphs.html), I'd love to
analyze page faults a bit before moving on to the more expensive tracing of
malloc and friends.
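
Roughly, I had something like this in mind for the page fault part (untested
sketch going by Brendan's page; stackcollapse-perf.pl and flamegraph.pl are
from his FlameGraph repository):

  $ perf record -e page-faults -c 1 -g -p <pid> -- sleep 30
  $ perf script | ./stackcollapse-perf.pl | ./flamegraph.pl --color=mem > faults.svg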

My problem is that we mmap some read-only files more than once to save
memory, but each individual mapping creates its own page faults.

This makes sense, but how can I measure physical memory properly, then?
Parsing the Pss rows in /proc/<pid>/smaps does work, but seems a bit clumsy.
Is there a better way (e.g. with call stacks) to measure physical memory
growth for a process?
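
For reference, summing the Pss rows (reported in kB) is roughly what I do
today, along these lines:

  $ awk '/^Pss:/ { total += $2 } END { print total " kB" }' /proc/<pid>/smaps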

cheers,
  Benjamin

PS:
 Here is a measurement with a leaky toy program. It uses a 1 GB zero-filled
 file and drops the file system caches prior to the measurement, to encourage
 major page faults for the first mapping only. It does not work at all the
 way I expected:
-----
$ gcc -O0 mmap_faults.c
$ fallocate -z -l $((1<<30)) 1gb_of_garbage.dat 
$ sudo sysctl -w vm.drop_caches=3
vm.drop_caches = 3
$ perf stat -eminor-faults,major-faults ./a.out

 Performance counter stats for './a.out':

            327,726      minor-faults                                                
                  1      major-faults
$ cat mmap_faults.c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define numMaps 20
#define length (1u<<30)
#define path "1gb_of_garbage.dat"

int main()
{
  int sum = 0;
  /* Map the same 1 GB file numMaps times (read-only, private) and touch
     one byte per 4 KiB page of each mapping. */
  for ( int j = 0; j < numMaps; ++j )
  {
    const char *result =
      (const char*)mmap( NULL, length, PROT_READ, MAP_PRIVATE,
          open( path, O_RDONLY ), 0 );

    for ( int i = 0; i < length; i += 4096 )
      sum += result[ i ];
  }
  /* Mappings and file descriptors are intentionally never released. */
  return sum;
}
-----
Shouldn't I see ~5 million page faults (20 GB / 4 KB)?
Shouldn't I see more major page faults?
The same thing happens when the garbage file is filled from /dev/urandom.
It is even weirder when MAP_POPULATE'ing the file.
