linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems
@ 2019-01-09  9:19 Alexey Budankov
  2019-01-09  9:35 ` [PATCH v3 1/4] perf record: allocate affinity masks Alexey Budankov
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Alexey Budankov @ 2019-01-09  9:19 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra
  Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel


It has been observed that the trace reading thread runs on the same hw
thread most of the time during perf record sampling collection. This
scheduling layout leads to up to 30% profiling overhead when a cpu
intensive workload fully utilizes a large server system with NUMA. The
overhead usually arises from remote (cross node) HW and memory
references, which have much longer latencies than local ones [1].

This patch set implements an --affinity option that eliminates the 30%
overhead for serial trace streaming (--affinity=cpu) and reduces it
from 30% to 10% for AIO1 (--aio=1) trace streaming
(--affinity=node|cpu). See the OVERHEAD section below for more details.

The implemented extension provides users with the capability to
instruct the Perf tool to bounce the trace reading thread's affinity
mask between NUMA nodes (--affinity=node) or to pin the thread to the
exact cpu (--affinity=cpu) that the trace buffer being processed
belongs to.
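
As an illustration, a minimal sketch of what --affinity=cpu amounts to,
built on sched_setaffinity(2) [3]; bounce_to_cpu() is a hypothetical
helper for this example, not the patch code itself:

	#define _GNU_SOURCE
	#include <sched.h>

	/* Pin the calling (trace reading) thread to the cpu that owns
	 * the mmap buffer being processed, as --affinity=cpu does. */
	static int bounce_to_cpu(int cpu)
	{
		cpu_set_t mask;

		CPU_ZERO(&mask);
		CPU_SET(cpu, &mask);
		/* pid 0 means the calling thread */
		return sched_setaffinity(0, sizeof(mask), &mask);
	}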

The extension brings improvement in the case of full system
utilization, when the Perf tool process contends with the workload
process for cpu cores. When a system has free cores to execute the
Perf tool process during profiling, the default system scheduling
layout induces the lowest overhead.

The patch set has been validated on the BT benchmark from NAS Parallel
Benchmarks [2] running on a dual socket, 44 cores, 88 hw threads
Broadwell system with kernels v4.4.0-21-generic (Ubuntu 16.04) and
v4.20.0-rc5 (tip perf/core).

OVERHEAD:
			       BENCH REPORT BASED   ELAPSED TIME BASED
	  v4.20.0-rc5 
          (tip perf/core):
				
(current) SERIAL-SYS  / BASE : 1.27x (14.37/11.31), 1.29x (15.19/11.69)
	  SERIAL-NODE / BASE : 1.15x (13.04/11.31), 1.17x (13.79/11.69)
	  SERIAL-CPU  / BASE : 1.00x (11.32/11.31), 1.01x (11.89/11.69)
	
	  AIO1-SYS    / BASE : 1.29x (14.58/11.31), 1.29x (15.26/11.69)
	  AIO1-NODE   / BASE : 1.08x (12.23/11.31), 1.11x (13.01/11.69)
	  AIO1-CPU    / BASE : 1.07x (12.14/11.31), 1.08x (12.83/11.69)

	  v4.4.0-21-generic
          (Ubuntu 16.04 LTS):

(current) SERIAL-SYS  / BASE : 1.26x (13.73/10.87), 1.29x (14.69/11.32)
	  SERIAL-NODE / BASE : 1.19x (13.02/10.87), 1.23x (14.03/11.32)
	  SERIAL-CPU  / BASE : 1.03x (11.21/10.87), 1.07x (12.18/11.32)
	
	  AIO1-SYS    / BASE : 1.26x (13.73/10.87), 1.29x (14.69/11.32)
	  AIO1-NODE   / BASE : 1.10x (12.04/10.87), 1.15x (13.03/11.32)
	  AIO1-CPU    / BASE : 1.12x (12.20/10.87), 1.15x (13.09/11.32)

The patch set is generated against the acme perf/core repository.

---
Alexey Budankov (4):
  perf record: allocate affinity masks
  perf record: bind the AIO user space buffers to nodes
  perf record: apply affinity masks when reading mmap buffers
  perf record: implement --affinity=node|cpu option

 tools/perf/Documentation/perf-record.txt |  5 ++
 tools/perf/builtin-record.c              | 47 +++++++++++-
 tools/perf/perf.h                        |  8 +++
 tools/perf/util/evlist.c                 | 10 ++-
 tools/perf/util/evlist.h                 |  2 +-
 tools/perf/util/mmap.c                   | 91 ++++++++++++++++++++++--
 tools/perf/util/mmap.h                   |  4 +-
 7 files changed, 157 insertions(+), 10 deletions(-)

---
Changes in v3:
- converted PERF_AFFINITY_EOF to PERF_AFFINITY_MAX
- corrected code style issues
- adjusted __aio_alloc,__aio_bind,__aio_free() implementation
- separated mask manipulations into __adjust_affinity() and __setup_affinity_mask()
- implemented mapping of c index into online cpu index
- adjusted indentation at record__parse_affinity()

Changes in v2:
- made debug affinity mode message user friendly
- converted affinity mode defines to enum values
- implemented perf_mmap__aio_alloc, perf_mmap__aio_free, perf_mmap__aio_bind 
  and put HAVE_LIBNUMA_SUPPORT #ifdefs in there
- separated AIO buffers binding to patch 2/4

---
[1] https://en.wikipedia.org/wiki/Non-uniform_memory_access
[2] https://www.nas.nasa.gov/publications/npb.html
[3] http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
[4] http://man7.org/linux/man-pages/man2/mbind.2.html

---
ENVIRONMENT AND MEASUREMENTS:

  MACHINE:

	Broadwell, dual socket, 44 cores, 88 hw threads

	/proc/cpuinfo

	processor	: 87
	vendor_id	: GenuineIntel
	cpu family	: 6
	model		: 79
	model name	: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
	stepping	: 1
	microcode	: 0xb000019
	cpu MHz		: 1200.117
	cache size	: 56320 KB
	physical id	: 1
	siblings	: 44
	core id		: 28
	cpu cores	: 22
	apicid		: 121
	initial apicid	: 121
	fpu		: yes
	fpu_exception	: yes
	cpuid level	: 20
	wp		: yes
	flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
	bugs		:
	bogomips	: 4391.42
	clflush size	: 64
	cache_alignment	: 64
	address sizes	: 46 bits physical, 48 bits virtual
	power management:
  		
  BASE:

	/usr/bin/time ./bt.B.x 

	NAS Parallel Benchmarks (NPB3.3-OMP) - BT Benchmark
	
	No input file inputbt.data. Using compiled defaults
	Size:  102x 102x 102
	Iterations:  200       dt:   0.0003000
	Number of available threads:    88
	
	BT Benchmark Completed.
	Class           =                        B
	Size            =            102x 102x 102
	Iterations      =                      200
	Time in seconds =                    10.87
	Total threads   =                       88
	Avail threads   =                       88
	Mop/s total     =                 64608.74
	Mop/s/thread    =                   734.19
	Operation type  =           floating point
	Verification    =               SUCCESSFUL
	Version         =                    3.3.1
	Compile date    =              20 Sep 2018
	
	956.25user 19.14system 0:11.32elapsed 8616%CPU (0avgtext+0avgdata 210496maxresident)k
	0inputs+0outputs (0major+57939minor)pagefaults 0swaps

  SERIAL-SYS:

	/usr/bin/time ./tip/tools/perf/perf record -v -N -B -T -R -F 25000 -a -e cycles -- ./bt.B.x 
	Using CPUID GenuineIntel-6-4F-1
	nr_cblocks: 0
	affinity (UNSET:0, NODE:1, CPU:2) = 0
	mmap size 528384B

	NAS Parallel Benchmarks (NPB3.3-OMP) - BT Benchmark

	No input file inputbt.data. Using compiled defaults
	Size:  102x 102x 102
	Iterations:  200       dt:   0.0003000
	Number of available threads:    88

	BT Benchmark Completed.
	Class           =                        B
	Size            =            102x 102x 102
	Iterations      =                      200
	Time in seconds =                    13.73
	Total threads   =                       88
	Avail threads   =                       88
	Mop/s total     =                 51136.52
	Mop/s/thread    =                   581.10
	Operation type  =           floating point
	Verification    =               SUCCESSFUL
	Version         =                    3.3.1
	Compile date    =              20 Sep 2018

	[ perf record: Captured and wrote 1661,120 MB perf.data ]

	1184.84user 40.70system 0:14.69elapsed 8341%CPU (0avgtext+0avgdata 208612maxresident)k
	0inputs+3402072outputs (0major+137077minor)pagefaults 0swaps

  SERIAL-NODE:

	/usr/bin/time ./tip/tools/perf/perf record -v -N -B -T -R -F 25000 --affinity=node -a -e cycles -- ./bt.B.x 
	Using CPUID GenuineIntel-6-4F-1
	nr_cblocks: 0
	affinity (UNSET:0, NODE:1, CPU:2) = 1
	mmap size 528384B

	NAS Parallel Benchmarks (NPB3.3-OMP) - BT Benchmark

	No input file inputbt.data. Using compiled defaults
	Size:  102x 102x 102
	Iterations:  200       dt:   0.0003000
	Number of available threads:    88

	BT Benchmark Completed.
	Class           =                        B
	Size            =            102x 102x 102
	Iterations      =                      200
	Time in seconds =                    13.02
	Total threads   =                       88
	Avail threads   =                       88
	Mop/s total     =                 53924.69
	Mop/s/thread    =                   612.78
	Operation type  =           floating point
	Verification    =               SUCCESSFUL
	Version         =                    3.3.1
	Compile date    =              20 Sep 2018

	[ perf record: Captured and wrote 1557,152 MB perf.data ]

	1120.42user 29.92system 0:14.03elapsed 8198%CPU (0avgtext+0avgdata 206388maxresident)k
	0inputs+3189128outputs (0major+149207minor)pagefaults 0swaps

  SERIAL-CPU:

	/usr/bin/time ./tip/tools/perf/perf record -v -N -B -T -R -F 25000 --affinity=cpu -a -e cycles -- ./bt.B.x 
	Using CPUID GenuineIntel-6-4F-1
	nr_cblocks: 0
	affinity (UNSET:0, NODE:1, CPU:2) = 2
	mmap size 528384B

	NAS Parallel Benchmarks (NPB3.3-OMP) - BT Benchmark

	No input file inputbt.data. Using compiled defaults
	Size:  102x 102x 102
	Iterations:  200       dt:   0.0003000
	Number of available threads:    88

	BT Benchmark Completed.
	Class           =                        B
	Size            =            102x 102x 102
	Iterations      =                      200
	Time in seconds =                    11.21
	Total threads   =                       88
	Avail threads   =                       88
	Mop/s total     =                 62642.24
	Mop/s/thread    =                   711.84
	Operation type  =           floating point
	Verification    =               SUCCESSFUL
	Version         =                    3.3.1
	Compile date    =              20 Sep 2018

	[ perf record: Captured and wrote 1365,043 MB perf.data ]

	976.06user 31.35system 0:12.18elapsed 8264%CPU (0avgtext+0avgdata 208488maxresident)k
	0inputs+2795704outputs (0major+126032minor)pagefaults 0swaps

  AIO1-SYS:

	/usr/bin/time ./tip/tools/perf/perf record -v -N -B -T -R -F 25000 --aio=1 -a -e cycles -- ./bt.B.x 
	Using CPUID GenuineIntel-6-4F-1
	nr_cblocks: 1
	affinity (UNSET:0, NODE:1, CPU:2) = 0
	mmap size 528384B

	NAS Parallel Benchmarks (NPB3.3-OMP) - BT Benchmark

	No input file inputbt.data. Using compiled defaults
	Size:  102x 102x 102
	Iterations:  200       dt:   0.0003000
	Number of available threads:    88

	BT Benchmark Completed.
	Class           =                        B
	Size            =            102x 102x 102
	Iterations      =                      200
	Time in seconds =                    14.23
	Total threads   =                       88
	Avail threads   =                       88
	Mop/s total     =                 49338.27
	Mop/s/thread    =                   560.66
	Operation type  =           floating point
	Verification    =               SUCCESSFUL
	Version         =                    3.3.1
	Compile date    =              20 Sep 2018

	[ perf record: Captured and wrote 1720,590 MB perf.data ]

	1229.19user 41.99system 0:15.22elapsed 8350%CPU (0avgtext+0avgdata 208604maxresident)k
	0inputs+3523880outputs (0major+124670minor)pagefaults 0swaps

  AIO1-NODE:

	/usr/bin/time ./tip/tools/perf/perf record -v -N -B -T -R -F 25000 --aio=1 --affinity=node -a -e cycles -- ./bt.B.x 
	Using CPUID GenuineIntel-6-4F-1
	nr_cblocks: 1
	affinity (UNSET:0, NODE:1, CPU:2) = 1
	mmap size 528384B

	NAS Parallel Benchmarks (NPB3.3-OMP) - BT Benchmark

	No input file inputbt.data. Using compiled defaults
	Size:  102x 102x 102
	Iterations:  200       dt:   0.0003000
	Number of available threads:    88

	BT Benchmark Completed.
	Class           =                        B
	Size            =            102x 102x 102
	Iterations      =                      200
	Time in seconds =                    12.04
	Total threads   =                       88
	Avail threads   =                       88
	Mop/s total     =                 58313.58
	Mop/s/thread    =                   662.65
	Operation type  =           floating point
	Verification    =               SUCCESSFUL
	Version         =                    3.3.1
	Compile date    =              20 Sep 2018

	[ perf record: Captured and wrote 1471,279 MB perf.data ]

	1055.62user 30.43system 0:13.03elapsed 8333%CPU (0avgtext+0avgdata 208424maxresident)k
	0inputs+3013288outputs (0major+79088minor)pagefaults 0swaps

  AIO1-CPU:

	/usr/bin/time ./tip/tools/perf/perf record -v -N -B -T -R -F 25000 --aio=1 --affinity=cpu -a -e cycles -- ./bt.B.x 
	Using CPUID GenuineIntel-6-4F-1
	nr_cblocks: 1
	affinity (UNSET:0, NODE:1, CPU:2) = 2
	mmap size 528384B

	NAS Parallel Benchmarks (NPB3.3-OMP) - BT Benchmark

	No input file inputbt.data. Using compiled defaults
	Size:  102x 102x 102
	Iterations:  200       dt:   0.0003000
	Number of available threads:    88

	BT Benchmark Completed.
	Class           =                        B
	Size            =            102x 102x 102
	Iterations      =                      200
	Time in seconds =                    12.20
	Total threads   =                       88
	Avail threads   =                       88
	Mop/s total     =                 57538.84
	Mop/s/thread    =                   653.85
	Operation type  =           floating point
	Verification    =               SUCCESSFUL
	Version         =                    3.3.1
	Compile date    =              20 Sep 2018

	[ perf record: Captured and wrote 1486,859 MB perf.data ]

	1051.97user 42.07system 0:13.09elapsed 8352%CPU (0avgtext+0avgdata 206388maxresident)k
	0inputs+3045168outputs (0major+174612minor)pagefaults 0swaps


* [PATCH v3 1/4] perf record: allocate affinity masks
  2019-01-09  9:19 [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Alexey Budankov
@ 2019-01-09  9:35 ` Alexey Budankov
  2019-01-09  9:37 ` [PATCH v3 2/4] perf record: bind the AIO user space buffers to nodes Alexey Budankov
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Alexey Budankov @ 2019-01-09  9:35 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra
  Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel


Allocate the affinity option and masks for the mmap data buffers and
the record thread, and initialize the allocated objects.

Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
---
Changes in v3:
- converted PERF_AFFINITY_EOF to PERF_AFFINITY_MAX
Changes in v2:
- made debug affinity mode message user friendly
- converted affinity mode defines to enum values
---
 tools/perf/builtin-record.c | 13 ++++++++++++-
 tools/perf/perf.h           |  8 ++++++++
 tools/perf/util/evlist.c    |  6 +++---
 tools/perf/util/evlist.h    |  2 +-
 tools/perf/util/mmap.c      |  2 ++
 tools/perf/util/mmap.h      |  3 ++-
 6 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 882285fb9f64..e5a108b11d46 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -81,12 +81,17 @@ struct record {
 	bool			timestamp_boundary;
 	struct switch_output	switch_output;
 	unsigned long long	samples;
+	cpu_set_t		affinity_mask;
 };
 
 static volatile int auxtrace_record__snapshot_started;
 static DEFINE_TRIGGER(auxtrace_snapshot_trigger);
 static DEFINE_TRIGGER(switch_output_trigger);
 
+static const char *affinity_tags[PERF_AFFINITY_MAX] = {
+	"SYS", "NODE", "CPU"
+};
+
 static bool switch_output_signal(struct record *rec)
 {
 	return rec->switch_output.signal &&
@@ -533,7 +538,8 @@ static int record__mmap_evlist(struct record *rec,
 
 	if (perf_evlist__mmap_ex(evlist, opts->mmap_pages,
 				 opts->auxtrace_mmap_pages,
-				 opts->auxtrace_snapshot_mode, opts->nr_cblocks) < 0) {
+				 opts->auxtrace_snapshot_mode,
+				 opts->nr_cblocks, opts->affinity) < 0) {
 		if (errno == EPERM) {
 			pr_err("Permission error mapping pages.\n"
 			       "Consider increasing "
@@ -1980,6 +1986,9 @@ int cmd_record(int argc, const char **argv)
 # undef REASON
 #endif
 
+	CPU_ZERO(&rec->affinity_mask);
+	rec->opts.affinity = PERF_AFFINITY_SYS;
+
 	rec->evlist = perf_evlist__new();
 	if (rec->evlist == NULL)
 		return -ENOMEM;
@@ -2143,6 +2152,8 @@ int cmd_record(int argc, const char **argv)
 	if (verbose > 0)
 		pr_info("nr_cblocks: %d\n", rec->opts.nr_cblocks);
 
+	pr_debug("affinity: %s\n", affinity_tags[rec->opts.affinity]);
+
 	err = __cmd_record(&record, argc, argv);
 out:
 	perf_evlist__delete(rec->evlist);
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index 388c6dd128b8..36d5cfe6362f 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -83,6 +83,14 @@ struct record_opts {
 	clockid_t    clockid;
 	u64          clockid_res_ns;
 	int	     nr_cblocks;
+	int	     affinity;
+};
+
+enum perf_affinity {
+	PERF_AFFINITY_SYS = 0,
+	PERF_AFFINITY_NODE,
+	PERF_AFFINITY_CPU,
+	PERF_AFFINITY_MAX
 };
 
 struct option;
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 8c902276d4b4..08cedb643ea6 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1022,7 +1022,7 @@ int perf_evlist__parse_mmap_pages(const struct option *opt, const char *str,
  */
 int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 			 unsigned int auxtrace_pages,
-			 bool auxtrace_overwrite, int nr_cblocks)
+			 bool auxtrace_overwrite, int nr_cblocks, int affinity)
 {
 	struct perf_evsel *evsel;
 	const struct cpu_map *cpus = evlist->cpus;
@@ -1032,7 +1032,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 	 * Its value is decided by evsel's write_backward.
 	 * So &mp should not be passed through const pointer.
 	 */
-	struct mmap_params mp = { .nr_cblocks = nr_cblocks };
+	struct mmap_params mp = { .nr_cblocks = nr_cblocks, .affinity = affinity };
 
 	if (!evlist->mmap)
 		evlist->mmap = perf_evlist__alloc_mmap(evlist, false);
@@ -1064,7 +1064,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 
 int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages)
 {
-	return perf_evlist__mmap_ex(evlist, pages, 0, false, 0);
+	return perf_evlist__mmap_ex(evlist, pages, 0, false, 0, PERF_AFFINITY_SYS);
 }
 
 int perf_evlist__create_maps(struct perf_evlist *evlist, struct target *target)
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 868294491194..72728d7f4432 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -162,7 +162,7 @@ unsigned long perf_event_mlock_kb_in_pages(void);
 
 int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 			 unsigned int auxtrace_pages,
-			 bool auxtrace_overwrite, int nr_cblocks);
+			 bool auxtrace_overwrite, int nr_cblocks, int affinity);
 int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages);
 void perf_evlist__munmap(struct perf_evlist *evlist);
 
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 8fc39311a30d..e68ba754a8e2 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -343,6 +343,8 @@ int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int c
 	map->fd = fd;
 	map->cpu = cpu;
 
+	CPU_ZERO(&map->affinity_mask);
+
 	if (auxtrace_mmap__mmap(&map->auxtrace_mmap,
 				&mp->auxtrace_mp, map->base, fd))
 		return -1;
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index aeb6942fdb00..e566c19b242b 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -38,6 +38,7 @@ struct perf_mmap {
 		int		 nr_cblocks;
 	} aio;
 #endif
+	cpu_set_t	affinity_mask;
 };
 
 /*
@@ -69,7 +70,7 @@ enum bkw_mmap_state {
 };
 
 struct mmap_params {
-	int			    prot, mask, nr_cblocks;
+	int			    prot, mask, nr_cblocks, affinity;
 	struct auxtrace_mmap_params auxtrace_mp;
 };
 


* [PATCH v3 2/4] perf record: bind the AIO user space buffers to nodes
  2019-01-09  9:19 [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Alexey Budankov
  2019-01-09  9:35 ` [PATCH v3 1/4] perf record: allocate affinity masks Alexey Budankov
@ 2019-01-09  9:37 ` Alexey Budankov
  2019-01-09 15:58   ` Jiri Olsa
  2019-01-09  9:38 ` [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers Alexey Budankov
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Alexey Budankov @ 2019-01-09  9:37 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra
  Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel


Allocate and bind AIO user space buffers to the memory nodes that
the mmap kernel buffers are bound to.
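
For reference, a minimal standalone sketch of the allocate-and-bind
pattern per mbind(2); alloc_on_node() is a hypothetical helper for
illustration, assuming libnuma headers are installed (link with -lnuma):

	#include <sys/mman.h>
	#include <numaif.h>

	/* Allocate an anonymous buffer and bind its pages to one node,
	 * the way the AIO user space buffers are bound in this patch. */
	static void *alloc_on_node(size_t len, int node)
	{
		unsigned long node_mask = 1UL << node;
		void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return NULL;
		/* maxnode is the number of bits in the node mask */
		if (mbind(buf, len, MPOL_BIND, &node_mask,
			  sizeof(node_mask) * 8, 0)) {
			munmap(buf, len);
			return NULL;
		}
		return buf;
	}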

Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
---
Changes in v3:
- corrected code style issues
- adjusted __aio_alloc,__aio_bind,__aio_free() implementation
Changes in v2:
- implemented perf_mmap__aio_alloc, perf_mmap__aio_free, perf_mmap__aio_bind 
  and put HAVE_LIBNUMA_SUPPORT #ifdefs in there
---
 tools/perf/util/mmap.c | 71 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 67 insertions(+), 4 deletions(-)

diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index e68ba754a8e2..e5220790f1fb 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -10,6 +10,9 @@
 #include <sys/mman.h>
 #include <inttypes.h>
 #include <asm/bug.h>
+#ifdef HAVE_LIBNUMA_SUPPORT
+#include <numaif.h>
+#endif
 #include "debug.h"
 #include "event.h"
 #include "mmap.h"
@@ -154,9 +157,68 @@ void __weak auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp __mayb
 }
 
 #ifdef HAVE_AIO_SUPPORT
+
+#ifdef HAVE_LIBNUMA_SUPPORT
+static int perf_mmap__aio_alloc(struct perf_mmap *map, int index)
+{
+	map->aio.data[index] = mmap(NULL, perf_mmap__mmap_len(map), PROT_READ|PROT_WRITE,
+				    MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
+	if (map->aio.data[index] == MAP_FAILED) {
+		map->aio.data[index] = NULL;
+		return -1;
+	}
+
+	return 0;
+}
+
+static void perf_mmap__aio_free(struct perf_mmap *map, int index)
+{
+	if (map->aio.data[index]) {
+		munmap(map->aio.data[index], perf_mmap__mmap_len(map));
+		map->aio.data[index] = NULL;
+	}
+}
+
+static void perf_mmap__aio_bind(struct perf_mmap *map, int index, int cpu, int affinity)
+{
+	void *data;
+	size_t mmap_len;
+	unsigned long node_mask;
+
+	if (affinity != PERF_AFFINITY_SYS && cpu__max_node() > 1) {
+		data = map->aio.data[index];
+		mmap_len = perf_mmap__mmap_len(map);
+		node_mask = 1UL << cpu__get_node(cpu);
+		if (mbind(data, mmap_len, MPOL_BIND, &node_mask, 1, 0)) {
+			pr_warn("failed to bind [%p-%p] to node %d\n",
+				data, data + mmap_len, cpu__get_node(cpu));
+		}
+	}
+}
+#else
+static int perf_mmap__aio_alloc(struct perf_mmap *map, int index)
+{
+	map->aio.data[index] = malloc(perf_mmap__mmap_len(map));
+	if (map->aio.data[index] == NULL)
+		return -1;
+
+	return 0;
+}
+
+static void perf_mmap__aio_free(struct perf_mmap *map, int index)
+{
+	zfree(&(map->aio.data[index]));
+}
+
+static void perf_mmap__aio_bind(struct perf_mmap *map __maybe_unused, int index __maybe_unused,
+		int cpu __maybe_unused, int affinity __maybe_unused)
+{
+}
+#endif
+
 static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
 {
-	int delta_max, i, prio;
+	int delta_max, i, prio, ret;
 
 	map->aio.nr_cblocks = mp->nr_cblocks;
 	if (map->aio.nr_cblocks) {
@@ -177,11 +239,12 @@ static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
 		}
 		delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
 		for (i = 0; i < map->aio.nr_cblocks; ++i) {
-			map->aio.data[i] = malloc(perf_mmap__mmap_len(map));
-			if (!map->aio.data[i]) {
+			ret = perf_mmap__aio_alloc(map, i);
+			if (ret == -1) {
 				pr_debug2("failed to allocate data buffer area, error %m");
 				return -1;
 			}
+			perf_mmap__aio_bind(map, i, map->cpu, mp->affinity);
 			/*
 			 * Use cblock.aio_fildes value different from -1
 			 * to denote started aio write operation on the
@@ -210,7 +273,7 @@ static void perf_mmap__aio_munmap(struct perf_mmap *map)
 	int i;
 
 	for (i = 0; i < map->aio.nr_cblocks; ++i)
-		zfree(&map->aio.data[i]);
+		perf_mmap__aio_free(map, i);
 	if (map->aio.data)
 		zfree(&map->aio.data);
 	zfree(&map->aio.cblocks);


* [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers
  2019-01-09  9:19 [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Alexey Budankov
  2019-01-09  9:35 ` [PATCH v3 1/4] perf record: allocate affinity masks Alexey Budankov
  2019-01-09  9:37 ` [PATCH v3 2/4] perf record: bind the AIO user space buffers to nodes Alexey Budankov
@ 2019-01-09  9:38 ` Alexey Budankov
  2019-01-09 16:53   ` Jiri Olsa
  2019-01-09  9:40 ` [PATCH v3 4/4] perf record: implement --affinity=node|cpu option Alexey Budankov
  2019-01-09 14:41 ` [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Jiri Olsa
  4 siblings, 1 reply; 14+ messages in thread
From: Alexey Budankov @ 2019-01-09  9:38 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra
  Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel


Build node cpu masks for mmap data buffers. Apply the node cpu masks
to the tool thread every time it references data buffers across nodes
or cpus.
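
For illustration, a sketch of building such a node-wide cpu mask; the
patch derives it from perf's internal cpu map, whereas this standalone
version assumes libnuma's numa_node_to_cpus() (link with -lnuma):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <numa.h>

	/* Collect all cpus of a NUMA node into a cpu_set_t, so the
	 * reading thread may run anywhere on that node, as with
	 * --affinity=node. */
	static int node_cpu_mask(int node, cpu_set_t *mask)
	{
		struct bitmask *cpus = numa_allocate_cpumask();
		unsigned int cpu;

		if (!cpus)
			return -1;
		if (numa_node_to_cpus(node, cpus)) {
			numa_bitmask_free(cpus);
			return -1;
		}
		CPU_ZERO(mask);
		for (cpu = 0; cpu < cpus->size; cpu++)
			if (numa_bitmask_isbitset(cpus, cpu))
				CPU_SET(cpu, mask);
		numa_bitmask_free(cpus);
		return 0;
	}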

Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
---
Changes in v3:
- separated mask manipulations into __adjust_affinity() and __setup_affinity_mask()
- implemented mapping of c index into online cpu index
Changes in v2:
- separated AIO buffers binding to patch 2/4
---
 tools/perf/builtin-record.c | 14 ++++++++++++++
 tools/perf/util/evlist.c    |  6 +++++-
 tools/perf/util/mmap.c      | 20 +++++++++++++++++++-
 tools/perf/util/mmap.h      |  1 +
 4 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index e5a108b11d46..553c2fabf3c1 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -536,6 +536,9 @@ static int record__mmap_evlist(struct record *rec,
 	struct record_opts *opts = &rec->opts;
 	char msg[512];
 
+	if (opts->affinity != PERF_AFFINITY_SYS)
+		cpu__setup_cpunode_map();
+
 	if (perf_evlist__mmap_ex(evlist, opts->mmap_pages,
 				 opts->auxtrace_mmap_pages,
 				 opts->auxtrace_snapshot_mode,
@@ -728,6 +731,16 @@ static struct perf_event_header finished_round_event = {
 	.type = PERF_RECORD_FINISHED_ROUND,
 };
 
+static void record__adjust_affinity(struct record *rec, struct perf_mmap *map)
+{
+	if (rec->opts.affinity != PERF_AFFINITY_SYS &&
+	    !CPU_EQUAL(&rec->affinity_mask, &map->affinity_mask)) {
+		CPU_ZERO(&rec->affinity_mask);
+		CPU_OR(&rec->affinity_mask, &rec->affinity_mask, &map->affinity_mask);
+		sched_setaffinity(0, sizeof(rec->affinity_mask), &rec->affinity_mask);
+	}
+}
+
 static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evlist,
 				    bool overwrite)
 {
@@ -755,6 +768,7 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
 		struct perf_mmap *map = &maps[i];
 
 		if (map->base) {
+			record__adjust_affinity(rec, map);
 			if (!record__aio_enabled(rec)) {
 				if (perf_mmap__push(map, rec, record__pushfn) != 0) {
 					rc = -1;
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 08cedb643ea6..b6680f65ccc4 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1032,7 +1032,11 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 	 * Its value is decided by evsel's write_backward.
 	 * So &mp should not be passed through const pointer.
 	 */
-	struct mmap_params mp = { .nr_cblocks = nr_cblocks, .affinity = affinity };
+	struct mmap_params mp = {
+		.nr_cblocks	= nr_cblocks,
+		.affinity	= affinity,
+		.cpu_map	= cpus
+	};
 
 	if (!evlist->mmap)
 		evlist->mmap = perf_evlist__alloc_mmap(evlist, false);
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index e5220790f1fb..ee0230eed635 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -377,6 +377,24 @@ void perf_mmap__munmap(struct perf_mmap *map)
 	auxtrace_mmap__munmap(&map->auxtrace_mmap);
 }
 
+static void perf_mmap__setup_affinity_mask(struct perf_mmap *map, struct mmap_params *mp)
+{
+	int c, cpu, nr_cpus, node;
+
+	CPU_ZERO(&map->affinity_mask);
+	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1) {
+		nr_cpus = cpu_map__nr(mp->cpu_map);
+		node = cpu__get_node(map->cpu);
+		for (c = 0; c < nr_cpus; c++) {
+			cpu = mp->cpu_map->map[c]; /* map c index to online cpu index */
+			if (cpu__get_node(cpu) == node)
+				CPU_SET(cpu, &map->affinity_mask);
+		}
+	} else if (mp->affinity == PERF_AFFINITY_CPU) {
+		CPU_SET(map->cpu, &map->affinity_mask);
+	}
+}
+
 int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int cpu)
 {
 	/*
@@ -406,7 +424,7 @@ int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int c
 	map->fd = fd;
 	map->cpu = cpu;
 
-	CPU_ZERO(&map->affinity_mask);
+	perf_mmap__setup_affinity_mask(map, mp);
 
 	if (auxtrace_mmap__mmap(&map->auxtrace_mmap,
 				&mp->auxtrace_mp, map->base, fd))
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index e566c19b242b..b3f724fad22e 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -72,6 +72,7 @@ enum bkw_mmap_state {
 struct mmap_params {
 	int			    prot, mask, nr_cblocks, affinity;
 	struct auxtrace_mmap_params auxtrace_mp;
+	const struct cpu_map	    *cpu_map;
 };
 
 int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int cpu);


* [PATCH v3 4/4] perf record: implement --affinity=node|cpu option
  2019-01-09  9:19 [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Alexey Budankov
                   ` (2 preceding siblings ...)
  2019-01-09  9:38 ` [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers Alexey Budankov
@ 2019-01-09  9:40 ` Alexey Budankov
  2019-01-09 14:41 ` [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Jiri Olsa
  4 siblings, 0 replies; 14+ messages in thread
From: Alexey Budankov @ 2019-01-09  9:40 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra
  Cc: Jiri Olsa, Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel


Implement the --affinity=node|cpu option for record mode, defaulting
to the system affinity mask (PERF_AFFINITY_SYS).
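
Example usage, in the same form as the measurements in the cover
letter (./workload stands in for the profiled command):

	# bounce the reading thread between NUMA node cpu masks
	perf record --affinity=node -a -e cycles -- ./workload

	# pin the reading thread to the cpu of the mmap buffer being read
	perf record --affinity=cpu -a -e cycles -- ./workload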

Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
---
changes in v3:
- adjusted indentation at record__parse_affinity()
---
 tools/perf/Documentation/perf-record.txt |  5 +++++
 tools/perf/builtin-record.c              | 20 ++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index d232b13ea713..efb839784f32 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -440,6 +440,11 @@ Use <n> control blocks in asynchronous (Posix AIO) trace writing mode (default:
 Asynchronous mode is supported only when linking Perf tool with libc library
 providing implementation for Posix AIO API.
 
+--affinity=mode::
+Set affinity mask of trace reading thread according to the policy defined by 'mode' value:
+  node - thread affinity mask is set to NUMA node cpu mask of the processed mmap buffer
+  cpu  - thread affinity mask is set to cpu of the processed mmap buffer
+
 --all-kernel::
 Configure all used events to run in kernel space.
 
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 553c2fabf3c1..94a966ba9a6f 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1659,6 +1659,23 @@ static int parse_clockid(const struct option *opt, const char *str, int unset)
 	return -1;
 }
 
+static int record__parse_affinity(const struct option *opt, const char *str, int unset)
+{
+	struct record_opts *opts = (struct record_opts *)opt->value;
+
+	if (unset)
+		return 0;
+
+	if (str) {
+		if (!strcasecmp(str, "node"))
+			opts->affinity = PERF_AFFINITY_NODE;
+		else if (!strcasecmp(str, "cpu"))
+			opts->affinity = PERF_AFFINITY_CPU;
+	}
+
+	return 0;
+}
+
 static int record__parse_mmap_pages(const struct option *opt,
 				    const char *str,
 				    int unset __maybe_unused)
@@ -1966,6 +1983,9 @@ static struct option __record_options[] = {
 		     &nr_cblocks_default, "n", "Use <n> control blocks in asynchronous trace writing mode (default: 1, max: 4)",
 		     record__aio_parse),
 #endif
+	OPT_CALLBACK(0, "affinity", &record.opts, "node|cpu",
+		     "Set affinity mask of trace reading thread to NUMA node cpu mask or cpu of processed mmap buffer",
+		     record__parse_affinity),
 	OPT_END()
 };


* Re: [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems
  2019-01-09  9:19 [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Alexey Budankov
                   ` (3 preceding siblings ...)
  2019-01-09  9:40 ` [PATCH v3 4/4] perf record: implement --affinity=node|cpu option Alexey Budankov
@ 2019-01-09 14:41 ` Jiri Olsa
  2019-01-09 15:51   ` Jiri Olsa
  2019-01-09 16:11   ` Alexey Budankov
  4 siblings, 2 replies; 14+ messages in thread
From: Jiri Olsa @ 2019-01-09 14:41 UTC (permalink / raw)
  To: Alexey Budankov
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

On Wed, Jan 09, 2019 at 12:19:20PM +0300, Alexey Budankov wrote:
> 
> <SNIP>
> 
> ---
> Alexey Budankov (4):
>   perf record: allocate affinity masks
>   perf record: bind the AIO user space buffers to nodes
>   perf record: apply affinity masks when reading mmap buffers
>   perf record: implement --affinity=node|cpu option


hi,
can't apply your code on latest Arnaldo's perf/core:

Applying: perf record: allocate affinity masks
Applying: perf record: bind the AIO user space buffers to nodes
Applying: perf record: apply affinity masks when reading mmap buffers
Applying: perf record: implement --affinity=node|cpu option
error: corrupt patch at line 62
Patch failed at 0004 perf record: implement --affinity=node|cpu option
Use 'git am --show-current-patch' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

jirka


* Re: [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems
  2019-01-09 14:41 ` [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Jiri Olsa
@ 2019-01-09 15:51   ` Jiri Olsa
  2019-01-09 16:11   ` Alexey Budankov
  1 sibling, 0 replies; 14+ messages in thread
From: Jiri Olsa @ 2019-01-09 15:51 UTC (permalink / raw)
  To: Alexey Budankov
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

On Wed, Jan 09, 2019 at 03:41:25PM +0100, Jiri Olsa wrote:
> On Wed, Jan 09, 2019 at 12:19:20PM +0300, Alexey Budankov wrote:
> > 
> > <SNIP>
> 
> 
> hi,
> can't apply your code on latest Arnaldo's perf/core:
> 
> Applying: perf record: allocate affinity masks
> Applying: perf record: bind the AIO user space buffers to nodes
> Applying: perf record: apply affinity masks when reading mmap buffers
> Applying: perf record: implement --affinity=node|cpu option
> error: corrupt patch at line 62
> Patch failed at 0004 perf record: implement --affinity=node|cpu option
> Use 'git am --show-current-patch' to see the failed patch
> When you have resolved this problem, run "git am --continue".
> If you prefer to skip this patch, run "git am --skip" instead.
> To restore the original branch and stop patching, run "git am --abort".

hum, when I separate the raw patch and apply it works with no fuzz:

[jolsa@krava perf]$ patch -p3 < /tmp/krava
patching file Documentation/perf-record.txt
patching file builtin-record.c

this email header caught my eye:

  User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0
 
but no idea what's the issue in here ;-)

jirka


* Re: [PATCH v3 2/4] perf record: bind the AIO user space buffers to nodes
  2019-01-09  9:37 ` [PATCH v3 2/4] perf record: bind the AIO user space buffers to nodes Alexey Budankov
@ 2019-01-09 15:58   ` Jiri Olsa
  2019-01-09 16:58     ` Alexey Budankov
  0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2019-01-09 15:58 UTC (permalink / raw)
  To: Alexey Budankov
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

On Wed, Jan 09, 2019 at 12:37:17PM +0300, Alexey Budankov wrote:
> 
> Allocate and bind AIO user space buffers to the memory nodes
> that mmap kernel buffers are bound to.
> 
> Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
> ---
> Changes in v3:
> - corrected code style issues
> - adjusted __aio_alloc,__aio_bind,__aio_free() implementation
> Changes in v2:
> - implemented perf_mmap__aio_alloc, perf_mmap__aio_free, perf_mmap__aio_bind 
>   and put HAVE_LIBNUMA_SUPPORT #ifdefs in there
> ---
>  tools/perf/util/mmap.c | 71 +++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 67 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
> index e68ba754a8e2..e5220790f1fb 100644
> --- a/tools/perf/util/mmap.c
> +++ b/tools/perf/util/mmap.c
> @@ -10,6 +10,9 @@
>  #include <sys/mman.h>
>  #include <inttypes.h>
>  #include <asm/bug.h>
> +#ifdef HAVE_LIBNUMA_SUPPORT
> +#include <numaif.h>
> +#endif
>  #include "debug.h"
>  #include "event.h"
>  #include "mmap.h"
> @@ -154,9 +157,68 @@ void __weak auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp __mayb
>  }
>  
>  #ifdef HAVE_AIO_SUPPORT
> +
> +#ifdef HAVE_LIBNUMA_SUPPORT
> +static int perf_mmap__aio_alloc(struct perf_mmap *map, int index)
> +{
> +	map->aio.data[index] = mmap(NULL, perf_mmap__mmap_len(map), PROT_READ|PROT_WRITE,
> +				    MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
> +	if (map->aio.data[index] == MAP_FAILED) {
> +		map->aio.data[index] = NULL;
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +static void perf_mmap__aio_free(struct perf_mmap *map, int index)
> +{
> +	if (map->aio.data[index]) {
> +		munmap(map->aio.data[index], perf_mmap__mmap_len(map));
> +		map->aio.data[index] = NULL;
> +	}
> +}
> +
> +static void perf_mmap__aio_bind(struct perf_mmap *map, int index, int cpu, int affinity)
> +{
> +	void *data;
> +	size_t mmap_len;
> +	unsigned long node_mask;
> +
> +	if (affinity != PERF_AFFINITY_SYS && cpu__max_node() > 1) {
> +		data = map->aio.data[index];
> +		mmap_len = perf_mmap__mmap_len(map);
> +		node_mask = 1UL << cpu__get_node(cpu);
> +		if (mbind(data, mmap_len, MPOL_BIND, &node_mask, 1, 0)) {
> +			pr_warn("failed to bind [%p-%p] to node %d\n",
> +				data, data + mmap_len, cpu__get_node(cpu));
> +		}

getting compilation fail in here:

  CC       util/mmap.o
util/mmap.c: In function ‘perf_mmap__aio_bind’:
util/mmap.c:193:4: error: implicit declaration of function ‘pr_warn’; did you mean ‘pr_warning’? [-Werror=implicit-function-declaration]
    pr_warn("failed to bind [%p-%p] to node %d\n",
    ^~~~~~~
    pr_warning
util/mmap.c:193:4: error: nested extern declaration of ‘pr_warn’ [-Werror=nested-externs]
cc1: all warnings being treated as errors
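
For reference, perf's util/debug.h provides pr_warning() rather than
pr_warn(), so presumably the call needs to become something like:

	pr_warning("failed to bind [%p-%p] to node %d\n",
		   data, data + mmap_len, cpu__get_node(cpu));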


jirka


* Re: [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems
  2019-01-09 14:41 ` [PATCH v3 0/4] Reduce NUMA related overhead in perf record profiling on large server systems Jiri Olsa
  2019-01-09 15:51   ` Jiri Olsa
@ 2019-01-09 16:11   ` Alexey Budankov
  1 sibling, 0 replies; 14+ messages in thread
From: Alexey Budankov @ 2019-01-09 16:11 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

Hi,

On 09.01.2019 17:41, Jiri Olsa wrote:
> On Wed, Jan 09, 2019 at 12:19:20PM +0300, Alexey Budankov wrote:
>>
<SNIP>
>> The patch set is generated for acme perf/core repository.
>>
>> ---
>> Alexey Budankov (4):
>>   perf record: allocate affinity masks
>>   perf record: bind the AIO user space buffers to nodes
>>   perf record: apply affinity masks when reading mmap buffers
>>   perf record: implement --affinity=node|cpu option
> 
> 
> hi,
> can't apply your code on latest Arnaldo's perf/core:
> 
> Applying: perf record: allocate affinity masks
> Applying: perf record: bind the AIO user space buffers to nodes
> Applying: perf record: apply affinity masks when reading mmap buffers
> Applying: perf record: implement --affinity=node|cpu option
> error: corrupt patch at line 62
> Patch failed at 0004 perf record: implement --affinity=node|cpu option
> Use 'git am --show-current-patch' to see the failed patch
> When you have resolved this problem, run "git am --continue".
> If you prefer to skip this patch, run "git am --skip" instead.
> To restore the original branch and stop patching, run "git am --abort".

Sorry about that.
Patch set update and resend is in progress.
The whole change on top of Arnaldo's perf/core tip follows for your convenience.

Thanks!
Alexey

---
 tools/perf/Documentation/perf-record.txt |  5 ++
 tools/perf/builtin-record.c              | 47 ++++++++++++++++-
 tools/perf/perf.h                        |  8 +++
 tools/perf/util/evlist.c                 | 10 ++--
 tools/perf/util/evlist.h                 |  2 +-
 tools/perf/util/mmap.c                   | 91 ++++++++++++++++++++++++++++++--
 tools/perf/util/mmap.h                   |  4 +-
 7 files changed, 157 insertions(+), 10 deletions(-)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index d232b13ea713..efb839784f32 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -440,6 +440,11 @@ Use <n> control blocks in asynchronous (Posix AIO) trace writing mode (default:
 Asynchronous mode is supported only when linking Perf tool with libc library
 providing implementation for Posix AIO API.
 
+--affinity=mode::
+Set affinity mask of trace reading thread according to the policy defined by 'mode' value:
+  node - thread affinity mask is set to NUMA node cpu mask of the processed mmap buffer
+  cpu  - thread affinity mask is set to cpu of the processed mmap buffer
+
 --all-kernel::
 Configure all used events to run in kernel space.
 
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 882285fb9f64..94a966ba9a6f 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -81,12 +81,17 @@ struct record {
 	bool			timestamp_boundary;
 	struct switch_output	switch_output;
 	unsigned long long	samples;
+	cpu_set_t		affinity_mask;
 };
 
 static volatile int auxtrace_record__snapshot_started;
 static DEFINE_TRIGGER(auxtrace_snapshot_trigger);
 static DEFINE_TRIGGER(switch_output_trigger);
 
+static const char *affinity_tags[PERF_AFFINITY_MAX] = {
+	"SYS", "NODE", "CPU"
+};
+
 static bool switch_output_signal(struct record *rec)
 {
 	return rec->switch_output.signal &&
@@ -531,9 +536,13 @@ static int record__mmap_evlist(struct record *rec,
 	struct record_opts *opts = &rec->opts;
 	char msg[512];
 
+	if (opts->affinity != PERF_AFFINITY_SYS)
+		cpu__setup_cpunode_map();
+
 	if (perf_evlist__mmap_ex(evlist, opts->mmap_pages,
 				 opts->auxtrace_mmap_pages,
-				 opts->auxtrace_snapshot_mode, opts->nr_cblocks) < 0) {
+				 opts->auxtrace_snapshot_mode,
+				 opts->nr_cblocks, opts->affinity) < 0) {
 		if (errno == EPERM) {
 			pr_err("Permission error mapping pages.\n"
 			       "Consider increasing "
@@ -722,6 +731,16 @@ static struct perf_event_header finished_round_event = {
 	.type = PERF_RECORD_FINISHED_ROUND,
 };
 
+static void record__adjust_affinity(struct record *rec, struct perf_mmap *map)
+{
+	if (rec->opts.affinity != PERF_AFFINITY_SYS &&
+	    !CPU_EQUAL(&rec->affinity_mask, &map->affinity_mask)) {
+		CPU_ZERO(&rec->affinity_mask);
+		CPU_OR(&rec->affinity_mask, &rec->affinity_mask, &map->affinity_mask);
+		sched_setaffinity(0, sizeof(rec->affinity_mask), &rec->affinity_mask);
+	}
+}
+
 static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evlist,
 				    bool overwrite)
 {
@@ -749,6 +768,7 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
 		struct perf_mmap *map = &maps[i];
 
 		if (map->base) {
+			record__adjust_affinity(rec, map);
 			if (!record__aio_enabled(rec)) {
 				if (perf_mmap__push(map, rec, record__pushfn) != 0) {
 					rc = -1;
@@ -1639,6 +1659,23 @@ static int parse_clockid(const struct option *opt, const char *str, int unset)
 	return -1;
 }
 
+static int record__parse_affinity(const struct option *opt, const char *str, int unset)
+{
+	struct record_opts *opts = (struct record_opts *)opt->value;
+
+	if (unset)
+		return 0;
+
+	if (str) {
+		if (!strcasecmp(str, "node"))
+			opts->affinity = PERF_AFFINITY_NODE;
+		else if (!strcasecmp(str, "cpu"))
+			opts->affinity = PERF_AFFINITY_CPU;
+	}
+
+	return 0;
+}
+
 static int record__parse_mmap_pages(const struct option *opt,
 				    const char *str,
 				    int unset __maybe_unused)
@@ -1946,6 +1983,9 @@ static struct option __record_options[] = {
 		     &nr_cblocks_default, "n", "Use <n> control blocks in asynchronous trace writing mode (default: 1, max: 4)",
 		     record__aio_parse),
 #endif
+	OPT_CALLBACK(0, "affinity", &record.opts, "node|cpu",
+		     "Set affinity mask of trace reading thread to NUMA node cpu mask or cpu of processed mmap buffer",
+		     record__parse_affinity),
 	OPT_END()
 };
 
@@ -1980,6 +2020,9 @@ int cmd_record(int argc, const char **argv)
 # undef REASON
 #endif
 
+	CPU_ZERO(&rec->affinity_mask);
+	rec->opts.affinity = PERF_AFFINITY_SYS;
+
 	rec->evlist = perf_evlist__new();
 	if (rec->evlist == NULL)
 		return -ENOMEM;
@@ -2143,6 +2186,8 @@ int cmd_record(int argc, const char **argv)
 	if (verbose > 0)
 		pr_info("nr_cblocks: %d\n", rec->opts.nr_cblocks);
 
+	pr_debug("affinity: %s\n", affinity_tags[rec->opts.affinity]);
+
 	err = __cmd_record(&record, argc, argv);
 out:
 	perf_evlist__delete(rec->evlist);
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index 388c6dd128b8..36d5cfe6362f 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -83,6 +83,14 @@ struct record_opts {
 	clockid_t    clockid;
 	u64          clockid_res_ns;
 	int	     nr_cblocks;
+	int	     affinity;
+};
+
+enum perf_affinity {
+	PERF_AFFINITY_SYS = 0,
+	PERF_AFFINITY_NODE,
+	PERF_AFFINITY_CPU,
+	PERF_AFFINITY_MAX
 };
 
 struct option;
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 8c902276d4b4..b6680f65ccc4 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1022,7 +1022,7 @@ int perf_evlist__parse_mmap_pages(const struct option *opt, const char *str,
  */
 int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 			 unsigned int auxtrace_pages,
-			 bool auxtrace_overwrite, int nr_cblocks)
+			 bool auxtrace_overwrite, int nr_cblocks, int affinity)
 {
 	struct perf_evsel *evsel;
 	const struct cpu_map *cpus = evlist->cpus;
@@ -1032,7 +1032,11 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 	 * Its value is decided by evsel's write_backward.
 	 * So &mp should not be passed through const pointer.
 	 */
-	struct mmap_params mp = { .nr_cblocks = nr_cblocks };
+	struct mmap_params mp = {
+		.nr_cblocks	= nr_cblocks,
+		.affinity	= affinity,
+		.cpu_map	= cpus
+	};
 
 	if (!evlist->mmap)
 		evlist->mmap = perf_evlist__alloc_mmap(evlist, false);
@@ -1064,7 +1068,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 
 int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages)
 {
-	return perf_evlist__mmap_ex(evlist, pages, 0, false, 0);
+	return perf_evlist__mmap_ex(evlist, pages, 0, false, 0, PERF_AFFINITY_SYS);
 }
 
 int perf_evlist__create_maps(struct perf_evlist *evlist, struct target *target)
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 868294491194..72728d7f4432 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -162,7 +162,7 @@ unsigned long perf_event_mlock_kb_in_pages(void);
 
 int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
 			 unsigned int auxtrace_pages,
-			 bool auxtrace_overwrite, int nr_cblocks);
+			 bool auxtrace_overwrite, int nr_cblocks, int affinity);
 int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages);
 void perf_evlist__munmap(struct perf_evlist *evlist);
 
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 8fc39311a30d..ee0230eed635 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -10,6 +10,9 @@
 #include <sys/mman.h>
 #include <inttypes.h>
 #include <asm/bug.h>
+#ifdef HAVE_LIBNUMA_SUPPORT
+#include <numaif.h>
+#endif
 #include "debug.h"
 #include "event.h"
 #include "mmap.h"
@@ -154,9 +157,68 @@ void __weak auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp __mayb
 }
 
 #ifdef HAVE_AIO_SUPPORT
+
+#ifdef HAVE_LIBNUMA_SUPPORT
+static int perf_mmap__aio_alloc(struct perf_mmap *map, int index)
+{
+	map->aio.data[index] = mmap(NULL, perf_mmap__mmap_len(map), PROT_READ|PROT_WRITE,
+				    MAP_PRIVATE|MAP_ANONYMOUS, 0, 0);
+	if (map->aio.data[index] == MAP_FAILED) {
+		map->aio.data[index] = NULL;
+		return -1;
+	}
+
+	return 0;
+}
+
+static void perf_mmap__aio_free(struct perf_mmap *map, int index)
+{
+	if (map->aio.data[index]) {
+		munmap(map->aio.data[index], perf_mmap__mmap_len(map));
+		map->aio.data[index] = NULL;
+	}
+}
+
+static void perf_mmap__aio_bind(struct perf_mmap *map, int index, int cpu, int affinity)
+{
+	void *data;
+	size_t mmap_len;
+	unsigned long node_mask;
+
+	if (affinity != PERF_AFFINITY_SYS && cpu__max_node() > 1) {
+		data = map->aio.data[index];
+		mmap_len = perf_mmap__mmap_len(map);
+		node_mask = 1UL << cpu__get_node(cpu);
+		if (mbind(data, mmap_len, MPOL_BIND, &node_mask, 1, 0)) {
+			pr_warn("failed to bind [%p-%p] to node %d\n",
+				data, data + mmap_len, cpu__get_node(cpu));
+		}
+	}
+}
+#else
+static int perf_mmap__aio_alloc(struct perf_mmap *map, int index)
+{
+	map->aio.data[index] = malloc(perf_mmap__mmap_len(map));
+	if (map->aio.data[index] == NULL)
+		return -1;
+
+	return 0;
+}
+
+static void perf_mmap__aio_free(struct perf_mmap *map, int index)
+{
+	zfree(&(map->aio.data[index]));
+}
+
+static void perf_mmap__aio_bind(struct perf_mmap *map __maybe_unused, int index __maybe_unused,
+		int cpu __maybe_unused, int affinity __maybe_unused)
+{
+}
+#endif
+
 static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
 {
-	int delta_max, i, prio;
+	int delta_max, i, prio, ret;
 
 	map->aio.nr_cblocks = mp->nr_cblocks;
 	if (map->aio.nr_cblocks) {
@@ -177,11 +239,12 @@ static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
 		}
 		delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
 		for (i = 0; i < map->aio.nr_cblocks; ++i) {
-			map->aio.data[i] = malloc(perf_mmap__mmap_len(map));
-			if (!map->aio.data[i]) {
+			ret = perf_mmap__aio_alloc(map, i);
+			if (ret == -1) {
 				pr_debug2("failed to allocate data buffer area, error %m");
 				return -1;
 			}
+			perf_mmap__aio_bind(map, i, map->cpu, mp->affinity);
 			/*
 			 * Use cblock.aio_fildes value different from -1
 			 * to denote started aio write operation on the
@@ -210,7 +273,7 @@ static void perf_mmap__aio_munmap(struct perf_mmap *map)
 	int i;
 
 	for (i = 0; i < map->aio.nr_cblocks; ++i)
-		zfree(&map->aio.data[i]);
+		perf_mmap__aio_free(map, i);
 	if (map->aio.data)
 		zfree(&map->aio.data);
 	zfree(&map->aio.cblocks);
@@ -314,6 +377,24 @@ void perf_mmap__munmap(struct perf_mmap *map)
 	auxtrace_mmap__munmap(&map->auxtrace_mmap);
 }
 
+static void perf_mmap__setup_affinity_mask(struct perf_mmap *map, struct mmap_params *mp)
+{
+	int c, cpu, nr_cpus, node;
+
+	CPU_ZERO(&map->affinity_mask);
+	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1) {
+		nr_cpus = cpu_map__nr(mp->cpu_map);
+		node = cpu__get_node(map->cpu);
+		for (c = 0; c < nr_cpus; c++) {
+			cpu = mp->cpu_map->map[c]; /* map c index to online cpu index */
+			if (cpu__get_node(cpu) == node)
+				CPU_SET(cpu, &map->affinity_mask);
+		}
+	} else if (mp->affinity == PERF_AFFINITY_CPU) {
+		CPU_SET(map->cpu, &map->affinity_mask);
+	}
+}
+
 int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int cpu)
 {
 	/*
@@ -343,6 +424,8 @@ int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int c
 	map->fd = fd;
 	map->cpu = cpu;
 
+	perf_mmap__setup_affinity_mask(map, mp);
+
 	if (auxtrace_mmap__mmap(&map->auxtrace_mmap,
 				&mp->auxtrace_mp, map->base, fd))
 		return -1;
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index aeb6942fdb00..b3f724fad22e 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -38,6 +38,7 @@ struct perf_mmap {
 		int		 nr_cblocks;
 	} aio;
 #endif
+	cpu_set_t	affinity_mask;
 };
 
 /*
@@ -69,8 +70,9 @@ enum bkw_mmap_state {
 };
 
 struct mmap_params {
-	int			    prot, mask, nr_cblocks;
+	int			    prot, mask, nr_cblocks, affinity;
 	struct auxtrace_mmap_params auxtrace_mp;
+	const struct cpu_map	    *cpu_map;
 };
 
 int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int cpu);
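
The call site that consumes the new affinity_mask field lives in
builtin-record.c and is not part of the hunks above. For illustration
only, a hypothetical sketch of how such a per-buffer mask is typically
applied to the trace reading thread before a buffer is drained
(CPU_EQUAL() and sched_setaffinity() are standard glibc/Linux APIs;
the function and variable names are assumptions, not code from the
patch set):

	#define _GNU_SOURCE
	#include <sched.h>

	static cpu_set_t prev_mask; /* mask the thread last switched to */

	static void adjust_affinity(struct perf_mmap *map)
	{
		/* switch only when the target mask actually changes */
		if (CPU_EQUAL(&prev_mask, &map->affinity_mask))
			return;
		prev_mask = map->affinity_mask;
		/* pid 0 addresses the calling thread */
		if (sched_setaffinity(0, sizeof(prev_mask), &prev_mask))
			pr_warning("failed to set affinity mask\n");
	}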

> 
> jirka
> 


* Re: [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers
  2019-01-09  9:38 ` [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers Alexey Budankov
@ 2019-01-09 16:53   ` Jiri Olsa
  2019-01-10  9:41     ` Alexey Budankov
  0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2019-01-09 16:53 UTC (permalink / raw)
  To: Alexey Budankov
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

On Wed, Jan 09, 2019 at 12:38:23PM +0300, Alexey Budankov wrote:

SNIP

> diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
> index e5220790f1fb..ee0230eed635 100644
> --- a/tools/perf/util/mmap.c
> +++ b/tools/perf/util/mmap.c
> @@ -377,6 +377,24 @@ void perf_mmap__munmap(struct perf_mmap *map)
>  	auxtrace_mmap__munmap(&map->auxtrace_mmap);
>  }
>  
> +static void perf_mmap__setup_affinity_mask(struct perf_mmap *map, struct mmap_params *mp)
> +{
> +	int c, cpu, nr_cpus, node;
> +
> +	CPU_ZERO(&map->affinity_mask);
> +	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1) {
> +		nr_cpus = cpu_map__nr(mp->cpu_map);
> +		node = cpu__get_node(map->cpu);
> +		for (c = 0; c < nr_cpus; c++) {
> +			cpu = mp->cpu_map->map[c]; /* map c index to online cpu index */
> +			if (cpu__get_node(cpu) == node)
> +				CPU_SET(cpu, &map->affinity_mask);

should we do that from all possible cpus the task (perf record)
can run on, instead of mp->cpu_map, which might be only a subset
(-C ... option)?

also node -> cpu_map is a static configuration; we could prepare
this map ahead (like cpunode_map) and just assign it here
based on node index
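
A hypothetical sketch of that static configuration (editorial
illustration, not code from this patch set; it assumes perf's existing
cpu__max_node(), cpu__max_cpu() and cpu__get_node() helpers, while
node_masks and build_node_masks() are made-up names):

	static cpu_set_t *node_masks;

	static int build_node_masks(void)
	{
		int cpu, node, nr_nodes = cpu__max_node();

		/*
		 * One affinity mask per NUMA node, computed once at
		 * startup; calloc() leaves every mask zeroed, i.e.
		 * effectively CPU_ZERO'ed.
		 */
		node_masks = calloc(nr_nodes, sizeof(cpu_set_t));
		if (!node_masks)
			return -ENOMEM;
		for (cpu = 0; cpu < cpu__max_cpu(); cpu++) {
			node = cpu__get_node(cpu);
			if (node >= 0 && node < nr_nodes)
				CPU_SET(cpu, &node_masks[node]);
		}
		return 0;
	}

A buffer mapped for cpu c could then simply take
map->affinity_mask = node_masks[cpu__get_node(c)].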

thanks,
jirka


* Re: [PATCH v3 2/4] perf record: bind the AIO user space buffers to nodes
  2019-01-09 15:58   ` Jiri Olsa
@ 2019-01-09 16:58     ` Alexey Budankov
  0 siblings, 0 replies; 14+ messages in thread
From: Alexey Budankov @ 2019-01-09 16:58 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

Hi,
On 09.01.2019 18:58, Jiri Olsa wrote:
> On Wed, Jan 09, 2019 at 12:37:17PM +0300, Alexey Budankov wrote:
>>
>> Allocate and bind AIO user space buffers to the memory nodes
>> that mmap kernel buffers are bound to.
<SNIP>
>>
>> Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
<SNIP>
>> +static void perf_mmap__aio_bind(struct perf_mmap *map, int index, int cpu, int affinity)
>> +{
>> +	void *data;
>> +	size_t mmap_len;
>> +	unsigned long node_mask;
>> +
>> +	if (affinity != PERF_AFFINITY_SYS && cpu__max_node() > 1) {
>> +		data = map->aio.data[index];
>> +		mmap_len = perf_mmap__mmap_len(map);
>> +		node_mask = 1UL << cpu__get_node(cpu);
>> +		if (mbind(data, mmap_len, MPOL_BIND, &node_mask, 1, 0)) {
>> +			pr_warn("failed to bind [%p-%p] to node %d\n",
>> +				data, data + mmap_len, cpu__get_node(cpu));
>> +		}
> 
> getting compilation fail in here:
> 
>   CC       util/mmap.o
> util/mmap.c: In function ‘perf_mmap__aio_bind’:
> util/mmap.c:193:4: error: implicit declaration of function ‘pr_warn’; did you mean ‘pr_warning’? [-Werror=implicit-function-declaration]
>     pr_warn("failed to bind [%p-%p] to node %d\n",
>     ^~~~~~~
>     pr_warning
> util/mmap.c:193:4: error: nested extern declaration of ‘pr_warn’ [-Werror=nested-externs]
> cc1: all warnings being treated as errors

Yes, it should be pr_warning().
This hunk was missed when I was preparing the patches - sorry.
The fix will be included in v4.
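
For reference, the corrected hunk should then read the same as above,
with the existing pr_warning() helper substituted for the non-existent
pr_warn():

	if (mbind(data, mmap_len, MPOL_BIND, &node_mask, 1, 0)) {
		pr_warning("failed to bind [%p-%p] to node %d\n",
			   data, data + mmap_len, cpu__get_node(cpu));
	}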

Thanks!
Alexey

> 
> 
> jirka
> 


* Re: [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers
  2019-01-09 16:53   ` Jiri Olsa
@ 2019-01-10  9:41     ` Alexey Budankov
  2019-01-10  9:54       ` Jiri Olsa
  0 siblings, 1 reply; 14+ messages in thread
From: Alexey Budankov @ 2019-01-10  9:41 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

On 09.01.2019 19:53, Jiri Olsa wrote:
> On Wed, Jan 09, 2019 at 12:38:23PM +0300, Alexey Budankov wrote:
> 
> SNIP
> 
>> diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
>> index e5220790f1fb..ee0230eed635 100644
>> --- a/tools/perf/util/mmap.c
>> +++ b/tools/perf/util/mmap.c
>> @@ -377,6 +377,24 @@ void perf_mmap__munmap(struct perf_mmap *map)
>>  	auxtrace_mmap__munmap(&map->auxtrace_mmap);
>>  }
>>  
>> +static void perf_mmap__setup_affinity_mask(struct perf_mmap *map, struct mmap_params *mp)
>> +{
>> +	int c, cpu, nr_cpus, node;
>> +
>> +	CPU_ZERO(&map->affinity_mask);
>> +	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1) {
>> +		nr_cpus = cpu_map__nr(mp->cpu_map);
>> +		node = cpu__get_node(map->cpu);
>> +		for (c = 0; c < nr_cpus; c++) {
>> +			cpu = mp->cpu_map->map[c]; /* map c index to online cpu index */
>> +			if (cpu__get_node(cpu) == node)
>> +				CPU_SET(cpu, &map->affinity_mask);
> 
> should we do that from all possible cpus the task (perf record)
> can run on, instead of mp->cpu_map, which might be only a subset
> (-C ... option)?

That is how it should be, and because mp->cpu_map depends on the -C
option value in this version of the patch set, it needs to be
corrected, possibly like this:

struct mmap_params mp = {
		.nr_cblocks	= nr_cblocks,
		.affinity	= affinity,
		.cpu_map	= cpu_map__new(NULL) /* builds struct cpu_map from /sys/devices/system/cpu/online */
	}; 
and 
	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1 && mp->cpu_map)

Thanks!

> 
> also node -> cpu_map is a static configuration; we could prepare
> this map ahead (like cpunode_map) and just assign it here
> based on node index

It makes sense, and either way is possible. However, the static
configuration looks a bit trickier because it incurs additional
duplication of mask objects, while the conversion from struct cpu_map
to cpu_set_t remains the same.

Thanks,
Alexey

> 
> thanks,
> jirka
> 


* Re: [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers
  2019-01-10  9:41     ` Alexey Budankov
@ 2019-01-10  9:54       ` Jiri Olsa
  2019-01-10 10:19         ` Alexey Budankov
  0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2019-01-10  9:54 UTC (permalink / raw)
  To: Alexey Budankov
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

On Thu, Jan 10, 2019 at 12:41:55PM +0300, Alexey Budankov wrote:
> On 09.01.2019 19:53, Jiri Olsa wrote:
> > On Wed, Jan 09, 2019 at 12:38:23PM +0300, Alexey Budankov wrote:
> > 
> > SNIP
> > 
> >> diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
> >> index e5220790f1fb..ee0230eed635 100644
> >> --- a/tools/perf/util/mmap.c
> >> +++ b/tools/perf/util/mmap.c
> >> @@ -377,6 +377,24 @@ void perf_mmap__munmap(struct perf_mmap *map)
> >>  	auxtrace_mmap__munmap(&map->auxtrace_mmap);
> >>  }
> >>  
> >> +static void perf_mmap__setup_affinity_mask(struct perf_mmap *map, struct mmap_params *mp)
> >> +{
> >> +	int c, cpu, nr_cpus, node;
> >> +
> >> +	CPU_ZERO(&map->affinity_mask);
> >> +	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1) {
> >> +		nr_cpus = cpu_map__nr(mp->cpu_map);
> >> +		node = cpu__get_node(map->cpu);
> >> +		for (c = 0; c < nr_cpus; c++) {
> >> +			cpu = mp->cpu_map->map[c]; /* map c index to online cpu index */
> >> +			if (cpu__get_node(cpu) == node)
> >> +				CPU_SET(cpu, &map->affinity_mask);
> > 
> > should we do that from all possible cpus the task (perf record)
> > can run on, instead of mp->cpu_map, which might be only a subset
> > (-C ... option)?
> 
> That is how it should be, and because mp->cpu_map depends on the -C
> option value in this version of the patch set, it needs to be
> corrected, possibly like this:
> 
> struct mmap_params mp = {
> 		.nr_cblocks	= nr_cblocks,
> 		.affinity	= affinity,
> 		.cpu_map	= cpu_map__new(NULL) /* builds struct cpu_map from /sys/devices/system/cpu/online */
> 	}; 
> and 
> 	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1 && mp->cpu_map)
> 
> Thanks!
> 
> > 
> > also node -> cpu_map is a static configuration; we could prepare
> > this map ahead (like cpunode_map) and just assign it here
> > based on node index
> 
> It makes sense, and either way is possible. However, the static
> configuration looks a bit trickier because it incurs additional
> duplication of mask objects, while the conversion from struct cpu_map
> to cpu_set_t remains the same.

ok, please at least put that node mask creation into separate function

thanks,
jirka


* Re: [PATCH v3 3/4] perf record: apply affinity masks when reading mmap buffers
  2019-01-10  9:54       ` Jiri Olsa
@ 2019-01-10 10:19         ` Alexey Budankov
  0 siblings, 0 replies; 14+ messages in thread
From: Alexey Budankov @ 2019-01-10 10:19 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra,
	Namhyung Kim, Alexander Shishkin, Andi Kleen, linux-kernel

On 10.01.2019 12:54, Jiri Olsa wrote:
> On Thu, Jan 10, 2019 at 12:41:55PM +0300, Alexey Budankov wrote:
>> On 09.01.2019 19:53, Jiri Olsa wrote:
>>> On Wed, Jan 09, 2019 at 12:38:23PM +0300, Alexey Budankov wrote:
>>>
>>> SNIP
>>>
>>>> diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
>>>> index e5220790f1fb..ee0230eed635 100644
>>>> --- a/tools/perf/util/mmap.c
>>>> +++ b/tools/perf/util/mmap.c
>>>> @@ -377,6 +377,24 @@ void perf_mmap__munmap(struct perf_mmap *map)
>>>>  	auxtrace_mmap__munmap(&map->auxtrace_mmap);
>>>>  }
>>>>  
>>>> +static void perf_mmap__setup_affinity_mask(struct perf_mmap *map, struct mmap_params *mp)
>>>> +{
>>>> +	int c, cpu, nr_cpus, node;
>>>> +
>>>> +	CPU_ZERO(&map->affinity_mask);
>>>> +	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1) {
>>>> +		nr_cpus = cpu_map__nr(mp->cpu_map);
>>>> +		node = cpu__get_node(map->cpu);
>>>> +		for (c = 0; c < nr_cpus; c++) {
>>>> +			cpu = mp->cpu_map->map[c]; /* map c index to online cpu index */
>>>> +			if (cpu__get_node(cpu) == node)
>>>> +				CPU_SET(cpu, &map->affinity_mask);
>>>
>>> should we do that from all possible cpus the task (perf record)
>>> can run on, instead of mp->cpu_map, which might be only a subset
>>> (-C ... option)?
>>
>> That is how it should be, and because mp->cpu_map depends on the -C
>> option value in this version of the patch set, it needs to be
>> corrected, possibly like this:
>>
>> struct mmap_params mp = {
>> 		.nr_cblocks	= nr_cblocks,
>> 		.affinity	= affinity,
>> 		.cpu_map	= cpu_map__new(NULL) /* builds struct cpu_map from /sys/devices/system/cpu/online */
>> 	}; 
>> and 
>> 	if (mp->affinity == PERF_AFFINITY_NODE && cpu__max_node() > 1 && mp->cpu_map)
>>
>> Thanks!
>>
>>>
>>> also node -> cpu_map is a static configuration; we could prepare
>>> this map ahead (like cpunode_map) and just assign it here
>>> based on node index
>>
>> It makes sense, and either way is possible. However, the static
>> configuration looks a bit trickier because it incurs additional
>> duplication of mask objects, while the conversion from struct cpu_map
>> to cpu_set_t remains the same.
> 
> ok, please at least put that node mask creation into separate function

Will do like this:

static void build_node_mask(const struct cpu_map *cpumap, int node, cpu_set_t *mask)
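
For illustration, a minimal sketch of how that helper could look,
simply lifting the node loop out of perf_mmap__setup_affinity_mask
(the actual v4 implementation may differ):

	static void build_node_mask(const struct cpu_map *cpumap, int node, cpu_set_t *mask)
	{
		int c, cpu, nr_cpus = cpu_map__nr(cpumap);

		/* caller is expected to CPU_ZERO() the mask beforehand */
		for (c = 0; c < nr_cpus; c++) {
			cpu = cpumap->map[c]; /* map c index to online cpu index */
			if (cpu__get_node(cpu) == node)
				CPU_SET(cpu, mask);
		}
	}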

Thanks,
Alexey

> 
> thanks,
> jirka
> 


