* perf PMU support for Haswell v7
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo; +Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung

[Updated version for the latest master tree and fixes.  See end for details.
All feedback addressed. Ready for merging.]

This adds perf PMU support for the upcoming Haswell core. The patchkit
is fairly large, mainly due to various enhancements for TSX. TSX tuning
relies heavily on the PMU, so I tried hard to make all its facilities
easily available. In addition it also has some other enhancements.

This includes changes to the core perf code, to the x86-specific part,
to the perf userland tools, and to KVM.

Available at 
git://git.kernel.org/pub/scm/linux/kernel/ak/linux-misc.git hsw/pmu3

High level overview:

- Basic Haswell PMU support
- Easy high level TSX measurement in perf stat -T
- Transaction events and attributes implemented with sysfs enumeration
- Export arch perfmon events in sysfs 
- Generic weighted profiling for memory latency and transaction abort costs.
- Support for address profiling
- Support for filtering events inside/outside transactions
- KVM support to do this from guests
- Support for filtering/displaying transaction abort types based on 
PEBS information
- LBR support for transactions

For more details on the Haswell PMU please see the SDM. For more details on TSX
please see http://halobates.de/adding-lock-elision-to-linux.pdf

Some of the added features could be supported on older CPUs too. I plan
to do this, but in separate patches.
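
To give a flavor of the user-visible interface, usage looks roughly like
this once the series is applied (a sketch; exact event names come from
the sysfs enumeration and may vary by system):

    # High level TSX statistics
    perf stat -T -a sleep 1
    # Count only instructions retired inside transactions
    perf stat -e cpu/instructions,intx=1/ ./workload
    # Sample only branches executed inside a transaction
    perf record -j any,in_tx ./workload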

v2: Removed generic transaction events and qualifiers and use sysfs
enumeration. Also export arch perfmon, so that the qualifiers work.
Fixed various issues this exposed. Don't use a special macro for the
TSX constraints anymore. Address other review feedback.
Added pdir event in sysfs.

v3: Fix various bugs and address review comments.
tx-aborts instead of cpu/tx-aborts/ works now (with some limitations)
cpu/instructions,intx=1/ works now

v4:
Addressed all review feedback (I hope). See changelog in individual patches.
KVM support now works again with more changes.
Forbid some more flag combinations that don't work well.

v5:
Rebased on latest perf/core. New method for sysfs events.
Obsolete patches dropped. Added one patch from Stephane.
Fixed generic aliases inside cpu//
Improved transaction flags decoding
Addressed all review feedback (except for two minor items in
perf tools from Namhyung)

v6:
Fix WERROR=1 build with latest fixes.
Address KVM feedback. 
Improve transaction flags display.

v7:
Rebase to 3.8-rc3.
Some minor fixes based on feedback.
Fix for spurious NMI added.
Removed debug patch for spurious NMIs.

-Andi


* [PATCH 01/29] perf, x86: Add PEBSv2 record support
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add support for the v2 PEBS format. It has a superset of the v1 PEBS
fields, but a longer record, so we need to adjust the code paths.

The main advantage is the new "EventingRip" field, which directly
reports the instruction that caused the event rather than the one
after it. So with precise == 2 we use it directly and don't need to
use the LBRs and walk basic blocks. This lowers the overhead
significantly.

Some other features are added in later patches.
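
For reference, the higher precision is requested from userspace through
perf_event_attr.precise_ip; a minimal sketch (hypothetical helper, error
handling omitted):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int open_precise_cycles(void)
    {
            struct perf_event_attr attr = {
                    .type          = PERF_TYPE_HARDWARE,
                    .size          = sizeof(attr),
                    .config        = PERF_COUNT_HW_CPU_CYCLES,
                    .sample_period = 100003,
                    .sample_type   = PERF_SAMPLE_IP,
                    .precise_ip    = 2, /* exact IP; PEBSv2 supplies it via EventingRip */
            };

            /* perf_event_open has no glibc wrapper */
            return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }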

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event.c          |    2 +-
 arch/x86/kernel/cpu/perf_event_intel_ds.c |  101 ++++++++++++++++++++++-------
 2 files changed, 79 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 4428fd1..ec3c549 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -403,7 +403,7 @@ int x86_pmu_hw_config(struct perf_event *event)
 		 * check that PEBS LBR correction does not conflict with
 		 * whatever the user is asking with attr->branch_sample_type
 		 */
-		if (event->attr.precise_ip > 1) {
+		if (event->attr.precise_ip > 1 && x86_pmu.intel_cap.pebs_format < 2) {
 			u64 *br_type = &event->attr.branch_sample_type;
 
 			if (has_branch_stack(event)) {
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 826054a..9d0dae0 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -41,6 +41,12 @@ struct pebs_record_nhm {
 	u64 status, dla, dse, lat;
 };
 
+struct pebs_record_v2 {
+	struct pebs_record_nhm nhm;
+	u64 eventingrip;
+	u64 tsx_tuning;
+};
+
 void init_debug_store_on_cpu(int cpu)
 {
 	struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
@@ -559,8 +565,7 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
 {
 	/*
 	 * We cast to pebs_record_core since that is a subset of
-	 * both formats and we don't use the other fields in this
-	 * routine.
+	 * both formats.
 	 */
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
 	struct pebs_record_core *pebs = __pebs;
@@ -588,7 +593,10 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
 	regs.bp = pebs->bp;
 	regs.sp = pebs->sp;
 
-	if (event->attr.precise_ip > 1 && intel_pmu_pebs_fixup_ip(&regs))
+	if (event->attr.precise_ip > 1 && x86_pmu.intel_cap.pebs_format >= 2) {
+		regs.ip = ((struct pebs_record_v2 *)pebs)->eventingrip;
+		regs.flags |= PERF_EFLAGS_EXACT;
+	} else if (event->attr.precise_ip > 1 && intel_pmu_pebs_fixup_ip(&regs))
 		regs.flags |= PERF_EFLAGS_EXACT;
 	else
 		regs.flags &= ~PERF_EFLAGS_EXACT;
@@ -641,35 +649,21 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs)
 	__intel_pmu_pebs_event(event, iregs, at);
 }
 
-static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
+static void intel_pmu_drain_pebs_common(struct pt_regs *iregs, void *at,
+					void *top)
 {
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
 	struct debug_store *ds = cpuc->ds;
-	struct pebs_record_nhm *at, *top;
 	struct perf_event *event = NULL;
 	u64 status = 0;
-	int bit, n;
-
-	if (!x86_pmu.pebs_active)
-		return;
-
-	at  = (struct pebs_record_nhm *)(unsigned long)ds->pebs_buffer_base;
-	top = (struct pebs_record_nhm *)(unsigned long)ds->pebs_index;
+	int bit;
 
 	ds->pebs_index = ds->pebs_buffer_base;
 
-	n = top - at;
-	if (n <= 0)
-		return;
+	for ( ; at < top; at += x86_pmu.pebs_record_size) {
+		struct pebs_record_nhm *p = at;
 
-	/*
-	 * Should not happen, we program the threshold at 1 and do not
-	 * set a reset value.
-	 */
-	WARN_ONCE(n > x86_pmu.max_pebs_events, "Unexpected number of pebs records %d\n", n);
-
-	for ( ; at < top; at++) {
-		for_each_set_bit(bit, (unsigned long *)&at->status, x86_pmu.max_pebs_events) {
+		for_each_set_bit(bit, (unsigned long *)&p->status, x86_pmu.max_pebs_events) {
 			event = cpuc->events[bit];
 			if (!test_bit(bit, cpuc->active_mask))
 				continue;
@@ -692,6 +686,61 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 	}
 }
 
+static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	struct debug_store *ds = cpuc->ds;
+	struct pebs_record_nhm *at, *top;
+	int n;
+
+	if (!x86_pmu.pebs_active)
+		return;
+
+	at  = (struct pebs_record_nhm *)(unsigned long)ds->pebs_buffer_base;
+	top = (struct pebs_record_nhm *)(unsigned long)ds->pebs_index;
+
+	ds->pebs_index = ds->pebs_buffer_base;
+
+	n = top - at;
+	if (n <= 0)
+		return;
+
+	/*
+	 * Should not happen, we program the threshold at 1 and do not
+	 * set a reset value.
+	 */
+	WARN_ONCE(n > x86_pmu.max_pebs_events,
+		  "Unexpected number of pebs records %d\n", n);
+
+	return intel_pmu_drain_pebs_common(iregs, at, top);
+}
+
+static void intel_pmu_drain_pebs_v2(struct pt_regs *iregs)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	struct debug_store *ds = cpuc->ds;
+	struct pebs_record_v2 *at, *top;
+	int n;
+
+	if (!x86_pmu.pebs_active)
+		return;
+
+	at  = (struct pebs_record_v2 *)(unsigned long)ds->pebs_buffer_base;
+	top = (struct pebs_record_v2 *)(unsigned long)ds->pebs_index;
+
+	n = top - at;
+	if (n <= 0)
+		return;
+	/*
+	 * Should not happen, we program the threshold at 1 and do not
+	 * set a reset value.
+	 */
+	WARN_ONCE(n > x86_pmu.max_pebs_events,
+		  "Unexpected number of pebs records %d\n", n);
+
+	return intel_pmu_drain_pebs_common(iregs, at, top);
+}
+
 /*
  * BTS, PEBS probe and setup
  */
@@ -723,6 +772,12 @@ void intel_ds_init(void)
 			x86_pmu.drain_pebs = intel_pmu_drain_pebs_nhm;
 			break;
 
+		case 2:
+			printk(KERN_CONT "PEBS fmt2%c, ", pebs_type);
+			x86_pmu.pebs_record_size = sizeof(struct pebs_record_v2);
+			x86_pmu.drain_pebs = intel_pmu_drain_pebs_v2;
+			break;
+
 		default:
 			printk(KERN_CONT "no PEBS fmt%d%c, ", format, pebs_type);
 			x86_pmu.pebs = 0;
-- 
1.7.7.6



* [PATCH 02/29] perf, x86: Basic Haswell PMU support v2
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add basic Haswell PMU support.

Similar to SandyBridge, but with a few new events. Further
differences are handled in follow-on patches.

There are some new counter flags that need to be prevented
from being set on fixed counters.

Contains fixes from Stephane Eranian.

v2: Folded TSX bits into standard FIXED_EVENT_CONSTRAINTS
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/include/asm/perf_event.h      |    3 +++
 arch/x86/kernel/cpu/perf_event.h       |    5 ++++-
 arch/x86/kernel/cpu/perf_event_intel.c |   29 +++++++++++++++++++++++++++++
 3 files changed, 36 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 4fabcdf..4003bb6 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -29,6 +29,9 @@
 #define ARCH_PERFMON_EVENTSEL_INV			(1ULL << 23)
 #define ARCH_PERFMON_EVENTSEL_CMASK			0xFF000000ULL
 
+#define HSW_INTX					(1ULL << 32)
+#define HSW_INTX_CHECKPOINTED				(1ULL << 33)
+
 #define AMD_PERFMON_EVENTSEL_GUESTONLY			(1ULL << 40)
 #define AMD_PERFMON_EVENTSEL_HOSTONLY			(1ULL << 41)
 
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 115c1ea..8941899 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -219,11 +219,14 @@ struct cpu_hw_events {
  *  - inv
  *  - edge
  *  - cnt-mask
+ *  - intx
+ *  - intx_cp
  *  The other filters are supported by fixed counters.
  *  The any-thread option is supported starting with v3.
  */
+#define FIXED_EVENT_FLAGS (X86_RAW_EVENT_MASK|HSW_INTX|HSW_INTX_CHECKPOINTED)
 #define FIXED_EVENT_CONSTRAINT(c, n)	\
-	EVENT_CONSTRAINT(c, (1ULL << (32+n)), X86_RAW_EVENT_MASK)
+	EVENT_CONSTRAINT(c, (1ULL << (32+n)), FIXED_EVENT_FLAGS)
 
 /*
  * Constraint on the Event code + UMask
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 93b9e11..3a08534 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -133,6 +133,17 @@ static struct extra_reg intel_snb_extra_regs[] __read_mostly = {
 	EVENT_EXTRA_END
 };
 
+static struct event_constraint intel_hsw_event_constraints[] =
+{
+	FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+	FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
+	FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
+	INTEL_EVENT_CONSTRAINT(0x48, 0x4), /* L1D_PEND_MISS.PENDING */
+	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
+	INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
+	EVENT_CONSTRAINT_END
+};
+
 static u64 intel_pmu_event_map(int hw_event)
 {
 	return intel_perfmon_event_map[hw_event];
@@ -2107,6 +2118,24 @@ __init int intel_pmu_init(void)
 		break;
 
 
+	case 60: /* Haswell Client */
+	case 70:
+	case 71:
+		memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
+		       sizeof(hw_cache_event_ids));
+
+		intel_pmu_lbr_init_nhm();
+
+		x86_pmu.event_constraints = intel_hsw_event_constraints;
+
+		x86_pmu.extra_regs = intel_snb_extra_regs;
+		/* all extra regs are per-cpu when HT is on */
+		x86_pmu.er_flags |= ERF_HAS_RSP_1;
+		x86_pmu.er_flags |= ERF_NO_HT_SHARING;
+
+		pr_cont("Haswell events, ");
+		break;
+
 	default:
 		switch (x86_pmu.version) {
 		case 1:
-- 
1.7.7.6



* [PATCH 03/29] perf, x86: Basic Haswell PEBS support v3
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add basic PEBS support for Haswell.
The constraints are similar to SandyBridge with a few new events.

v2: Readd missing pebs_aliases
v3: Readd missing hunk. Fix some constraints.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event.h          |    2 ++
 arch/x86/kernel/cpu/perf_event_intel.c    |    6 ++++--
 arch/x86/kernel/cpu/perf_event_intel_ds.c |   29 +++++++++++++++++++++++++++++
 3 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 8941899..1567b0d 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -596,6 +596,8 @@ extern struct event_constraint intel_snb_pebs_event_constraints[];
 
 extern struct event_constraint intel_ivb_pebs_event_constraints[];
 
+extern struct event_constraint intel_hsw_pebs_event_constraints[];
+
 struct event_constraint *intel_pebs_constraints(struct perf_event *event);
 
 void intel_pmu_pebs_enable(struct perf_event *event);
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 3a08534..634f639 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -826,7 +826,8 @@ static inline bool intel_pmu_needs_lbr_smpl(struct perf_event *event)
 		return true;
 
 	/* implicit branch sampling to correct PEBS skid */
-	if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1)
+	if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1 &&
+	    x86_pmu.intel_cap.pebs_format < 2)
 		return true;
 
 	return false;
@@ -2127,8 +2128,9 @@ __init int intel_pmu_init(void)
 		intel_pmu_lbr_init_nhm();
 
 		x86_pmu.event_constraints = intel_hsw_event_constraints;
-
+		x86_pmu.pebs_constraints = intel_hsw_pebs_event_constraints;
 		x86_pmu.extra_regs = intel_snb_extra_regs;
+		x86_pmu.pebs_aliases = intel_pebs_aliases_snb;
 		/* all extra regs are per-cpu when HT is on */
 		x86_pmu.er_flags |= ERF_HAS_RSP_1;
 		x86_pmu.er_flags |= ERF_NO_HT_SHARING;
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 9d0dae0..16d7c58 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -427,6 +427,35 @@ struct event_constraint intel_ivb_pebs_event_constraints[] = {
         EVENT_CONSTRAINT_END
 };
 
+struct event_constraint intel_hsw_pebs_event_constraints[] = {
+	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
+	INTEL_UEVENT_CONSTRAINT(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
+	INTEL_UEVENT_CONSTRAINT(0x02c2, 0xf), /* UOPS_RETIRED.RETIRE_SLOTS */
+	INTEL_EVENT_CONSTRAINT(0xc4, 0xf),    /* BR_INST_RETIRED.* */
+	INTEL_UEVENT_CONSTRAINT(0x01c5, 0xf), /* BR_MISP_RETIRED.CONDITIONAL */
+	INTEL_UEVENT_CONSTRAINT(0x04c5, 0xf), /* BR_MISP_RETIRED.ALL_BRANCHES */
+	INTEL_UEVENT_CONSTRAINT(0x20c5, 0xf), /* BR_MISP_RETIRED.NEAR_TAKEN */
+	INTEL_EVENT_CONSTRAINT(0xcd, 0x8),    /* MEM_TRANS_RETIRED.* */
+	INTEL_UEVENT_CONSTRAINT(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
+	INTEL_UEVENT_CONSTRAINT(0x12d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_STORES */
+	INTEL_UEVENT_CONSTRAINT(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
+	INTEL_UEVENT_CONSTRAINT(0x41d0, 0xf), /* MEM_UOPS_RETIRED.SPLIT_LOADS */
+	INTEL_UEVENT_CONSTRAINT(0x42d0, 0xf), /* MEM_UOPS_RETIRED.SPLIT_STORES */
+	INTEL_UEVENT_CONSTRAINT(0x81d0, 0xf), /* MEM_UOPS_RETIRED.ALL_LOADS */
+	INTEL_UEVENT_CONSTRAINT(0x82d0, 0xf), /* MEM_UOPS_RETIRED.ALL_STORES */
+	INTEL_UEVENT_CONSTRAINT(0x01d1, 0xf), /* MEM_LOAD_UOPS_RETIRED.L1_HIT */
+	INTEL_UEVENT_CONSTRAINT(0x02d1, 0xf), /* MEM_LOAD_UOPS_RETIRED.L2_HIT */
+	INTEL_UEVENT_CONSTRAINT(0x04d1, 0xf), /* MEM_LOAD_UOPS_RETIRED.L3_HIT */
+	INTEL_UEVENT_CONSTRAINT(0x40d1, 0xf), /* MEM_LOAD_UOPS_RETIRED.HIT_LFB */
+	INTEL_UEVENT_CONSTRAINT(0x01d2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS */
+	INTEL_UEVENT_CONSTRAINT(0x02d2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT */
+	INTEL_UEVENT_CONSTRAINT(0x02d3, 0xf), /* MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM */
+	INTEL_UEVENT_CONSTRAINT(0x04c8, 0xf), /* HLE_RETIRED.Abort */
+	INTEL_UEVENT_CONSTRAINT(0x04c9, 0xf), /* RTM_RETIRED.Abort */
+
+	EVENT_CONSTRAINT_END
+};
+
 struct event_constraint *intel_pebs_constraints(struct perf_event *event)
 {
 	struct event_constraint *c;
-- 
1.7.7.6



* [PATCH 04/29] perf, x86: Support the TSX intx/intx_cp qualifiers v2
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Implement the TSX transaction and checkpointed-transaction qualifiers for
Haswell. This allows, for example, profiling the number of cycles spent
in transactions.

The checkpointed qualifier requires forcing the event to counter 2;
implement this with a custom constraint for Haswell.

Also add sysfs format attributes for intx/intx_cp.
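
Since the qualifiers live in config bits 32 and 33 (the sysfs formats
added below), a raw event can set them directly; a sketch, equivalent
to cpu/event=0x3c,intx=1/:

    /* count cycles only while inside a transaction */
    struct perf_event_attr attr = {
            .type   = PERF_TYPE_RAW,
            .size   = sizeof(attr),
            .config = 0x3c | (1ULL << 32),  /* bit 32 = intx */
    };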

[Updated from earlier version that used generic attributes, now does
raw + sysfs formats]
v2: Moved bad hunk. Forbid some bad combinations.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c |   61 ++++++++++++++++++++++++++++++++
 1 files changed, 61 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 634f639..44e18c02 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -13,6 +13,7 @@
 #include <linux/slab.h>
 #include <linux/export.h>
 
+#include <asm/cpufeature.h>
 #include <asm/hardirq.h>
 #include <asm/apic.h>
 
@@ -1597,6 +1598,44 @@ static void core_pmu_enable_all(int added)
 	}
 }
 
+static int hsw_hw_config(struct perf_event *event)
+{
+	int ret = intel_pmu_hw_config(event);
+
+	if (ret)
+		return ret;
+	if (!boot_cpu_has(X86_FEATURE_RTM) && !boot_cpu_has(X86_FEATURE_HLE))
+		return 0;
+	event->hw.config |= event->attr.config & (HSW_INTX|HSW_INTX_CHECKPOINTED);
+
+	/* 
+	 * INTX/INTX-CP do not play well with PEBS or ANY thread mode.
+	 */
+	if ((event->hw.config & (HSW_INTX|HSW_INTX_CHECKPOINTED)) &&
+	     ((event->hw.config & ARCH_PERFMON_EVENTSEL_ANY) ||
+	      event->attr.precise_ip > 0))
+		return -EIO;
+	return 0;
+}
+
+static struct event_constraint counter2_constraint = 
+			EVENT_CONSTRAINT(0, 0x4, 0);
+
+static struct event_constraint *
+hsw_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
+{
+	struct event_constraint *c = intel_get_event_constraints(cpuc, event);
+
+	/* Handle special quirk on intx_checkpointed only in counter 2 */
+	if (event->hw.config & HSW_INTX_CHECKPOINTED) {
+		if (c->idxmsk64 & (1U << 2))
+			return &counter2_constraint;
+		return &emptyconstraint;
+	}
+
+	return c;
+}
+
 PMU_FORMAT_ATTR(event,	"config:0-7"	);
 PMU_FORMAT_ATTR(umask,	"config:8-15"	);
 PMU_FORMAT_ATTR(edge,	"config:18"	);
@@ -1604,6 +1643,8 @@ PMU_FORMAT_ATTR(pc,	"config:19"	);
 PMU_FORMAT_ATTR(any,	"config:21"	); /* v3 + */
 PMU_FORMAT_ATTR(inv,	"config:23"	);
 PMU_FORMAT_ATTR(cmask,	"config:24-31"	);
+PMU_FORMAT_ATTR(intx,	"config:32"	);
+PMU_FORMAT_ATTR(intx_cp,"config:33"	);
 
 static struct attribute *intel_arch_formats_attr[] = {
 	&format_attr_event.attr,
@@ -1761,6 +1802,23 @@ static struct attribute *intel_arch3_formats_attr[] = {
 	NULL,
 };
 
+/* Arch3 + TSX support */
+static struct attribute *intel_hsw_formats_attr[] __read_mostly = {
+	&format_attr_event.attr,
+	&format_attr_umask.attr,
+	&format_attr_edge.attr,
+	&format_attr_pc.attr,
+	&format_attr_any.attr,
+	&format_attr_inv.attr,
+	&format_attr_cmask.attr,
+	&format_attr_intx.attr,
+	&format_attr_intx_cp.attr,
+
+	&format_attr_offcore_rsp.attr, /* XXX do NHM/WSM + SNB breakout */
+	NULL,
+};
+
+
 static __initconst const struct x86_pmu intel_pmu = {
 	.name			= "Intel",
 	.handle_irq		= intel_pmu_handle_irq,
@@ -2135,6 +2193,9 @@ __init int intel_pmu_init(void)
 		x86_pmu.er_flags |= ERF_HAS_RSP_1;
 		x86_pmu.er_flags |= ERF_NO_HT_SHARING;
 
+		x86_pmu.hw_config = hsw_hw_config;
+		x86_pmu.get_event_constraints = hsw_get_event_constraints;
+		x86_pmu.format_attrs = intel_hsw_formats_attr;
 		pr_cont("Haswell events, ");
 		break;
 
-- 
1.7.7.6



* [PATCH 05/29] perf, kvm: Support the intx/intx_cp modifiers in KVM arch perfmon emulation v4
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen, avi, gleb

From: Andi Kleen <ak@linux.intel.com>

This is not arch perfmon, but older CPUs will just ignore the bits. This
makes it possible to do at least some TSX measurements from a KVM guest.

Cc: avi@redhat.com
Cc: gleb@redhat.com
v2: Various fixes to address review feedback
v3: Ignore the bits when no CPUID. No #GP. Force raw events with TSX bits.
v4: Use reserved bits for #GP
Cc: gleb@redhat.com
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/pmu.c              |   32 ++++++++++++++++++++++++--------
 2 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index dc87b65..703a1f8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -320,6 +320,7 @@ struct kvm_pmu {
 	u64 global_ovf_ctrl;
 	u64 counter_bitmask[2];
 	u64 global_ctrl_mask;
+	u64 reserved_bits;
 	u8 version;
 	struct kvm_pmc gp_counters[INTEL_PMC_MAX_GENERIC];
 	struct kvm_pmc fixed_counters[INTEL_PMC_MAX_FIXED];
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index cfc258a..89405d0 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -160,7 +160,7 @@ static void stop_counter(struct kvm_pmc *pmc)
 
 static void reprogram_counter(struct kvm_pmc *pmc, u32 type,
 		unsigned config, bool exclude_user, bool exclude_kernel,
-		bool intr)
+		bool intr, bool intx, bool intx_cp)
 {
 	struct perf_event *event;
 	struct perf_event_attr attr = {
@@ -173,6 +173,10 @@ static void reprogram_counter(struct kvm_pmc *pmc, u32 type,
 		.exclude_kernel = exclude_kernel,
 		.config = config,
 	};
+	if (intx)
+		attr.config |= HSW_INTX;
+	if (intx_cp)
+		attr.config |= HSW_INTX_CHECKPOINTED;
 
 	attr.sample_period = (-pmc->counter) & pmc_bitmask(pmc);
 
@@ -206,7 +210,8 @@ static unsigned find_arch_event(struct kvm_pmu *pmu, u8 event_select,
 	return arch_events[i].event_type;
 }
 
-static void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+static void reprogram_gp_counter(struct kvm_pmu *pmu, struct kvm_pmc *pmc, 
+				 u64 eventsel)
 {
 	unsigned config, type = PERF_TYPE_RAW;
 	u8 event_select, unit_mask;
@@ -226,7 +231,9 @@ static void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 
 	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
 				ARCH_PERFMON_EVENTSEL_INV |
-				ARCH_PERFMON_EVENTSEL_CMASK))) {
+				ARCH_PERFMON_EVENTSEL_CMASK |
+				HSW_INTX |
+				HSW_INTX_CHECKPOINTED))) {
 		config = find_arch_event(&pmc->vcpu->arch.pmu, event_select,
 				unit_mask);
 		if (config != PERF_COUNT_HW_MAX)
@@ -239,7 +246,9 @@ static void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 	reprogram_counter(pmc, type, config,
 			!(eventsel & ARCH_PERFMON_EVENTSEL_USR),
 			!(eventsel & ARCH_PERFMON_EVENTSEL_OS),
-			eventsel & ARCH_PERFMON_EVENTSEL_INT);
+			eventsel & ARCH_PERFMON_EVENTSEL_INT,
+			(eventsel & HSW_INTX),
+			(eventsel & HSW_INTX_CHECKPOINTED));
 }
 
 static void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 en_pmi, int idx)
@@ -256,7 +265,7 @@ static void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 en_pmi, int idx)
 			arch_events[fixed_pmc_events[idx]].event_type,
 			!(en & 0x2), /* exclude user */
 			!(en & 0x1), /* exclude kernel */
-			pmi);
+			pmi, false, false);
 }
 
 static inline u8 fixed_en_pmi(u64 ctrl, int idx)
@@ -289,7 +298,7 @@ static void reprogram_idx(struct kvm_pmu *pmu, int idx)
 		return;
 
 	if (pmc_is_gp(pmc))
-		reprogram_gp_counter(pmc, pmc->eventsel);
+		reprogram_gp_counter(pmu, pmc, pmc->eventsel);
 	else {
 		int fidx = idx - INTEL_PMC_IDX_FIXED;
 		reprogram_fixed_counter(pmc,
@@ -400,8 +409,8 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data)
 		} else if ((pmc = get_gp_pmc(pmu, index, MSR_P6_EVNTSEL0))) {
 			if (data == pmc->eventsel)
 				return 0;
-			if (!(data & 0xffffffff00200000ull)) {
-				reprogram_gp_counter(pmc, data);
+			if (!(data & pmu->reserved_bits)) {
+				reprogram_gp_counter(pmu, pmc, data);
 				return 0;
 			}
 		}
@@ -442,6 +451,7 @@ void kvm_pmu_cpuid_update(struct kvm_vcpu *vcpu)
 	pmu->counter_bitmask[KVM_PMC_GP] = 0;
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
 	pmu->version = 0;
+	pmu->reserved_bits = 0xffffffff00200000ull;
 
 	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
 	if (!entry)
@@ -470,6 +480,12 @@ void kvm_pmu_cpuid_update(struct kvm_vcpu *vcpu)
 	pmu->global_ctrl = ((1 << pmu->nr_arch_gp_counters) - 1) |
 		(((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED);
 	pmu->global_ctrl_mask = ~pmu->global_ctrl;
+
+	entry = kvm_find_cpuid_entry(vcpu, 7, 0);
+	if (entry &&
+	    (boot_cpu_has(X86_FEATURE_HLE) || boot_cpu_has(X86_FEATURE_RTM)) &&
+	    (entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM)))
+		pmu->reserved_bits ^= HSW_INTX|HSW_INTX_CHECKPOINTED;
 }
 
 void kvm_pmu_init(struct kvm_vcpu *vcpu)
-- 
1.7.7.6



* [PATCH 06/29] perf, x86: Support PERF_SAMPLE_ADDR on Haswell
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Haswell supplies the data address for every PEBS memory event, so always
fill it in when the user requests it. It will be 0 when not useful (no
memory access).
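
A profiling tool opts in by adding PERF_SAMPLE_ADDR to the sample type
of a PEBS event; a minimal sketch, continuing the perf_event_attr setup
from the earlier sketches:

    attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_ADDR;
    attr.precise_ip  = 2;   /* PEBS; data.addr is filled from the DLA field */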

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel_ds.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 16d7c58..aa0f5fa 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -630,6 +630,10 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
 	else
 		regs.flags &= ~PERF_EFLAGS_EXACT;
 
+	if ((event->attr.sample_type & PERF_SAMPLE_ADDR) &&
+		x86_pmu.intel_cap.pebs_format >= 2)
+		data.addr = ((struct pebs_record_v2 *)pebs)->nhm.dla;
+
 	if (has_branch_stack(event))
 		data.br_stack = &cpuc->lbr_stack;
 
-- 
1.7.7.6



* [PATCH 07/29] perf, x86: Support Haswell v4 LBR format
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Haswell has two additional LBR "from" flags for TSX, intx and abort,
implemented as a new v4 version of the LBR format.

Handle them and adjust the sign-extension code so addresses are still
extended correctly. The flags are exported in the LBR record in the
same way as the existing misprediction flag.
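
For clarity, a v4 FROM value decomposes as follows (this mirrors the
code in the patch below; bit positions per the new flag definitions):

    /* bit 63: mispred, bit 62: intx, bit 61: abort, bits 60..0: address */
    intx  = !!(from & LBR_FROM_FLAG_INTX);
    abort = !!(from & LBR_FROM_FLAG_ABORT);
    /* strip the three flag bits and sign-extend the address */
    from  = (u64)((((s64)from) << 3) >> 3);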

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c |   18 +++++++++++++++---
 include/linux/perf_event.h                 |    7 ++++++-
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
index da02e9c..2af6695b 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -12,6 +12,7 @@ enum {
 	LBR_FORMAT_LIP		= 0x01,
 	LBR_FORMAT_EIP		= 0x02,
 	LBR_FORMAT_EIP_FLAGS	= 0x03,
+	LBR_FORMAT_EIP_FLAGS2	= 0x04,
 };
 
 /*
@@ -56,6 +57,8 @@ enum {
 	 LBR_FAR)
 
 #define LBR_FROM_FLAG_MISPRED  (1ULL << 63)
+#define LBR_FROM_FLAG_INTX     (1ULL << 62)
+#define LBR_FROM_FLAG_ABORT    (1ULL << 61)
 
 #define for_each_branch_sample_type(x) \
 	for ((x) = PERF_SAMPLE_BRANCH_USER; \
@@ -270,21 +273,30 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
 
 	for (i = 0; i < x86_pmu.lbr_nr; i++) {
 		unsigned long lbr_idx = (tos - i) & mask;
-		u64 from, to, mis = 0, pred = 0;
+		u64 from, to, mis = 0, pred = 0, intx = 0, abort = 0;
 
 		rdmsrl(x86_pmu.lbr_from + lbr_idx, from);
 		rdmsrl(x86_pmu.lbr_to   + lbr_idx, to);
 
-		if (lbr_format == LBR_FORMAT_EIP_FLAGS) {
+		if (lbr_format == LBR_FORMAT_EIP_FLAGS ||
+		    lbr_format == LBR_FORMAT_EIP_FLAGS2) {
 			mis = !!(from & LBR_FROM_FLAG_MISPRED);
 			pred = !mis;
-			from = (u64)((((s64)from) << 1) >> 1);
+			if (lbr_format == LBR_FORMAT_EIP_FLAGS)
+				from = (u64)((((s64)from) << 1) >> 1);
+			else if (lbr_format == LBR_FORMAT_EIP_FLAGS2) {
+				intx = !!(from & LBR_FROM_FLAG_INTX);
+				abort = !!(from & LBR_FROM_FLAG_ABORT);
+				from = (u64)((((s64)from) << 3) >> 3);
+			}
 		}
 
 		cpuc->lbr_entries[i].from	= from;
 		cpuc->lbr_entries[i].to		= to;
 		cpuc->lbr_entries[i].mispred	= mis;
 		cpuc->lbr_entries[i].predicted	= pred;
+		cpuc->lbr_entries[i].intx	= intx;
+		cpuc->lbr_entries[i].abort	= abort;
 		cpuc->lbr_entries[i].reserved	= 0;
 	}
 	cpuc->lbr_stack.nr = i;
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 6bfb2faa..91052e1 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -74,13 +74,18 @@ struct perf_raw_record {
  *
  * support for mispred, predicted is optional. In case it
  * is not supported mispred = predicted = 0.
+ *
+ *     intx: running in a hardware transaction
+ *     abort: aborting a hardware transaction
  */
 struct perf_branch_entry {
 	__u64	from;
 	__u64	to;
 	__u64	mispred:1,  /* target mispredicted */
 		predicted:1,/* target predicted */
-		reserved:62;
+		intx:1,	    /* in transaction */
+		abort:1,    /* transaction abort */
+		reserved:60;
 };
 
 /*
-- 
1.7.7.6



* [PATCH 08/29] perf, x86: Disable LBR recording for unknown LBR_FMT
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

When the LBR format is unknown, disable LBR recording. This prevents
crashes when the LBR addresses would be misdecoded and incorrectly
sign-extended.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
index 2af6695b..ad5af13 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -13,6 +13,7 @@ enum {
 	LBR_FORMAT_EIP		= 0x02,
 	LBR_FORMAT_EIP_FLAGS	= 0x03,
 	LBR_FORMAT_EIP_FLAGS2	= 0x04,
+	LBR_FORMAT_MAX_KNOWN	= LBR_FORMAT_EIP_FLAGS2,
 };
 
 /*
@@ -392,7 +393,7 @@ int intel_pmu_setup_lbr_filter(struct perf_event *event)
 	/*
 	 * no LBR on this PMU
 	 */
-	if (!x86_pmu.lbr_nr)
+	if (!x86_pmu.lbr_nr || x86_pmu.intel_cap.lbr_format > LBR_FORMAT_MAX_KNOWN)
 		return -EOPNOTSUPP;
 
 	/*
-- 
1.7.7.6



* [PATCH 09/29] perf, x86: Support LBR filtering by INTX/NOTX/ABORT v2
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add LBR filtering for branches in transactions, branches not in
transactions, and transaction aborts. These are exposed as new sample
types.
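
From the API side, a consumer selects these like any other branch filter;
a minimal sketch of the relevant perf_event_attr fields (illustrative
values, not part of this patch):

    attr.sample_type        = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
    /* all branches, restricted to those inside transactions */
    attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY | PERF_SAMPLE_BRANCH_INTX;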

v2: Rename ABORT to ABORTTX
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel_lbr.c |   31 +++++++++++++++++++++++++--
 include/uapi/linux/perf_event.h            |    5 +++-
 2 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
index ad5af13..5455a00 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
@@ -85,9 +85,13 @@ enum {
 	X86_BR_JMP      = 1 << 9, /* jump */
 	X86_BR_IRQ      = 1 << 10,/* hw interrupt or trap or fault */
 	X86_BR_IND_CALL = 1 << 11,/* indirect calls */
+	X86_BR_ABORT    = 1 << 12,/* transaction abort */
+	X86_BR_INTX     = 1 << 13,/* in transaction */
+	X86_BR_NOTX     = 1 << 14,/* not in transaction */
 };
 
 #define X86_BR_PLM (X86_BR_USER | X86_BR_KERNEL)
+#define X86_BR_ANYTX (X86_BR_NOTX | X86_BR_INTX)
 
 #define X86_BR_ANY       \
 	(X86_BR_CALL    |\
@@ -99,6 +103,7 @@ enum {
 	 X86_BR_JCC     |\
 	 X86_BR_JMP	 |\
 	 X86_BR_IRQ	 |\
+	 X86_BR_ABORT	 |\
 	 X86_BR_IND_CALL)
 
 #define X86_BR_ALL (X86_BR_PLM | X86_BR_ANY)
@@ -347,6 +352,16 @@ static void intel_pmu_setup_sw_lbr_filter(struct perf_event *event)
 
 	if (br_type & PERF_SAMPLE_BRANCH_IND_CALL)
 		mask |= X86_BR_IND_CALL;
+
+	if (br_type & PERF_SAMPLE_BRANCH_ABORTTX)
+		mask |= X86_BR_ABORT;
+
+	if (br_type & PERF_SAMPLE_BRANCH_INTX)
+		mask |= X86_BR_INTX;
+
+	if (br_type & PERF_SAMPLE_BRANCH_NOTX)
+		mask |= X86_BR_NOTX;
+
 	/*
 	 * stash actual user request into reg, it may
 	 * be used by fixup code for some CPU
@@ -393,7 +408,8 @@ int intel_pmu_setup_lbr_filter(struct perf_event *event)
 	/*
 	 * no LBR on this PMU
 	 */
-	if (!x86_pmu.lbr_nr || x86_pmu.intel_cap.lbr_format > LBR_FORMAT_MAX_KNOWN)
+	if (!x86_pmu.lbr_nr ||
+	    x86_pmu.intel_cap.lbr_format > LBR_FORMAT_MAX_KNOWN)
 		return -EOPNOTSUPP;
 
 	/*
@@ -421,7 +437,7 @@ int intel_pmu_setup_lbr_filter(struct perf_event *event)
  * decoded (e.g., text page not present), then X86_BR_NONE is
  * returned.
  */
-static int branch_type(unsigned long from, unsigned long to)
+static int branch_type(unsigned long from, unsigned long to, int abort)
 {
 	struct insn insn;
 	void *addr;
@@ -441,6 +457,9 @@ static int branch_type(unsigned long from, unsigned long to)
 	if (from == 0 || to == 0)
 		return X86_BR_NONE;
 
+	if (abort)
+		return X86_BR_ABORT | to_plm;
+
 	if (from_plm == X86_BR_USER) {
 		/*
 		 * can happen if measuring at the user level only
@@ -577,7 +596,13 @@ intel_pmu_lbr_filter(struct cpu_hw_events *cpuc)
 		from = cpuc->lbr_entries[i].from;
 		to = cpuc->lbr_entries[i].to;
 
-		type = branch_type(from, to);
+		type = branch_type(from, to, cpuc->lbr_entries[i].abort);
+		if (type != X86_BR_NONE && (br_sel & X86_BR_ANYTX)) {
+			if (cpuc->lbr_entries[i].intx)
+				type |= X86_BR_INTX;
+			else
+				type |= X86_BR_NOTX;
+		}
 
 		/* if type does not correspond, then discard */
 		if (type == X86_BR_NONE || (br_sel & type) != type) {
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 4f63c05..8e38823 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -155,8 +155,11 @@ enum perf_branch_sample_type {
 	PERF_SAMPLE_BRANCH_ANY_CALL	= 1U << 4, /* any call branch */
 	PERF_SAMPLE_BRANCH_ANY_RETURN	= 1U << 5, /* any return branch */
 	PERF_SAMPLE_BRANCH_IND_CALL	= 1U << 6, /* indirect calls */
+	PERF_SAMPLE_BRANCH_ABORTTX	= 1U << 7, /* transaction aborts */
+	PERF_SAMPLE_BRANCH_INTX		= 1U << 8, /* in transaction (flag) */
+	PERF_SAMPLE_BRANCH_NOTX		= 1U << 9, /* not in transaction (flag) */
 
-	PERF_SAMPLE_BRANCH_MAX		= 1U << 7, /* non-ABI */
+	PERF_SAMPLE_BRANCH_MAX		= 1U << 10, /* non-ABI */
 };
 
 #define PERF_SAMPLE_BRANCH_PLM_ALL \
-- 
1.7.7.6



* [PATCH 10/29] perf, tools: Add abort_tx,no_tx,in_tx branch filter options to perf record -j v3
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Make perf record -j aware of the new in_tx,no_tx,abort_tx branch qualifiers.
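
For example (a sketch, assuming a Haswell system with this series applied):

    # sample any branch executed inside a transaction
    perf record -j any,in_tx ./workload
    # sample only transaction aborts
    perf record -j any,abort_tx ./workload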

v2: ABORT -> ABORTTX
v3: Add more _
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/Documentation/perf-record.txt |    3 +++
 tools/perf/builtin-record.c              |    3 +++
 2 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index 938e890..f7d74b2 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -172,6 +172,9 @@ following filters are defined:
         - u:  only when the branch target is at the user level
         - k: only when the branch target is in the kernel
         - hv: only when the target is at the hypervisor level
+	- in_tx: only when the target is in a hardware transaction
+	- no_tx: only when the target is not in a hardware transaction
+	- abort_tx: only when the target is a hardware transaction abort
 
 +
 The option requires at least one branch type among any, any_call, any_ret, ind_call.
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index f3151d3..e7da893 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -781,6 +781,9 @@ static const struct branch_mode branch_modes[] = {
 	BRANCH_OPT("any_call", PERF_SAMPLE_BRANCH_ANY_CALL),
 	BRANCH_OPT("any_ret", PERF_SAMPLE_BRANCH_ANY_RETURN),
 	BRANCH_OPT("ind_call", PERF_SAMPLE_BRANCH_IND_CALL),
+	BRANCH_OPT("abort_tx", PERF_SAMPLE_BRANCH_ABORTTX),
+	BRANCH_OPT("in_tx", PERF_SAMPLE_BRANCH_INTX),
+	BRANCH_OPT("no_tx", PERF_SAMPLE_BRANCH_NOTX),
 	BRANCH_END
 };
 
-- 
1.7.7.6



* [PATCH 11/29] perf, tools: Support sorting by intx, abort branch flags v2
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Extend the perf branch sorting code to support sorting by intx
or abort qualifiers. Also print out those qualifiers.
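
For example, to see which code aborts or runs in transactions (a sketch):

    perf report --sort comm,symbol,intx,abort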

v2: Readd flags to man pages
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/Documentation/perf-report.txt |    4 ++-
 tools/perf/Documentation/perf-top.txt    |    4 ++-
 tools/perf/builtin-report.c              |    3 +-
 tools/perf/builtin-top.c                 |    4 ++-
 tools/perf/perf.h                        |    4 ++-
 tools/perf/util/hist.h                   |    2 +
 tools/perf/util/sort.c                   |   55 ++++++++++++++++++++++++++++++
 tools/perf/util/sort.h                   |    2 +
 8 files changed, 73 insertions(+), 5 deletions(-)

diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
index f4d91be..cb4216d 100644
--- a/tools/perf/Documentation/perf-report.txt
+++ b/tools/perf/Documentation/perf-report.txt
@@ -57,7 +57,9 @@ OPTIONS
 
 -s::
 --sort=::
-	Sort by key(s): pid, comm, dso, symbol, parent, srcline.
+	Sort by key(s): pid, comm, dso, symbol, parent, srcline,
+        dso_from, dso_to, symbol_to, symbol_from, mispredict,
+        abort, intx
 
 -p::
 --parent=<regex>::
diff --git a/tools/perf/Documentation/perf-top.txt b/tools/perf/Documentation/perf-top.txt
index 5b80d84..1398b73 100644
--- a/tools/perf/Documentation/perf-top.txt
+++ b/tools/perf/Documentation/perf-top.txt
@@ -112,7 +112,9 @@ Default is to monitor all CPUS.
 
 -s::
 --sort::
-	Sort by key(s): pid, comm, dso, symbol, parent, srcline.
+	Sort by key(s): pid, comm, dso, symbol, parent, srcline,
+        dso_from, dso_to, symbol_to, symbol_from, mispredict,
+        abort, intx
 
 -n::
 --show-nr-samples::
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index fc25100..072c388 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -596,7 +596,8 @@ int cmd_report(int argc, const char **argv, const char *prefix __maybe_unused)
 		    "Use the stdio interface"),
 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
 		   "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
-		   " dso_from, symbol_to, symbol_from, mispredict"),
+		   " dso_from, symbol_to, symbol_from, mispredict, srcline,"
+		   " abort, intx"),
 	OPT_BOOLEAN(0, "showcpuutilization", &symbol_conf.show_cpu_utilization,
 		    "Show sample percentage for different cpu modes"),
 	OPT_STRING('p', "parent", &parent_pattern, "regex",
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index c9ff395..6cfb678 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -1230,7 +1230,9 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
 	OPT_INCR('v', "verbose", &verbose,
 		    "be more verbose (show counter open errors, etc)"),
 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
-		   "sort by key(s): pid, comm, dso, symbol, parent"),
+		   "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
+		   " dso_from, symbol_to, symbol_from, mispredict, srcline,"
+		   " abort, intx"),
 	OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
 		    "Show a column with the number of samples"),
 	OPT_CALLBACK_DEFAULT('G', "call-graph", &top, "output_type,min_percent, call_order",
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index 2c340e7..c6d315b 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -197,7 +197,9 @@ struct ip_callchain {
 struct branch_flags {
 	u64 mispred:1;
 	u64 predicted:1;
-	u64 reserved:62;
+	u64 intx:1;
+	u64 abort:1;
+	u64 reserved:60;
 };
 
 struct branch_entry {
diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h
index 8b091a5..4ae7c25 100644
--- a/tools/perf/util/hist.h
+++ b/tools/perf/util/hist.h
@@ -44,6 +44,8 @@ enum hist_column {
 	HISTC_PARENT,
 	HISTC_CPU,
 	HISTC_MISPREDICT,
+	HISTC_INTX,
+	HISTC_ABORT,
 	HISTC_SYMBOL_FROM,
 	HISTC_SYMBOL_TO,
 	HISTC_DSO_FROM,
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index cfd1c0f..a8d1f1a 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -476,6 +476,55 @@ struct sort_entry sort_mispredict = {
 	.se_width_idx	= HISTC_MISPREDICT,
 };
 
+static int64_t
+sort__abort_cmp(struct hist_entry *left, struct hist_entry *right)
+{
+	return left->branch_info->flags.abort !=
+		right->branch_info->flags.abort;
+}
+
+static int hist_entry__abort_snprintf(struct hist_entry *self, char *bf,
+				    size_t size, unsigned int width)
+{
+	static const char *out = ".";
+
+	if (self->branch_info->flags.abort)
+		out = "A";
+	return repsep_snprintf(bf, size, "%-*s", width, out);
+}
+
+struct sort_entry sort_abort = {
+	.se_header	= "Transaction abort",
+	.se_cmp		= sort__abort_cmp,
+	.se_snprintf	= hist_entry__abort_snprintf,
+	.se_width_idx	= HISTC_ABORT,
+};
+
+static int64_t
+sort__intx_cmp(struct hist_entry *left, struct hist_entry *right)
+{
+	return left->branch_info->flags.intx !=
+		right->branch_info->flags.intx;
+}
+
+static int hist_entry__intx_snprintf(struct hist_entry *self, char *bf,
+				    size_t size, unsigned int width)
+{
+	static const char *out = ".";
+
+	if (self->branch_info->flags.intx)
+		out = "T";
+
+	return repsep_snprintf(bf, size, "%-*s", width, out);
+}
+
+struct sort_entry sort_intx = {
+	.se_header	= "Branch in transaction",
+	.se_cmp		= sort__intx_cmp,
+	.se_snprintf	= hist_entry__intx_snprintf,
+	.se_width_idx	= HISTC_INTX,
+};
+
 struct sort_dimension {
 	const char		*name;
 	struct sort_entry	*entry;
@@ -497,6 +546,8 @@ static struct sort_dimension sort_dimensions[] = {
 	DIM(SORT_CPU, "cpu", sort_cpu),
 	DIM(SORT_MISPREDICT, "mispredict", sort_mispredict),
 	DIM(SORT_SRCLINE, "srcline", sort_srcline),
+	DIM(SORT_ABORT, "abort", sort_abort),
+	DIM(SORT_INTX, "intx", sort_intx)
 };
 
 int sort_dimension__add(const char *tok)
@@ -553,6 +604,10 @@ int sort_dimension__add(const char *tok)
 				sort__first_dimension = SORT_DSO_TO;
 			else if (!strcmp(sd->name, "mispredict"))
 				sort__first_dimension = SORT_MISPREDICT;
+			else if (!strcmp(sd->name, "intx"))
+				sort__first_dimension = SORT_INTX;
+			else if (!strcmp(sd->name, "abort"))
+				sort__first_dimension = SORT_ABORT;
 		}
 
 		list_add_tail(&sd->entry->list, &hist_entry__sort_list);
diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h
index b4e8c3b..f811a0a 100644
--- a/tools/perf/util/sort.h
+++ b/tools/perf/util/sort.h
@@ -137,6 +137,8 @@ enum sort_type {
 	SORT_SYM_TO,
 	SORT_MISPREDICT,
 	SORT_SRCLINE,
+	SORT_ABORT,
+	SORT_INTX,
 };
 
 /*
-- 
1.7.7.6



* [PATCH 12/29] perf, x86: Support full width counting
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Recent Intel CPUs have a new alternative MSR range for perfctrs that allows
writing the full counter width. Enable this range if the hardware reports it
using a new capability bit. This lowers the overhead of perf stat slightly
because it needs fewer interrupts to accumulate the counter value. On Haswell
it also avoids some problems with TSX aborting when the end of the counter
range is reached.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/include/uapi/asm/msr-index.h  |    3 +++
 arch/x86/kernel/cpu/perf_event.h       |    1 +
 arch/x86/kernel/cpu/perf_event_intel.c |    6 ++++++
 3 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/uapi/asm/msr-index.h b/arch/x86/include/uapi/asm/msr-index.h
index 433a59f..af41a77 100644
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -163,6 +163,9 @@
 #define MSR_KNC_EVNTSEL0               0x00000028
 #define MSR_KNC_EVNTSEL1               0x00000029
 
+/* Alternative perfctr range with full access. */
+#define MSR_IA32_PMC0			0x000004c1
+
 /* AMD64 MSRs. Not complete. See the architecture manual for a more
    complete list. */
 
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 1567b0d..ce2a863 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -278,6 +278,7 @@ union perf_capabilities {
 		u64	pebs_arch_reg:1;
 		u64	pebs_format:4;
 		u64	smm_freeze:1;
+		u64	fw_write:1;
 	};
 	u64	capabilities;
 };
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 44e18c02..bc21bce 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2247,5 +2247,11 @@ __init int intel_pmu_init(void)
 		}
 	}
 
+	/* Support full width counters using alternative MSR range */
+	if (x86_pmu.intel_cap.fw_write) {
+		x86_pmu.max_period = x86_pmu.cntval_mask;
+		x86_pmu.perfctr = MSR_IA32_PMC0;
+	}
+
 	return 0;
 }
-- 
1.7.7.6



* [PATCH 13/29] perf, x86: Avoid checkpointed counters causing excessive TSX aborts v3
From: Andi Kleen @ 2013-01-17 20:36 UTC
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

With checkpointed counters there can be a situation where the counter
overflows, aborts the transaction, and is set back to a non-overflowing
checkpoint, but still causes an interrupt. The interrupt handler doesn't
see the overflow because it has been checkpointed away. This is then a
spurious PMI, typically with an ugly NMI message. It can also lead to
excessive aborts.

Avoid this problem by:
- Using the full counter width for counting counters (previous patch)
- Forbid sampling for checkpointed counters. It's not too useful anyway,
as checkpointing is mainly for counting.
- On a PMI always set back checkpointed counters to zero.

v2: Add unlikely. Add comment
v3: Allow large sampling periods with CP for KVM
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c |   34 ++++++++++++++++++++++++++++++++
 1 files changed, 34 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index bc21bce..9b4dda5 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1079,6 +1079,17 @@ static void intel_pmu_enable_event(struct perf_event *event)
 int intel_pmu_save_and_restart(struct perf_event *event)
 {
 	x86_perf_event_update(event);
+	/*
+	 * For a checkpointed counter always reset back to 0.  This
+	 * avoids a situation where the counter overflows, aborts the
+	 * transaction and is then set back to shortly before the
+	 * overflow, and overflows and aborts again.
+	 */
+	if (unlikely(event->hw.config & HSW_INTX_CHECKPOINTED)) {
+		/* No race with NMIs because the counter should not be armed */
+		wrmsrl(event->hw.event_base, 0);
+		local64_set(&event->hw.prev_count, 0);
+	}
 	return x86_perf_event_set_period(event);
 }
 
@@ -1162,6 +1173,15 @@ again:
 		x86_pmu.drain_pebs(regs);
 	}
 
+	/*
+ 	 * To avoid spurious interrupts with perf stat always reset checkpointed
+ 	 * counters.
+ 	 *
+	 * XXX move somewhere else.
+	 */
+	if (cpuc->events[2] && (cpuc->events[2]->hw.config & HSW_INTX_CHECKPOINTED))
+		status |= (1ULL << 2);
+
 	for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
 		struct perf_event *event = cpuc->events[bit];
 
@@ -1615,6 +1635,20 @@ static int hsw_hw_config(struct perf_event *event)
 	     ((event->hw.config & ARCH_PERFMON_EVENTSEL_ANY) ||
 	      event->attr.precise_ip > 0))
 		return -EIO;
+	if (event->hw.config & HSW_INTX_CHECKPOINTED) {
+		/*
+		 * Sampling of checkpointed events can cause situations where
+		 * the CPU constantly aborts because of a overflow, which is
+		 * then checkpointed back and ignored. Forbid checkpointing
+		 * for sampling.
+		 *
+		 * But still allow a long sampling period, so that perf stat
+		 * from KVM works.
+		 */
+		if (event->attr.sample_period > 0 &&
+		    event->attr.sample_period < 0x7fffffff)
+			return -EIO;
+	}
 	return 0;
 }
 
-- 
1.7.7.6



* [PATCH 14/29] perf, core: Add a concept of a weighted sample v2
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (12 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 13/29] perf, x86: Avoid checkpointed counters causing excessive TSX aborts v3 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 15/29] perf, x86: Support weight samples for PEBS Andi Kleen
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

For some events it's useful to weight samples with a hardware-provided
number. This expresses how expensive the action the sample represents
was. This allows the profiler to scale the samples to be more
informative to the programmer.

There is already the period which is used similarly, but it means
something different, so I chose not to overload it. Instead a new
sample type for WEIGHT is added.

This can be used for multiple things. Initially it is used for TSX
abort costs and for profiling by memory latency (so that expensive
loads appear higher up in the histograms). The concept is quite generic
and can be extended to many other kinds of events or architectures, as
long as the hardware provides suitable auxiliary values. In principle
it could also be used for software tracepoints.

This adds the generic glue: a new optional sample format for a 64-bit
weight value.
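
A consumer reading the mmap ring buffer directly would pick the value
up at the end of the sample record. A minimal sketch of that parsing
step (the perf tool change later in this series does the same; u64 as
in the perf headers):

    /* 'array' walks the u64 words of a PERF_RECORD_SAMPLE; the weight
     * is a single trailing u64 when PERF_SAMPLE_WEIGHT is set. */
    static u64 parse_sample_weight(const u64 **array, u64 sample_type)
    {
        u64 weight = 0;

        if (sample_type & PERF_SAMPLE_WEIGHT)
            weight = *(*array)++;
        return weight;
    }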

v2: Move weight format to the end. Remove *_FORMAT_WEIGHT
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 include/linux/perf_event.h      |    2 ++
 include/uapi/linux/perf_event.h |    6 +++++-
 kernel/events/core.c            |    6 ++++++
 3 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 91052e1..c9686c8 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -588,6 +588,7 @@ struct perf_sample_data {
 	struct perf_branch_stack	*br_stack;
 	struct perf_regs_user		regs_user;
 	u64				stack_user_size;
+	u64				weight;
 };
 
 static inline void perf_sample_data_init(struct perf_sample_data *data,
@@ -601,6 +602,7 @@ static inline void perf_sample_data_init(struct perf_sample_data *data,
 	data->regs_user.abi = PERF_SAMPLE_REGS_ABI_NONE;
 	data->regs_user.regs = NULL;
 	data->stack_user_size = 0;
+	data->weight = 0;
 }
 
 extern void perf_output_sample(struct perf_output_handle *handle,
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 8e38823..6f80062 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -132,8 +132,10 @@ enum perf_event_sample_format {
 	PERF_SAMPLE_BRANCH_STACK		= 1U << 11,
 	PERF_SAMPLE_REGS_USER			= 1U << 12,
 	PERF_SAMPLE_STACK_USER			= 1U << 13,
+	PERF_SAMPLE_WEIGHT			= 1U << 14,
+
+	PERF_SAMPLE_MAX = 1U << 15,		/* non-ABI */
 
-	PERF_SAMPLE_MAX = 1U << 14,		/* non-ABI */
 };
 
 /*
@@ -590,6 +592,8 @@ enum perf_event_type {
 	 * 	{ u64			size;
 	 * 	  char			data[size];
 	 * 	  u64			dyn_size; } && PERF_SAMPLE_STACK_USER
+	 *
+	 *	{ u64			weight;   } && PERF_SAMPLE_WEIGHT
 	 * };
 	 */
 	PERF_RECORD_SAMPLE			= 9,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 301079d..749bdf4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -952,6 +952,9 @@ static void perf_event__header_size(struct perf_event *event)
 	if (sample_type & PERF_SAMPLE_PERIOD)
 		size += sizeof(data->period);
 
+	if (sample_type & PERF_SAMPLE_WEIGHT)
+		size += sizeof(data->weight);
+
 	if (sample_type & PERF_SAMPLE_READ)
 		size += event->read_size;
 
@@ -4169,6 +4172,9 @@ void perf_output_sample(struct perf_output_handle *handle,
 		perf_output_sample_ustack(handle,
 					  data->stack_user_size,
 					  data->regs_user.regs);
+
+	if (sample_type & PERF_SAMPLE_WEIGHT)
+		perf_output_put(handle, data->weight);
 }
 
 void perf_prepare_sample(struct perf_event_header *header,
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 15/29] perf, x86: Support weight samples for PEBS
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (13 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 14/29] perf, core: Add a concept of a weighted sample v2 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 16/29] perf, tools: Add support for weight v7 Andi Kleen
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

When a weighted sample is requested, first try to report the TSX abort
cost on Haswell. If that is not available, report the memory latency
instead. This allows profiling both by abort cost and by memory latency.

Measuring memory latency requires enabling a different PEBS mode (LL).
When both address and weight are requested, address wins.

The LL mode only works for memory-related PEBS events, so add a
separate event constraint table for those.
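
The enable-bit choice in intel_pmu_pebs_enable() then boils down to the
following sketch. The MSR_IA32_PEBS_ENABLE layout (per-counter precise
enable bits at 0..3, a parallel load-latency set at 32..35) is an
assumption based on the shifts used in this patch:

    /* Pick the PEBS enable bit for a given counter index. */
    static u64 pebs_enable_bit(int idx, bool want_weight, bool mem_lat_event)
    {
        if (want_weight && mem_lat_event)
            return 1ULL << (32 + idx);  /* load-latency (LL) mode */
        return 1ULL << idx;             /* normal precise mode */
    }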

I only did this for Haswell for now, but it could be added
for several other Intel CPUs too by just adding the right
table for them.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event.h          |    4 ++
 arch/x86/kernel/cpu/perf_event_intel.c    |    4 ++
 arch/x86/kernel/cpu/perf_event_intel_ds.c |   47 +++++++++++++++++++++++++++-
 3 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index ce2a863..d55e502 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -168,6 +168,7 @@ struct cpu_hw_events {
 	u64				perf_ctr_virt_mask;
 
 	void				*kfree_on_online;
+	u8				*memory_latency_events;
 };
 
 #define __EVENT_CONSTRAINT(c, n, m, w, o) {\
@@ -390,6 +391,7 @@ struct x86_pmu {
 	struct event_constraint *pebs_constraints;
 	void		(*pebs_aliases)(struct perf_event *event);
 	int 		max_pebs_events;
+	struct event_constraint *memory_lat_events;
 
 	/*
 	 * Intel LBR
@@ -599,6 +601,8 @@ extern struct event_constraint intel_ivb_pebs_event_constraints[];
 
 extern struct event_constraint intel_hsw_pebs_event_constraints[];
 
+extern struct event_constraint intel_hsw_memory_latency_events[];
+
 struct event_constraint *intel_pebs_constraints(struct perf_event *event);
 
 void intel_pmu_pebs_enable(struct perf_event *event);
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 9b4dda5..20caf0a 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1624,6 +1624,9 @@ static int hsw_hw_config(struct perf_event *event)
 
 	if (ret)
 		return ret;
+	/* PEBS cannot capture both */
+	if (event->attr.sample_type & PERF_SAMPLE_ADDR)
+		event->attr.sample_type &= ~PERF_SAMPLE_WEIGHT;
 	if (!boot_cpu_has(X86_FEATURE_RTM) && !boot_cpu_has(X86_FEATURE_HLE))
 		return 0;
 	event->hw.config |= event->attr.config & (HSW_INTX|HSW_INTX_CHECKPOINTED);
@@ -2230,6 +2233,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.hw_config = hsw_hw_config;
 		x86_pmu.get_event_constraints = hsw_get_event_constraints;
 		x86_pmu.format_attrs = intel_hsw_formats_attr;
+		x86_pmu.memory_lat_events = intel_hsw_memory_latency_events;
 		pr_cont("Haswell events, ");
 		break;
 
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index aa0f5fa..3094caa 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -456,6 +456,17 @@ struct event_constraint intel_hsw_pebs_event_constraints[] = {
 	EVENT_CONSTRAINT_END
 };
 
+/* Subset of PEBS events supporting memory latency. Not used for scheduling */
+
+struct event_constraint intel_hsw_memory_latency_events[] = {
+	INTEL_EVENT_CONSTRAINT(0xcd, 0), /* MEM_TRANS_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd0, 0), /* MEM_UOPS_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd1, 0), /* MEM_LOAD_UOPS_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd2, 0), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
+	INTEL_EVENT_CONSTRAINT(0xd3, 0), /* MEM_LOAD_UOPS_LLC_MISS_RETIRED.* */
+	EVENT_CONSTRAINT_END
+};
+
 struct event_constraint *intel_pebs_constraints(struct perf_event *event)
 {
 	struct event_constraint *c;
@@ -473,6 +484,21 @@ struct event_constraint *intel_pebs_constraints(struct perf_event *event)
 	return &emptyconstraint;
 }
 
+static bool is_memory_lat_event(struct perf_event *event)
+{
+	struct event_constraint *c;
+
+	if (x86_pmu.intel_cap.pebs_format < 1)
+		return false;
+	if (!x86_pmu.memory_lat_events)
+		return false;
+	for_each_event_constraint(c, x86_pmu.memory_lat_events) {
+		if ((event->hw.config & c->cmask) == c->code)
+			return true;
+	}
+	return false;
+}
+
 void intel_pmu_pebs_enable(struct perf_event *event)
 {
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
@@ -480,7 +506,12 @@ void intel_pmu_pebs_enable(struct perf_event *event)
 
 	hwc->config &= ~ARCH_PERFMON_EVENTSEL_INT;
 
-	cpuc->pebs_enabled |= 1ULL << hwc->idx;
+	/* When weight is requested enable LL instead of normal PEBS */
+	if ((event->attr.sample_type & PERF_SAMPLE_WEIGHT) &&
+		is_memory_lat_event(event))
+		cpuc->pebs_enabled |= 1ULL << (32 + hwc->idx);
+	else
+		cpuc->pebs_enabled |= 1ULL << hwc->idx;
 }
 
 void intel_pmu_pebs_disable(struct perf_event *event)
@@ -488,7 +519,11 @@ void intel_pmu_pebs_disable(struct perf_event *event)
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
 	struct hw_perf_event *hwc = &event->hw;
 
-	cpuc->pebs_enabled &= ~(1ULL << hwc->idx);
+	if ((event->attr.sample_type & PERF_SAMPLE_WEIGHT) &&
+		is_memory_lat_event(event))
+		cpuc->pebs_enabled &= ~(1ULL << (32 + hwc->idx));
+	else
+		cpuc->pebs_enabled &= ~(1ULL << hwc->idx);
 	if (cpuc->enabled)
 		wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);
 
@@ -634,6 +669,14 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
 		x86_pmu.intel_cap.pebs_format >= 2)
 		data.addr = ((struct pebs_record_v2 *)pebs)->nhm.dla;
 
+	if ((event->attr.sample_type & PERF_SAMPLE_WEIGHT) &&
+	    x86_pmu.intel_cap.pebs_format >= 2) {
+		data.weight = ((struct pebs_record_v2 *)pebs)->tsx_tuning &
+				0xffffffff;
+		if (!data.weight)
+			data.weight = ((struct pebs_record_v2 *)pebs)->nhm.lat;
+	}
+
 	if (has_branch_stack(event))
 		data.br_stack = &cpuc->lbr_stack;
 
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (14 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 15/29] perf, x86: Support weight samples for PEBS Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-23 11:38   ` Stephane Eranian
  2013-01-17 20:36 ` [PATCH 17/29] perf, core: Add generic transaction flags v3 Andi Kleen
                   ` (13 subsequent siblings)
  29 siblings, 1 reply; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

perf record has a new option -W that enables weighted sampling.

Add sorting support in top/report for the average weight per sample and
the total weight sum. This allows comparing both the relative cost per
event and the total cost over the measurement period.
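
Over a histogram entry's accumulated stats, the two sort keys reduce to
the following sketch (field names mirror the he_stat additions below):

    struct he_stat_sketch {
        unsigned long long weight;  /* sum of sample weights */
        unsigned int nr_events;     /* number of samples */
    };

    /* 'local_weight' sorts by this average; 'weight' by the raw sum. */
    static unsigned long long local_weight(const struct he_stat_sketch *s)
    {
        return s->nr_events ? s->weight / s->nr_events : 0;
    }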

Add the necessary glue to perf report, record and the library.

v2: Merge with new hist refactoring.
v3: Fix manpage. Remove value check.
Rename global_weight to weight and weight to local_weight.
v4: Readd sort keys to manpage
v5: Move weight to end
v6: Move weight to template
v7: Rename weight key.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/Documentation/perf-record.txt |    6 +++
 tools/perf/Documentation/perf-report.txt |    2 +-
 tools/perf/Documentation/perf-top.txt    |    2 +-
 tools/perf/builtin-annotate.c            |    2 +-
 tools/perf/builtin-diff.c                |    7 ++--
 tools/perf/builtin-record.c              |    2 +
 tools/perf/builtin-report.c              |    7 ++--
 tools/perf/builtin-top.c                 |    5 ++-
 tools/perf/perf.h                        |    1 +
 tools/perf/util/event.h                  |    1 +
 tools/perf/util/evsel.c                  |   10 ++++++
 tools/perf/util/hist.c                   |   23 +++++++++----
 tools/perf/util/hist.h                   |    8 +++-
 tools/perf/util/session.c                |    3 ++
 tools/perf/util/sort.c                   |   51 +++++++++++++++++++++++++++++-
 tools/perf/util/sort.h                   |    3 ++
 16 files changed, 112 insertions(+), 21 deletions(-)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index f7d74b2..6f3405e 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -185,6 +185,12 @@ is enabled for all the sampling events. The sampled branch type is the same for
 The various filters must be specified as a comma separated list: --branch-filter any_ret,u,k
 Note that this feature may not be available on all processors.
 
+-W::
+--weight::
+Enable weighted sampling. An additional weight is recorded per sample and can be
+displayed with the weight and local_weight sort keys.  This currently works for TSX
+abort events and some memory events in precise mode on modern Intel CPUs.
+
 SEE ALSO
 --------
 linkperf:perf-stat[1], linkperf:perf-list[1]
diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
index cb4216d..5dabd4d 100644
--- a/tools/perf/Documentation/perf-report.txt
+++ b/tools/perf/Documentation/perf-report.txt
@@ -59,7 +59,7 @@ OPTIONS
 --sort=::
 	Sort by key(s): pid, comm, dso, symbol, parent, srcline,
         dso_from, dso_to, symbol_to, symbol_from, mispredict,
-        abort, intx
+        abort, intx, local_weight, weight
 
 -p::
 --parent=<regex>::
diff --git a/tools/perf/Documentation/perf-top.txt b/tools/perf/Documentation/perf-top.txt
index 1398b73..3533e0a 100644
--- a/tools/perf/Documentation/perf-top.txt
+++ b/tools/perf/Documentation/perf-top.txt
@@ -114,7 +114,7 @@ Default is to monitor all CPUS.
 --sort::
 	Sort by key(s): pid, comm, dso, symbol, parent, srcline,
         dso_from, dso_to, symbol_to, symbol_from, mispredict,
-        abort, intx
+        abort, intx,  local_weight, weight
 
 -n::
 --show-nr-samples::
diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
index dc870cf..1bacb7d 100644
--- a/tools/perf/builtin-annotate.c
+++ b/tools/perf/builtin-annotate.c
@@ -62,7 +62,7 @@ static int perf_evsel__add_sample(struct perf_evsel *evsel,
 		return 0;
 	}
 
-	he = __hists__add_entry(&evsel->hists, al, NULL, 1);
+	he = __hists__add_entry(&evsel->hists, al, NULL, 1, 1);
 	if (he == NULL)
 		return -ENOMEM;
 
diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
index 93b852f..03a322f 100644
--- a/tools/perf/builtin-diff.c
+++ b/tools/perf/builtin-diff.c
@@ -248,9 +248,10 @@ int perf_diff__formula(char *buf, size_t size, struct hist_entry *he)
 }
 
 static int hists__add_entry(struct hists *self,
-			    struct addr_location *al, u64 period)
+			    struct addr_location *al, u64 period,
+			    u64 weight)
 {
-	if (__hists__add_entry(self, al, NULL, period) != NULL)
+	if (__hists__add_entry(self, al, NULL, period, weight) != NULL)
 		return 0;
 	return -ENOMEM;
 }
@@ -272,7 +273,7 @@ static int diff__process_sample_event(struct perf_tool *tool __maybe_unused,
 	if (al.filtered)
 		return 0;
 
-	if (hists__add_entry(&evsel->hists, &al, sample->period)) {
+	if (hists__add_entry(&evsel->hists, &al, sample->period, sample->weight)) {
 		pr_warning("problem incrementing symbol period, skipping event\n");
 		return -1;
 	}
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index e7da893..4e568aa 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1062,6 +1062,8 @@ const struct option record_options[] = {
 	OPT_CALLBACK('j', "branch-filter", &record.opts.branch_stack,
 		     "branch filter mask", "branch stack filter modes",
 		     parse_branch_stack),
+	OPT_BOOLEAN('W', "weight", &record.opts.sample_weight,
+		    "sample by weight (on special events only)"),
 	OPT_END()
 };
 
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 072c388..5dc0edd 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -88,7 +88,7 @@ static int perf_report__add_branch_hist_entry(struct perf_tool *tool,
 		 * and not events sampled. Thus we use a pseudo period of 1.
 		 */
 		he = __hists__add_branch_entry(&evsel->hists, al, parent,
-				&bi[i], 1);
+				&bi[i], 1, 1);
 		if (he) {
 			struct annotation *notes;
 			err = -ENOMEM;
@@ -146,7 +146,8 @@ static int perf_evsel__add_hist_entry(struct perf_evsel *evsel,
 			return err;
 	}
 
-	he = __hists__add_entry(&evsel->hists, al, parent, sample->period);
+	he = __hists__add_entry(&evsel->hists, al, parent, sample->period,
+					sample->weight);
 	if (he == NULL)
 		return -ENOMEM;
 
@@ -597,7 +598,7 @@ int cmd_report(int argc, const char **argv, const char *prefix __maybe_unused)
 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
 		   "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
 		   " dso_from, symbol_to, symbol_from, mispredict, srcline,"
-		   " abort, intx"),
+		   " abort, intx,  weight, local_weight"),
 	OPT_BOOLEAN(0, "showcpuutilization", &symbol_conf.show_cpu_utilization,
 		    "Show sample percentage for different cpu modes"),
 	OPT_STRING('p', "parent", &parent_pattern, "regex",
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 6cfb678..9f87db7 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -271,7 +271,8 @@ static struct hist_entry *perf_evsel__add_hist_entry(struct perf_evsel *evsel,
 {
 	struct hist_entry *he;
 
-	he = __hists__add_entry(&evsel->hists, al, NULL, sample->period);
+	he = __hists__add_entry(&evsel->hists, al, NULL, sample->period,
+				sample->weight);
 	if (he == NULL)
 		return NULL;
 
@@ -1232,7 +1233,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
 		   "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
 		   " dso_from, symbol_to, symbol_from, mispredict, srcline,"
-		   " abort, intx"),
+		   " abort, intx, weight, local_weight"),
 	OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
 		    "Show a column with the number of samples"),
 	OPT_CALLBACK_DEFAULT('G', "call-graph", &top, "output_type,min_percent, call_order",
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index c6d315b..7058155 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -238,6 +238,7 @@ struct perf_record_opts {
 	bool	     pipe_output;
 	bool	     raw_samples;
 	bool	     sample_address;
+	bool	     sample_weight;
 	bool	     sample_time;
 	bool	     sample_id_all_missing;
 	bool	     exclude_guest_missing;
diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
index 0d573ff..a97fbbe 100644
--- a/tools/perf/util/event.h
+++ b/tools/perf/util/event.h
@@ -88,6 +88,7 @@ struct perf_sample {
 	u64 id;
 	u64 stream_id;
 	u64 period;
+	u64 weight;
 	u32 cpu;
 	u32 raw_size;
 	void *raw_data;
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 1b16dd1..805d33e 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -510,6 +510,9 @@ void perf_evsel__config(struct perf_evsel *evsel,
 		attr->branch_sample_type = opts->branch_stack;
 	}
 
+	if (opts->sample_weight)
+		attr->sample_type	|= PERF_SAMPLE_WEIGHT;
+
 	attr->mmap = track;
 	attr->comm = track;
 
@@ -908,6 +911,7 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event,
 	data->cpu = data->pid = data->tid = -1;
 	data->stream_id = data->id = data->time = -1ULL;
 	data->period = 1;
+	data->weight = 0;
 
 	if (event->header.type != PERF_RECORD_SAMPLE) {
 		if (!evsel->attr.sample_id_all)
@@ -1058,6 +1062,12 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event,
 		}
 	}
 
+	data->weight = 0;
+	if (type & PERF_SAMPLE_WEIGHT) {
+		data->weight = *array;
+		array++;
+	}
+
 	return 0;
 }
 
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index cb17e2a..a8d7647 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -151,9 +151,11 @@ static void hist_entry__add_cpumode_period(struct hist_entry *he,
 	}
 }
 
-static void he_stat__add_period(struct he_stat *he_stat, u64 period)
+static void he_stat__add_period(struct he_stat *he_stat, u64 period,
+				u64 weight)
 {
 	he_stat->period		+= period;
+	he_stat->weight		+= weight;
 	he_stat->nr_events	+= 1;
 }
 
@@ -165,12 +167,14 @@ static void he_stat__add_stat(struct he_stat *dest, struct he_stat *src)
 	dest->period_guest_sys	+= src->period_guest_sys;
 	dest->period_guest_us	+= src->period_guest_us;
 	dest->nr_events		+= src->nr_events;
+	dest->weight		+= src->weight;
 }
 
 static void hist_entry__decay(struct hist_entry *he)
 {
 	he->stat.period = (he->stat.period * 7) / 8;
 	he->stat.nr_events = (he->stat.nr_events * 7) / 8;
+	/* XXX need decay for weight too? */
 }
 
 static bool hists__decay_entry(struct hists *hists, struct hist_entry *he)
@@ -270,7 +274,8 @@ static u8 symbol__parent_filter(const struct symbol *parent)
 static struct hist_entry *add_hist_entry(struct hists *hists,
 				      struct hist_entry *entry,
 				      struct addr_location *al,
-				      u64 period)
+				      u64 period,
+				      u64 weight)
 {
 	struct rb_node **p;
 	struct rb_node *parent = NULL;
@@ -288,7 +293,7 @@ static struct hist_entry *add_hist_entry(struct hists *hists,
 		cmp = hist_entry__cmp(entry, he);
 
 		if (!cmp) {
-			he_stat__add_period(&he->stat, period);
+			he_stat__add_period(&he->stat, period, weight);
 
 			/* If the map of an existing hist_entry has
 			 * become out-of-date due to an exec() or
@@ -327,7 +332,8 @@ struct hist_entry *__hists__add_branch_entry(struct hists *self,
 					     struct addr_location *al,
 					     struct symbol *sym_parent,
 					     struct branch_info *bi,
-					     u64 period)
+					     u64 period,
+					     u64 weight)
 {
 	struct hist_entry entry = {
 		.thread	= al->thread,
@@ -341,6 +347,7 @@ struct hist_entry *__hists__add_branch_entry(struct hists *self,
 		.stat = {
 			.period	= period,
 			.nr_events = 1,
+			.weight = weight,
 		},
 		.parent = sym_parent,
 		.filtered = symbol__parent_filter(sym_parent),
@@ -348,12 +355,13 @@ struct hist_entry *__hists__add_branch_entry(struct hists *self,
 		.hists	= self,
 	};
 
-	return add_hist_entry(self, &entry, al, period);
+	return add_hist_entry(self, &entry, al, period, weight);
 }
 
 struct hist_entry *__hists__add_entry(struct hists *self,
 				      struct addr_location *al,
-				      struct symbol *sym_parent, u64 period)
+				      struct symbol *sym_parent, u64 period,
+				      u64 weight)
 {
 	struct hist_entry entry = {
 		.thread	= al->thread,
@@ -367,13 +375,14 @@ struct hist_entry *__hists__add_entry(struct hists *self,
 		.stat = {
 			.period	= period,
 			.nr_events = 1,
+			.weight = weight,
 		},
 		.parent = sym_parent,
 		.filtered = symbol__parent_filter(sym_parent),
 		.hists	= self,
 	};
 
-	return add_hist_entry(self, &entry, al, period);
+	return add_hist_entry(self, &entry, al, period, weight);
 }
 
 int64_t
diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h
index 4ae7c25..3767453 100644
--- a/tools/perf/util/hist.h
+++ b/tools/perf/util/hist.h
@@ -51,6 +51,8 @@ enum hist_column {
 	HISTC_DSO_FROM,
 	HISTC_DSO_TO,
 	HISTC_SRCLINE,
+	HISTC_LOCAL_WEIGHT,
+	HISTC_GLOBAL_WEIGHT,
 	HISTC_NR_COLS, /* Last entry */
 };
 
@@ -75,7 +77,8 @@ struct hists {
 
 struct hist_entry *__hists__add_entry(struct hists *self,
 				      struct addr_location *al,
-				      struct symbol *parent, u64 period);
+				      struct symbol *parent, u64 period,
+				      u64 weight);
 int64_t hist_entry__cmp(struct hist_entry *left, struct hist_entry *right);
 int64_t hist_entry__collapse(struct hist_entry *left, struct hist_entry *right);
 int hist_entry__sort_snprintf(struct hist_entry *self, char *bf, size_t size,
@@ -86,7 +89,8 @@ struct hist_entry *__hists__add_branch_entry(struct hists *self,
 					     struct addr_location *al,
 					     struct symbol *sym_parent,
 					     struct branch_info *bi,
-					     u64 period);
+					     u64 period,
+					     u64 weight);
 
 void hists__output_resort(struct hists *self);
 void hists__output_resort_threaded(struct hists *hists);
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index ce6f511..3de9097 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -1006,6 +1006,9 @@ static void dump_sample(struct perf_evsel *evsel, union perf_event *event,
 
 	if (sample_type & PERF_SAMPLE_STACK_USER)
 		stack_user__printf(&sample->user_stack);
+
+	if (sample_type & PERF_SAMPLE_WEIGHT)
+		printf("... weight: %" PRIu64 "\n", sample->weight);
 }
 
 static struct machine *
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index a8d1f1a..76161c4 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -525,6 +525,49 @@ struct sort_entry sort_intx = {
 	.se_width_idx	= HISTC_INTX,
 };
 
+static u64 he_weight(struct hist_entry *he)
+{
+	return he->stat.nr_events ? he->stat.weight / he->stat.nr_events : 0;
+}
+
+static int64_t
+sort__local_weight_cmp(struct hist_entry *left, struct hist_entry *right)
+{
+	return he_weight(left) - he_weight(right);
+}
+
+static int hist_entry__local_weight_snprintf(struct hist_entry *self, char *bf,
+				    size_t size, unsigned int width)
+{
+	return repsep_snprintf(bf, size, "%-*llu", width, he_weight(self));
+}
+
+struct sort_entry sort_local_weight = {
+	.se_header	= "Local Weight",
+	.se_cmp		= sort__local_weight_cmp,
+	.se_snprintf	= hist_entry__local_weight_snprintf,
+	.se_width_idx	= HISTC_LOCAL_WEIGHT,
+};
+
+static int64_t
+sort__global_weight_cmp(struct hist_entry *left, struct hist_entry *right)
+{
+	return left->stat.weight - right->stat.weight;
+}
+
+static int hist_entry__global_weight_snprintf(struct hist_entry *self, char *bf,
+					      size_t size, unsigned int width)
+{
+	return repsep_snprintf(bf, size, "%-*llu", width, self->stat.weight);
+}
+
+struct sort_entry sort_global_weight = {
+	.se_header	= "Weight",
+	.se_cmp		= sort__global_weight_cmp,
+	.se_snprintf	= hist_entry__global_weight_snprintf,
+	.se_width_idx	= HISTC_GLOBAL_WEIGHT,
+};
+
 struct sort_dimension {
 	const char		*name;
 	struct sort_entry	*entry;
@@ -547,7 +590,9 @@ static struct sort_dimension sort_dimensions[] = {
 	DIM(SORT_MISPREDICT, "mispredict", sort_mispredict),
 	DIM(SORT_SRCLINE, "srcline", sort_srcline),
 	DIM(SORT_ABORT, "abort", sort_abort),
-	DIM(SORT_INTX, "intx", sort_intx)
+	DIM(SORT_INTX, "intx", sort_intx),
+	DIM(SORT_LOCAL_WEIGHT, "local_weight", sort_local_weight),
+	DIM(SORT_GLOBAL_WEIGHT, "weight", sort_global_weight),
 };
 
 int sort_dimension__add(const char *tok)
@@ -608,6 +653,10 @@ int sort_dimension__add(const char *tok)
 				sort__first_dimension = SORT_INTX;
 			else if (!strcmp(sd->name, "abort"))
 				sort__first_dimension = SORT_ABORT;
+			else if (!strcmp(sd->name, "weight"))
+				sort__first_dimension = SORT_GLOBAL_WEIGHT;
+			else if (!strcmp(sd->name, "local_weight"))
+				sort__first_dimension = SORT_LOCAL_WEIGHT;
 		}
 
 		list_add_tail(&sd->entry->list, &hist_entry__sort_list);
diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h
index f811a0a..4a22b1b 100644
--- a/tools/perf/util/sort.h
+++ b/tools/perf/util/sort.h
@@ -49,6 +49,7 @@ struct he_stat {
 	u64			period_us;
 	u64			period_guest_sys;
 	u64			period_guest_us;
+	u64			weight;
 	u32			nr_events;
 };
 
@@ -139,6 +140,8 @@ enum sort_type {
 	SORT_SRCLINE,
 	SORT_ABORT,
 	SORT_INTX,
+	SORT_LOCAL_WEIGHT,
+	SORT_GLOBAL_WEIGHT,
 };
 
 /*
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 17/29] perf, core: Add generic transaction flags v3
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (15 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 16/29] perf, tools: Add support for weight v7 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 18/29] perf, x86: Add Haswell specific transaction flag reporting Andi Kleen
                   ` (12 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add a generic qualifier for transaction events, as a new sample
type that returns a flag word. This is particularly useful for
qualifying aborts: to distinguish aborts caused by asynchronous events
(like conflicts caused by another CPU) from aborts caused directly by
an instruction.

The tuning strategies are very different for those cases,
so it's important to distinguish them easily and early.

Since it's inconvenient and inflexible to filter for this
in the kernel, we report all the events out and allow
some post-processing in user space.

The flags are based on the Intel TSX events, but should be fairly
generic and mostly applicable to other architectures too. In addition
to the various flag bits there's also reserved space to report a
program-supplied abort code. For TSX this is used to distinguish
specific classes of aborts, like a lock-busy abort when doing lock
elision.
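
Recovering the code from the reported flag word is a simple mask and
shift, using the constants defined below:

    /* Extract the program-supplied abort code from the flag word. */
    static unsigned int txn_abort_code(u64 txn)
    {
        return (txn & PERF_SAMPLE_TXN_ABORT_MASK) >>
                PERF_SAMPLE_TXN_ABORT_SHIFT;
    }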

This adds the perf core glue needed for reporting the new flag word out.

v2: Add MEM/MISC
v3: Move transaction to the end
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 include/linux/perf_event.h      |    2 ++
 include/uapi/linux/perf_event.h |   26 ++++++++++++++++++++++++--
 kernel/events/core.c            |    6 ++++++
 3 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index c9686c8..c32fba3 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -589,6 +589,7 @@ struct perf_sample_data {
 	struct perf_regs_user		regs_user;
 	u64				stack_user_size;
 	u64				weight;
+	u64				transaction;
 };
 
 static inline void perf_sample_data_init(struct perf_sample_data *data,
@@ -603,6 +604,7 @@ static inline void perf_sample_data_init(struct perf_sample_data *data,
 	data->regs_user.regs = NULL;
 	data->stack_user_size = 0;
 	data->weight = 0;
+	data->transaction = 0;
 }
 
 extern void perf_output_sample(struct perf_output_handle *handle,
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 6f80062..8c0d439 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -133,9 +133,9 @@ enum perf_event_sample_format {
 	PERF_SAMPLE_REGS_USER			= 1U << 12,
 	PERF_SAMPLE_STACK_USER			= 1U << 13,
 	PERF_SAMPLE_WEIGHT			= 1U << 14,
+	PERF_SAMPLE_TRANSACTION			= 1U << 15,
 
-	PERF_SAMPLE_MAX = 1U << 15,		/* non-ABI */
-
+	PERF_SAMPLE_MAX = 1U << 16,		/* non-ABI */
 };
 
 /*
@@ -179,6 +179,28 @@ enum perf_sample_regs_abi {
 };
 
 /*
+ * Values for the transaction event qualifier, mostly for abort events.
+ */
+enum {
+	PERF_SAMPLE_TXN_ELISION     = (1 << 0), /* From elision */
+	PERF_SAMPLE_TXN_TRANSACTION = (1 << 1), /* From transaction */
+	PERF_SAMPLE_TXN_SYNC        = (1 << 2), /* Instruction is related */
+	PERF_SAMPLE_TXN_ASYNC       = (1 << 3), /* Instruction not related */
+	PERF_SAMPLE_TXN_RETRY       = (1 << 4), /* Retry possible */
+	PERF_SAMPLE_TXN_CONFLICT    = (1 << 5), /* Conflict abort */
+	PERF_SAMPLE_TXN_CAPACITY    = (1 << 6), /* Capacity abort */
+	PERF_SAMPLE_TXN_MEMORY      = (1 << 7), /* Memory related abort */
+	PERF_SAMPLE_TXN_MISC        = (1 << 8), /* Misc aborts */
+
+	PERF_SAMPLE_TXN_MAX	    = (1 << 9),  /* non-ABI */
+
+	/* bits 24..31 are reserved for the abort code */
+
+	PERF_SAMPLE_TXN_ABORT_MASK  = 0xff000000,
+	PERF_SAMPLE_TXN_ABORT_SHIFT = 24,
+};
+
+/*
  * The format of the data returned by read() on a perf event fd,
  * as specified by attr.read_format:
  *
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 749bdf4..b4078a0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -955,6 +955,9 @@ static void perf_event__header_size(struct perf_event *event)
 	if (sample_type & PERF_SAMPLE_WEIGHT)
 		size += sizeof(data->weight);
 
+	if (sample_type & PERF_SAMPLE_TRANSACTION)
+		size += sizeof(data->transaction);
+
 	if (sample_type & PERF_SAMPLE_READ)
 		size += event->read_size;
 
@@ -4175,6 +4178,9 @@ void perf_output_sample(struct perf_output_handle *handle,
 
 	if (sample_type & PERF_SAMPLE_WEIGHT)
 		perf_output_put(handle, data->weight);
+
+	if (sample_type & PERF_SAMPLE_TRANSACTION)
+		perf_output_put(handle, data->transaction);
 }
 
 void perf_prepare_sample(struct perf_event_header *header,
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 18/29] perf, x86: Add Haswell specific transaction flag reporting
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (16 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 17/29] perf, core: Add generic transaction flags v3 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 19/29] perf, tools: Add support for record transaction flags v3 Andi Kleen
                   ` (11 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

In the PEBS handler, report the transaction flags using the new
generic transaction flags facility. Most of them come from
the "tsx_tuning" field in PEBSv2, but the abort code is derived
from the RAX register reported in the PEBS record.
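
The abort code originates from an XABORT immediate: the CPU places the
imm8 in EAX bits 31:24 and sets bit 0, which is what the check below
relies on. A user-space sketch of producing such a code with the RTM
intrinsics (compile with -mrtm; the abort code value is just an
example):

    #include <immintrin.h>

    #define LOCK_BUSY 0xff  /* example abort code, lands in EAX 31:24 */

    /* Returns 0 on commit, otherwise the raw abort status. */
    unsigned int elide_lock(volatile int *locked)
    {
        unsigned int status = _xbegin();

        if (status == _XBEGIN_STARTED) {
            if (*locked)
                _xabort(LOCK_BUSY);
            /* ... critical section ... */
            _xend();
            return 0;
        }
        return status;
    }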

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel_ds.c |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 3094caa..4b657c2 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -677,6 +677,15 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
 			data.weight = ((struct pebs_record_v2 *)pebs)->nhm.lat;
 	}
 
+	if ((event->attr.sample_type & PERF_SAMPLE_TRANSACTION) &&
+	    x86_pmu.intel_cap.pebs_format >= 2) {
+		data.transaction =
+		     ((struct pebs_record_v2 *)pebs)->tsx_tuning >> 32;
+		if ((data.transaction & PERF_SAMPLE_TXN_TRANSACTION) &&
+		    (pebs->ax & 1))
+			data.transaction |= pebs->ax & 0xff000000;
+	}
+
 	if (has_branch_stack(event))
 		data.br_stack = &cpuc->lbr_stack;
 
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 19/29] perf, tools: Add support for record transaction flags v3
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (17 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 18/29] perf, x86: Add Haswell specific transaction flag reporting Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 20/29] perf, tools: Add browser support for transaction flags v5 Andi Kleen
                   ` (10 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add the glue in the user tools to record transaction flags with
--transaction (-T was already taken) and dump them.

Follow-on patches will use them.

v2: Fix manpage
v3: Move transaction to the end
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/Documentation/perf-record.txt |    4 +++-
 tools/perf/builtin-record.c              |    2 ++
 tools/perf/perf.h                        |    1 +
 tools/perf/util/event.h                  |    1 +
 tools/perf/util/evsel.c                  |    9 +++++++++
 tools/perf/util/session.c                |    3 +++
 6 files changed, 19 insertions(+), 1 deletions(-)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index 6f3405e..c73dd25 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -185,12 +185,14 @@ is enabled for all the sampling events. The sampled branch type is the same for
 The various filters must be specified as a comma separated list: --branch-filter any_ret,u,k
 Note that this feature may not be available on all processors.
 
--W::
 --weight::
 Enable weighted sampling. An additional weight is recorded per sample and can be
 displayed with the weight and local_weight sort keys.  This currently works for TSX
 abort events and some memory events in precise mode on modern Intel CPUs.
 
+--transaction::
+Record transaction flags for transaction-related events.
+
 SEE ALSO
 --------
 linkperf:perf-stat[1], linkperf:perf-list[1]
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 4e568aa..8b81f3e 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1064,6 +1064,8 @@ const struct option record_options[] = {
 		     parse_branch_stack),
 	OPT_BOOLEAN('W', "weight", &record.opts.sample_weight,
 		    "sample by weight (on special events only)"),
+	OPT_BOOLEAN(0, "transaction", &record.opts.sample_transaction,
+		    "sample transaction flags (special events only)"),
 	OPT_END()
 };
 
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index 7058155..025cc53 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -250,6 +250,7 @@ struct perf_record_opts {
 	u64	     default_interval;
 	u64	     user_interval;
 	u16	     stack_dump_size;
+	bool	     sample_transaction;
 };
 
 #endif
diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
index a97fbbe..84b070d 100644
--- a/tools/perf/util/event.h
+++ b/tools/perf/util/event.h
@@ -89,6 +89,7 @@ struct perf_sample {
 	u64 stream_id;
 	u64 period;
 	u64 weight;
+	u64 transaction;
 	u32 cpu;
 	u32 raw_size;
 	void *raw_data;
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 805d33e..68145f4 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -513,6 +513,9 @@ void perf_evsel__config(struct perf_evsel *evsel,
 	if (opts->sample_weight)
 		attr->sample_type	|= PERF_SAMPLE_WEIGHT;
 
+	if (opts->sample_transaction)
+		attr->sample_type	|= PERF_SAMPLE_TRANSACTION;
+
 	attr->mmap = track;
 	attr->comm = track;
 
@@ -1068,6 +1071,12 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event,
 		array++;
 	}
 
+	data->transaction = 0;
+	if (type & PERF_SAMPLE_TRANSACTION) {
+		data->transaction = *array;
+		array++;
+	}
+
 	return 0;
 }
 
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index 3de9097..076dd77 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -1009,6 +1009,9 @@ static void dump_sample(struct perf_evsel *evsel, union perf_event *event,
 
 	if (sample_type & PERF_SAMPLE_WEIGHT)
 		printf("... weight: %" PRIu64 "\n", sample->weight);
+
+	if (sample_type & PERF_SAMPLE_TRANSACTION)
+		printf("... transaction: %" PRIx64 "\n", sample->transaction);
 }
 
 static struct machine *
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 20/29] perf, tools: Add browser support for transaction flags v5
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (18 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 19/29] perf, tools: Add support for record transaction flags v3 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 21/29] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset Andi Kleen
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add histogram support for the transaction flags. Each distinct flags
value becomes a separate histogram entry. Support sorting and
displaying the flags in report and top.
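
For example, with the decoding added below, a synchronous abort inside
an RTM transaction carrying user abort code 0xfe would render roughly
as (illustrative, not actual tool output):

    TX SYNC :fe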

The patch is fairly large, but it's really mostly just plumbing to pass the
flags around.

v2: Increase column. Fix flags decoding. Use longer strings for flags
to be more user friendly.
v3: Fix WERROR=1 build. Tidy display
v4: Readd sort keys to manpage
v5: Reimplement transaction flags display code
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/Documentation/perf-report.txt |    2 +-
 tools/perf/Documentation/perf-top.txt    |    2 +-
 tools/perf/builtin-annotate.c            |    2 +-
 tools/perf/builtin-diff.c                |    8 ++-
 tools/perf/builtin-report.c              |    4 +-
 tools/perf/builtin-top.c                 |    4 +-
 tools/perf/util/hist.c                   |    7 ++-
 tools/perf/util/hist.h                   |    4 +-
 tools/perf/util/sort.c                   |   75 ++++++++++++++++++++++++++++++
 tools/perf/util/sort.h                   |    2 +
 10 files changed, 98 insertions(+), 12 deletions(-)

diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
index 5dabd4d..87224ff 100644
--- a/tools/perf/Documentation/perf-report.txt
+++ b/tools/perf/Documentation/perf-report.txt
@@ -59,7 +59,7 @@ OPTIONS
 --sort=::
 	Sort by key(s): pid, comm, dso, symbol, parent, srcline,
         dso_from, dso_to, symbol_to, symbol_from, mispredict,
-        abort, intx, local_weight, weight
+        abort, intx, local_weight, weight, transaction
 
 -p::
 --parent=<regex>::
diff --git a/tools/perf/Documentation/perf-top.txt b/tools/perf/Documentation/perf-top.txt
index 3533e0a..82cc8e1 100644
--- a/tools/perf/Documentation/perf-top.txt
+++ b/tools/perf/Documentation/perf-top.txt
@@ -114,7 +114,7 @@ Default is to monitor all CPUS.
 --sort::
 	Sort by key(s): pid, comm, dso, symbol, parent, srcline,
         dso_from, dso_to, symbol_to, symbol_from, mispredict,
-        abort, intx,  local_weight, weight
+        abort, intx,  local_weight, weight, transaction
 
 -n::
 --show-nr-samples::
diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
index 1bacb7d..cb8fa96 100644
--- a/tools/perf/builtin-annotate.c
+++ b/tools/perf/builtin-annotate.c
@@ -62,7 +62,7 @@ static int perf_evsel__add_sample(struct perf_evsel *evsel,
 		return 0;
 	}
 
-	he = __hists__add_entry(&evsel->hists, al, NULL, 1, 1);
+	he = __hists__add_entry(&evsel->hists, al, NULL, 1, 1, 0);
 	if (he == NULL)
 		return -ENOMEM;
 
diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
index 03a322f..3fccdc4 100644
--- a/tools/perf/builtin-diff.c
+++ b/tools/perf/builtin-diff.c
@@ -249,9 +249,10 @@ int perf_diff__formula(char *buf, size_t size, struct hist_entry *he)
 
 static int hists__add_entry(struct hists *self,
 			    struct addr_location *al, u64 period,
-			    u64 weight)
+			    u64 weight, u64 transaction)
 {
-	if (__hists__add_entry(self, al, NULL, period, weight) != NULL)
+	if (__hists__add_entry(self, al, NULL, period, weight, transaction)
+	    != NULL)
 		return 0;
 	return -ENOMEM;
 }
@@ -273,7 +274,8 @@ static int diff__process_sample_event(struct perf_tool *tool __maybe_unused,
 	if (al.filtered)
 		return 0;
 
-	if (hists__add_entry(&evsel->hists, &al, sample->period, sample->weight)) {
+	if (hists__add_entry(&evsel->hists, &al, sample->period, sample->weight,
+			     sample->transaction)) {
 		pr_warning("problem incrementing symbol period, skipping event\n");
 		return -1;
 	}
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 5dc0edd..e6a74ef 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -147,7 +147,7 @@ static int perf_evsel__add_hist_entry(struct perf_evsel *evsel,
 	}
 
 	he = __hists__add_entry(&evsel->hists, al, parent, sample->period,
-					sample->weight);
+				sample->weight, sample->transaction);
 	if (he == NULL)
 		return -ENOMEM;
 
@@ -598,7 +598,7 @@ int cmd_report(int argc, const char **argv, const char *prefix __maybe_unused)
 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
 		   "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
 		   " dso_from, symbol_to, symbol_from, mispredict, srcline,"
-		   " abort, intx,  weight, local_weight"),
+		   " abort, intx,  weight, local_weight, transaction"),
 	OPT_BOOLEAN(0, "showcpuutilization", &symbol_conf.show_cpu_utilization,
 		    "Show sample percentage for different cpu modes"),
 	OPT_STRING('p', "parent", &parent_pattern, "regex",
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 9f87db7..3d81721 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -272,7 +272,7 @@ static struct hist_entry *perf_evsel__add_hist_entry(struct perf_evsel *evsel,
 	struct hist_entry *he;
 
 	he = __hists__add_entry(&evsel->hists, al, NULL, sample->period,
-				sample->weight);
+				sample->weight, sample->transaction);
 	if (he == NULL)
 		return NULL;
 
@@ -1233,7 +1233,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
 	OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
 		   "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
 		   " dso_from, symbol_to, symbol_from, mispredict, srcline,"
-		   " abort, intx, weight, local_weight"),
+		   " abort, intx, weight, local_weight, transaction"),
 	OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
 		    "Show a column with the number of samples"),
 	OPT_CALLBACK_DEFAULT('G', "call-graph", &top, "output_type,min_percent, call_order",
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index a8d7647..28861de 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -112,6 +112,10 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 			hists__set_unres_dso_col_len(hists, HISTC_DSO_TO);
 		}
 	}
+
+	if (h->transaction)
+		hists__new_col_len(hists, HISTC_TRANSACTION,
+				   hist_entry__transaction_len());
 }
 
 void hists__output_recalc_col_len(struct hists *hists, int max_rows)
@@ -361,7 +365,7 @@ struct hist_entry *__hists__add_branch_entry(struct hists *self,
 struct hist_entry *__hists__add_entry(struct hists *self,
 				      struct addr_location *al,
 				      struct symbol *sym_parent, u64 period,
-				      u64 weight)
+				      u64 weight, u64 transaction)
 {
 	struct hist_entry entry = {
 		.thread	= al->thread,
@@ -380,6 +384,7 @@ struct hist_entry *__hists__add_entry(struct hists *self,
 		.parent = sym_parent,
 		.filtered = symbol__parent_filter(sym_parent),
 		.hists	= self,
+		.transaction = transaction,
 	};
 
 	return add_hist_entry(self, &entry, al, period, weight);
diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h
index 3767453..3c37f2e 100644
--- a/tools/perf/util/hist.h
+++ b/tools/perf/util/hist.h
@@ -53,6 +53,7 @@ enum hist_column {
 	HISTC_SRCLINE,
 	HISTC_LOCAL_WEIGHT,
 	HISTC_GLOBAL_WEIGHT,
+	HISTC_TRANSACTION,
 	HISTC_NR_COLS, /* Last entry */
 };
 
@@ -78,9 +79,10 @@ struct hists {
 struct hist_entry *__hists__add_entry(struct hists *self,
 				      struct addr_location *al,
 				      struct symbol *parent, u64 period,
-				      u64 weight);
+				      u64 weight, u64 transaction);
 int64_t hist_entry__cmp(struct hist_entry *left, struct hist_entry *right);
 int64_t hist_entry__collapse(struct hist_entry *left, struct hist_entry *right);
+int hist_entry__transaction_len(void);
 int hist_entry__sort_snprintf(struct hist_entry *self, char *bf, size_t size,
 			      struct hists *hists);
 void hist_entry__free(struct hist_entry *);
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index 76161c4..3f1165e 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -568,6 +568,78 @@ struct sort_entry sort_global_weight = {
 	.se_width_idx	= HISTC_GLOBAL_WEIGHT,
 };
 
+static int64_t
+sort__transaction_cmp(struct hist_entry *left, struct hist_entry *right)
+{
+	return left->transaction - right->transaction;
+}
+
+static inline char *add_str(char *p, const char *str)
+{
+	strcpy(p, str);
+	return p + strlen(str);
+}
+
+static struct txbit {
+	unsigned flag;
+	const char *name;
+	int skip_for_len;
+} txbits[] = {
+	{ PERF_SAMPLE_TXN_ELISION,     "EL ", 0 },
+	{ PERF_SAMPLE_TXN_TRANSACTION, "TX ", 1 },
+	{ PERF_SAMPLE_TXN_SYNC,        "SYNC ", 1 },
+	{ PERF_SAMPLE_TXN_ASYNC,       "ASYNC ", 0 },
+	{ PERF_SAMPLE_TXN_RETRY,       "RETRY ", 0 },
+	{ PERF_SAMPLE_TXN_CONFLICT,    "CON ", 0 },
+	{ PERF_SAMPLE_TXN_CAPACITY,    "CAP ", 1 },
+	{ PERF_SAMPLE_TXN_MEMORY,      "MEM ", 0 },
+	{ PERF_SAMPLE_TXN_MISC,        "MISC ", 0 },
+	{ 0, NULL, 0 }
+};
+
+int hist_entry__transaction_len(void)
+{
+	int i;
+	int len = 0;
+
+	for (i = 0; txbits[i].name; i++) {
+		if (!txbits[i].skip_for_len)
+			len += strlen(txbits[i].name);
+	}
+	len += 4; /* :XX<space> */
+	return len;
+}
+
+static int hist_entry__transaction_snprintf(struct hist_entry *self, char *bf,
+					    size_t size, unsigned int width)
+{
+	u64 t = self->transaction;
+	char buf[128];
+	char *p = buf;
+	int i;
+
+	for (i = 0; txbits[i].name; i++)
+		if (txbits[i].flag & t)
+			p = add_str(p, txbits[i].name);
+	if (t && !(t & (PERF_SAMPLE_TXN_SYNC|PERF_SAMPLE_TXN_ASYNC)))
+		p = add_str(p, "NEITHER ");
+	if (t & PERF_SAMPLE_TXN_ABORT_MASK) {
+		sprintf(p, ":%" PRIx64,
+			(t & PERF_SAMPLE_TXN_ABORT_MASK) >>
+			PERF_SAMPLE_TXN_ABORT_SHIFT);
+		p += strlen(p);
+	}
+
+	return repsep_snprintf(bf, size, "%-*s", width, buf);
+}
+
+struct sort_entry sort_transaction = {
+	.se_header	= "Transaction                ",
+	.se_cmp		= sort__transaction_cmp,
+	.se_snprintf	= hist_entry__transaction_snprintf,
+	.se_width_idx	= HISTC_TRANSACTION,
+};
+
 struct sort_dimension {
 	const char		*name;
 	struct sort_entry	*entry;
@@ -593,6 +665,7 @@ static struct sort_dimension sort_dimensions[] = {
 	DIM(SORT_INTX, "intx", sort_intx),
 	DIM(SORT_LOCAL_WEIGHT, "local_weight", sort_local_weight),
 	DIM(SORT_GLOBAL_WEIGHT, "weight", sort_global_weight),
+	DIM(SORT_TRANSACTION, "transaction", sort_transaction),
 };
 
 int sort_dimension__add(const char *tok)
@@ -657,6 +730,8 @@ int sort_dimension__add(const char *tok)
 				sort__first_dimension = SORT_GLOBAL_WEIGHT;
 			else if (!strcmp(sd->name, "local_weight"))
 				sort__first_dimension = SORT_LOCAL_WEIGHT;
+			else if (!strcmp(sd->name, "transaction"))
+				sort__first_dimension = SORT_TRANSACTION;
 		}
 
 		list_add_tail(&sd->entry->list, &hist_entry__sort_list);
diff --git a/tools/perf/util/sort.h b/tools/perf/util/sort.h
index 4a22b1b..12530af0 100644
--- a/tools/perf/util/sort.h
+++ b/tools/perf/util/sort.h
@@ -86,6 +86,7 @@ struct hist_entry {
 	struct map_symbol	ms;
 	struct thread		*thread;
 	u64			ip;
+	u64			transaction;
 	s32			cpu;
 
 	struct hist_entry_diff	diff;
@@ -142,6 +143,7 @@ enum sort_type {
 	SORT_INTX,
 	SORT_LOCAL_WEIGHT,
 	SORT_GLOBAL_WEIGHT,
+	SORT_TRANSACTION,
 };
 
 /*
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 21/29] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (19 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 20/29] perf, tools: Add browser support for transaction flags v5 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 22/29] tools, perf: Add a precise event qualifier v2 Andi Kleen
                   ` (8 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. This shouldn't make any difference
for earlier non-P4 cores.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c |   15 +++++----------
 1 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 20caf0a..d8acedd 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1133,16 +1133,6 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
 
-	/*
-	 * Some chipsets need to unmask the LVTPC in a particular spot
-	 * inside the nmi handler.  As a result, the unmasking was pushed
-	 * into all the nmi handlers.
-	 *
-	 * This handler doesn't seem to have any issues with the unmasking
-	 * so it was left at the top.
-	 */
-	apic_write(APIC_LVTPC, APIC_DM_NMI);
-
 	intel_pmu_disable_all();
 	handled = intel_pmu_drain_bts_buffer();
 	status = intel_pmu_get_status();
@@ -1211,6 +1201,11 @@ again:
 
 done:
 	intel_pmu_enable_all(0);
+	/*
+	 * Only unmask the NMI after the overflow counters
+	 * have been reset.
+	 */
+	apic_write(APIC_LVTPC, APIC_DM_NMI);
 	return handled;
 }
 
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 22/29] tools, perf: Add a precise event qualifier v2
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (20 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 21/29] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 23/29] perf, x86: improve sysfs event mapping with event string Andi Kleen
                   ` (7 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add a precise qualifier, like cpu/event=0x3c,precise=1/

This is needed so that the kernel can request enabling PEBS
for TSX events. The parser bails out on any sysfs parse error,
so this is needed in any case to handle such events on a
TSX-enabled perf kernel.
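
The qualifier maps straight onto perf_event_attr.precise_ip, which
takes values 0-3 (0 arbitrary skid, 1 constant skid, 2 requested zero
skid, 3 zero skid required). A sketch of the range check the parser
performs:

    /* Mirrors the parser's validation before setting precise_ip. */
    static int set_precise(struct perf_event_attr *attr, unsigned int level)
    {
        if (level > 3)
            return -EINVAL;
        attr->precise_ip = level;
        return 0;
    }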

v2: Allow 3 as value
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/util/parse-events.c |    6 ++++++
 tools/perf/util/parse-events.h |    1 +
 tools/perf/util/parse-events.l |    1 +
 3 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 2d8d53be..5a157b0 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -526,6 +526,12 @@ do {								\
 	case PARSE_EVENTS__TERM_TYPE_NAME:
 		CHECK_TYPE_VAL(STR);
 		break;
+	case PARSE_EVENTS__TERM_TYPE_PRECISE:
+		CHECK_TYPE_VAL(NUM);
+		if ((unsigned)term->val.num > 3)
+			return -EINVAL;
+		attr->precise_ip = term->val.num;
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
index b7af80b..bed2c90 100644
--- a/tools/perf/util/parse-events.h
+++ b/tools/perf/util/parse-events.h
@@ -49,6 +49,7 @@ enum {
 	PARSE_EVENTS__TERM_TYPE_NAME,
 	PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD,
 	PARSE_EVENTS__TERM_TYPE_BRANCH_SAMPLE_TYPE,
+	PARSE_EVENTS__TERM_TYPE_PRECISE,
 };
 
 struct parse_events__term {
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index e9d1134..32a9000 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -169,6 +169,7 @@ period			{ return term(yyscanner, PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD); }
 branch_type		{ return term(yyscanner, PARSE_EVENTS__TERM_TYPE_BRANCH_SAMPLE_TYPE); }
 ,			{ return ','; }
 "/"			{ BEGIN(INITIAL); return '/'; }
+precise			{ return term(yyscanner, PARSE_EVENTS__TERM_TYPE_PRECISE); }
 {name_minus}		{ return str(yyscanner, PE_NAME); }
 }
 
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 23/29] perf, x86: improve sysfs event mapping with event string
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (21 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 22/29] tools, perf: Add a precise event qualifier v2 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 24/29] perf, x86: Support CPU specific sysfs events Andi Kleen
                   ` (6 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Stephane Eranian <eranian@google.com>

This patch extends Jiri's changes that made the generic
event mappings visible via sysfs. It extends the
mechanism to non-generic events by allowing
the mappings to be hardcoded as strings.

This mechanism will be used by the PEBS-LL patch
later on.
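
As an illustration (the alias name and event code here are made up),
a hardcoded mapping can then look like:

	EVENT_ATTR_STR(my-event, my_event, "event=0xc0,umask=0x1");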

[AK: Make events_sysfs_show unstatic again to fix compilation]
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event.c |   28 +++++++++++++---------------
 arch/x86/kernel/cpu/perf_event.h |   26 ++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index ec3c549..6cdc012 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -1316,20 +1316,22 @@ static struct attribute_group x86_pmu_format_group = {
 	.attrs = NULL,
 };
 
-struct perf_pmu_events_attr {
-	struct device_attribute attr;
-	u64 id;
-};
-
 /*
  * Remove all undefined events (x86_pmu.event_map(id) == 0)
  * out of events_attr attributes.
  */
 static void __init filter_events(struct attribute **attrs)
 {
+	struct device_attribute *d;
+	struct perf_pmu_events_attr *pmu_attr;
 	int i, j;
 
 	for (i = 0; attrs[i]; i++) {
+		d = (struct device_attribute *)attrs[i];
+		pmu_attr = container_of(d, struct perf_pmu_events_attr, attr);
+		/* str trumps id */
+		if (pmu_attr->event_str)
+			continue;
 		if (x86_pmu.event_map(i))
 			continue;
 
@@ -1341,24 +1343,20 @@ static void __init filter_events(struct attribute **attrs)
 	}
 }
 
-static ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
+ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
 			  char *page)
 {
 	struct perf_pmu_events_attr *pmu_attr = \
 		container_of(attr, struct perf_pmu_events_attr, attr);
 
 	u64 config = x86_pmu.event_map(pmu_attr->id);
-	return x86_pmu.events_sysfs_show(page, config);
-}
 
-#define EVENT_VAR(_id)  event_attr_##_id
-#define EVENT_PTR(_id) &event_attr_##_id.attr.attr
+	/* string trumps id */
+	if (pmu_attr->event_str)
+		return sprintf(page, "%s", pmu_attr->event_str);
 
-#define EVENT_ATTR(_name, _id)					\
-static struct perf_pmu_events_attr EVENT_VAR(_id) = {		\
-	.attr = __ATTR(_name, 0444, events_sysfs_show, NULL),	\
-	.id   =  PERF_COUNT_HW_##_id,				\
-};
+	return x86_pmu.events_sysfs_show(page, config);
+}
 
 EVENT_ATTR(cpu-cycles,			CPU_CYCLES		);
 EVENT_ATTR(instructions,		INSTRUCTIONS		);
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index d55e502..8253b73 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -425,6 +425,29 @@ do {									\
 #define ERF_NO_HT_SHARING	1
 #define ERF_HAS_RSP_1		2
 
+#define EVENT_VAR(_id)  event_attr_##_id
+#define EVENT_PTR(_id) &event_attr_##_id.attr.attr
+
+#define EVENT_ATTR(_name, _id)					\
+static struct perf_pmu_events_attr EVENT_VAR(_id) = {		\
+	.attr = __ATTR(_name, 0444, events_sysfs_show, NULL),	\
+	.id   =  PERF_COUNT_HW_##_id,				\
+	.event_str = NULL,					\
+};
+
+#define EVENT_ATTR_STR(_name, v, str)				  \
+static struct perf_pmu_events_attr event_attr_##v = {		  \
+	.attr      = __ATTR(_name, 0444, events_sysfs_show, NULL),\
+	.id        =  0,					  \
+	.event_str =  str,					  \
+};
+
+struct perf_pmu_events_attr {
+	struct device_attribute attr;
+	u64 id;
+	const char *event_str;
+};
+
 extern struct x86_pmu x86_pmu __read_mostly;
 
 DECLARE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
@@ -643,6 +666,9 @@ int p6_pmu_init(void);
 
 int knc_pmu_init(void);
 
+ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
+			  char *page);
+
 #else /* CONFIG_CPU_SUP_INTEL */
 
 static inline void reserve_ds_buffers(void)
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 24/29] perf, x86: Support CPU specific sysfs events
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (22 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 23/29] perf, x86: improve sysfs event mapping with event string Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 25/29] perf, x86: Add Haswell TSX event aliases v2 Andi Kleen
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add a way for the CPU initialization code to register additional events,
and merge them into the events attribute directory. Used in the next
patch.
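
For example, a CPU setup path can then hook in a NULL-terminated
attribute list (as the Haswell patch later in this series does):

	x86_pmu.cpu_events = hsw_events_attrs;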

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event.c |   29 +++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/perf_event.h |    1 +
 2 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 6cdc012..e3a202a 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -1343,6 +1343,30 @@ static void __init filter_events(struct attribute **attrs)
 	}
 }
 
+/* Merge two pointer arrays */
+static __init struct attribute **merge_attr(struct attribute **a,
+					    struct attribute **b)
+{
+	struct attribute **new;
+	int j, i;
+
+	for (j = 0; a[j]; j++)
+		;
+	for (i = 0; b[i]; i++)
+		j++;
+	j++;
+	new = kmalloc(sizeof(struct attribute *) * j, GFP_KERNEL);
+	if (!new)
+		return a;
+	j = 0;
+	for (i = 0; a[i]; i++)
+		new[j++] = a[i];
+	for (i = 0; b[i]; i++)
+		new[j++] = b[i];
+	new[j] = NULL;
+	return new;
+}
+
 ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
 			  char *page)
 {
@@ -1480,6 +1504,11 @@ static int __init init_hw_perf_events(void)
 	else
 		filter_events(x86_pmu_events_group.attrs);
 
+	if (x86_pmu.cpu_events)
+		x86_pmu_events_group.attrs =
+			merge_attr(x86_pmu_events_group.attrs,
+				   x86_pmu.cpu_events);
+
 	pr_info("... version:                %d\n",     x86_pmu.version);
 	pr_info("... bit width:              %d\n",     x86_pmu.cntval_bits);
 	pr_info("... generic registers:      %d\n",     x86_pmu.num_counters);
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 8253b73..ba5d043 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -360,6 +360,7 @@ struct x86_pmu {
 	struct attribute **format_attrs;
 
 	ssize_t		(*events_sysfs_show)(char *page, u64 config);
+	struct attribute **cpu_events;
 
 	/*
 	 * CPU Hotplug hooks
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 25/29] perf, x86: Add Haswell TSX event aliases v2
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (23 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 24/29] perf, x86: Support CPU specific sysfs events Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 26/29] perf, tools: Add perf stat --transaction v2 Andi Kleen
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add infrastructure to generate event aliases in /sys/devices/cpu/events/,
and use it to set up user-friendly aliases for the common TSX events.
TSX tuning relies heavily on the PMU, so it's important to be user friendly.

This replaces the generic transaction events in an earlier version
of this patchkit.

tx-start/commit/abort  to count RTM transactions
el-start/commit/abort  to count HLE ("elision") transactions
tx-conflict/overflow   to count conflict/overflow for both combined.

The general abort events exist in precise and non-precise variants,
since the common case is sampling plain "tx-aborts" precisely.

This is very important because abort sampling only really works
with PEBS enabled; otherwise it would report the IP after the abort,
not the abort point. But counting with PEBS has more overhead,
so there are also tx/el-abort-count aliases that do not enable PEBS,
for use with perf stat.

It would be nice to switch automatically between those two, like in the
previous version, but that would need more new infrastructure for sysfs
first.

There is a tx-abort<->tx-aborts alias too, because I found myself
using both variants.

Also added friendly aliases for cpu/cycles,intx=1/ and
cpu/cycles,intx=1,intx_cp=1/ and the same for instructions.
These will be used by perf stat -T, and are also useful for users directly.

So, for example, transactional cycles can be counted with "perf stat -e cycles-t".
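
The alias definitions can also be inspected directly in sysfs, e.g.:

$ cat /sys/devices/cpu/events/cycles-t
event=0x3c,intx=1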

v2: Move to new sysfs infrastructure
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c |   47 ++++++++++++++++++++++++++++++++
 1 files changed, 47 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index d8acedd..022246a 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2004,6 +2004,52 @@ static __init void intel_nehalem_quirk(void)
 	}
 }
 
+/* Haswell special events */
+EVENT_ATTR_STR(tx-start,       tx_start,       "event=0xc9,umask=0x1");
+EVENT_ATTR_STR(tx-commit,      tx_commit,      "event=0xc9,umask=0x2");
+EVENT_ATTR_STR(tx-abort,       tx_abort,       "event=0xc9,umask=0x4,precise=2");
+EVENT_ATTR_STR(tx-abort-count, tx_abort_count, "event=0xc9,umask=0x4");
+/* alias */
+EVENT_ATTR_STR(tx-aborts,      tx_aborts,      "event=0xc9,umask=0x4,precise=2");
+EVENT_ATTR_STR(tx-capacity,    tx_capacity,    "event=0x54,umask=0x2");
+EVENT_ATTR_STR(tx-conflict,    tx_conflict,    "event=0x54,umask=0x1");
+EVENT_ATTR_STR(el-start,       el_start,       "event=0xc8,umask=0x1");
+EVENT_ATTR_STR(el-commit,      el_commit,      "event=0xc8,umask=0x2");
+EVENT_ATTR_STR(el-abort,       el_abort,       "event=0xc8,umask=0x4,precise=2");
+EVENT_ATTR_STR(el-abort-count, el_abort_count, "event=0xc8,umask=0x4");
+/* alias */
+EVENT_ATTR_STR(el-aborts,      el_aborts,      "event=0xc8,umask=0x4,precise=2");
+/* shared with tx-* */
+EVENT_ATTR_STR(el-capacity,    el_capacity,    "event=0x54,umask=0x2");
+/* shared with tx-* */
+EVENT_ATTR_STR(el-conflict,    el_conflict,    "event=0x54,umask=0x1");
+EVENT_ATTR_STR(cycles-t,       cycles_t,       "event=0x3c,intx=1");
+EVENT_ATTR_STR(cycles-ct,      cycles_ct,      "event=0x3c,intx=1,intx_cp=1");
+EVENT_ATTR_STR(instructions-t, instructions_t, "event=0xc0,intx=1");
+EVENT_ATTR_STR(instructions-ct,instructions_ct,"event=0xc0,intx=1,intx_cp=1");
+
+static struct attribute *hsw_events_attrs[] = {
+	EVENT_PTR(tx_start),
+	EVENT_PTR(tx_commit),
+	EVENT_PTR(tx_abort),
+	EVENT_PTR(tx_aborts),
+	EVENT_PTR(tx_abort_count),
+	EVENT_PTR(tx_capacity),
+	EVENT_PTR(tx_conflict),
+	EVENT_PTR(el_start),
+	EVENT_PTR(el_commit),
+	EVENT_PTR(el_abort),
+	EVENT_PTR(el_aborts),
+	EVENT_PTR(el_abort_count),
+	EVENT_PTR(el_capacity),
+	EVENT_PTR(el_conflict),
+	EVENT_PTR(cycles_t),
+	EVENT_PTR(cycles_ct),
+	EVENT_PTR(instructions_t),
+	EVENT_PTR(instructions_ct),
+	NULL
+};
+
 __init int intel_pmu_init(void)
 {
 	union cpuid10_edx edx;
@@ -2229,6 +2275,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.get_event_constraints = hsw_get_event_constraints;
 		x86_pmu.format_attrs = intel_hsw_formats_attr;
 		x86_pmu.memory_lat_events = intel_hsw_memory_latency_events;
+		x86_pmu.cpu_events = hsw_events_attrs;
 		pr_cont("Haswell events, ");
 		break;
 
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 26/29] perf, tools: Add perf stat --transaction v2
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (24 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 25/29] perf, x86: Add Haswell TSX event aliases v2 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 27/29] perf, x86: Add a Haswell precise instructions event v2 Andi Kleen
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add support to perf stat to print the basic transactional execution statistics:
total cycles, cycles in transaction, and cycles in aborted transactions
(using the intx and intx_checkpoint qualifiers), plus transaction starts
and elision starts, to compute the average transaction length.
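
In terms of the events above, the derived metrics printed are roughly:

	transactional cycles %  =  cycles-t / cycles
	aborted cycles %        =  (cycles-t - cycles-ct) / cycles
	cycles per transaction  =  cycles-t / tx-start
	cycles per elision      =  cycles-t / el-start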

This gives a reasonable overview of how successful the transactions are.

Enable with a new --transaction / -T option.
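
For example (workload name made up):

perf stat -T ./my-tsx-program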

This requires measuring these events in a group, since they depend on each
other.

This is implemented using the TM sysfs events exported by the kernel.

v2: Only print the extended statistics when the option is enabled.
This avoids negative output when the user specifies the -T events
in separate groups.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/Documentation/perf-stat.txt |    4 +
 tools/perf/builtin-stat.c              |  101 +++++++++++++++++++++++++++++++-
 tools/perf/util/evsel.h                |    6 ++
 3 files changed, 108 insertions(+), 3 deletions(-)

diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
index cf0c310..0d5b8cb 100644
--- a/tools/perf/Documentation/perf-stat.txt
+++ b/tools/perf/Documentation/perf-stat.txt
@@ -114,6 +114,10 @@ with it.  --append may be used here.  Examples:
 
 perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage
 
+-T::
+--transaction::
+
+Print statistics of transactional execution if supported.
 
 EXAMPLES
 --------
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index c247fac..9d1c5e2 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -65,6 +65,29 @@
 #define CNTR_NOT_SUPPORTED	"<not supported>"
 #define CNTR_NOT_COUNTED	"<not counted>"
 
+static const char *transaction_attrs[] = {
+	"task-clock",
+	"{"
+	"instructions,"
+	"cycles,"
+	"cpu/cycles-t/,"
+	"cpu/cycles-ct/,"
+	"cpu/tx-start/,"
+	"cpu/el-start/"
+	"}"
+};
+
+/* must match the transaction_attrs above */
+enum {
+	T_TASK_CLOCK,
+	T_INSTRUCTIONS,
+	T_CYCLES,
+	T_CYCLES_INTX,
+	T_CYCLES_INTX_CP,
+	T_TRANSACTION_START,
+	T_ELISION_START
+};
+
 static struct perf_evlist	*evsel_list;
 
 static struct perf_target	target = {
@@ -78,6 +101,7 @@ static bool			no_aggr				= false;
 static pid_t			child_pid			= -1;
 static bool			null_run			=  false;
 static int			detailed_run			=  0;
+static bool			transaction_run			=  false;
 static bool			big_num				=  true;
 static int			big_num_opt			=  -1;
 static const char		*csv_sep			= NULL;
@@ -127,7 +151,11 @@ static struct stats runtime_l1_icache_stats[MAX_NR_CPUS];
 static struct stats runtime_ll_cache_stats[MAX_NR_CPUS];
 static struct stats runtime_itlb_cache_stats[MAX_NR_CPUS];
 static struct stats runtime_dtlb_cache_stats[MAX_NR_CPUS];
+static struct stats runtime_cycles_intx_stats[MAX_NR_CPUS];
+static struct stats runtime_cycles_intxcp_stats[MAX_NR_CPUS];
 static struct stats walltime_nsecs_stats;
+static struct stats runtime_transaction_stats[MAX_NR_CPUS];
+static struct stats runtime_elision_stats[MAX_NR_CPUS];
 
 static int create_perf_stat_counter(struct perf_evsel *evsel)
 {
@@ -187,6 +215,18 @@ static inline int nsec_counter(struct perf_evsel *evsel)
 	return 0;
 }
 
+static struct perf_evsel *nth_evsel(int n)
+{
+	struct perf_evsel *ev;
+	int j;
+
+	j = 0;
+	list_for_each_entry (ev, &evsel_list->entries, node)
+		if (j++ == n)
+			return ev;
+	return NULL;
+}
+
 /*
  * Update various tracking values we maintain to print
  * more semantic information such as miss/hit ratios,
@@ -198,8 +238,14 @@ static void update_shadow_stats(struct perf_evsel *counter, u64 *count)
 		update_stats(&runtime_nsecs_stats[0], count[0]);
 	else if (perf_evsel__match(counter, HARDWARE, HW_CPU_CYCLES))
 		update_stats(&runtime_cycles_stats[0], count[0]);
-	else if (perf_evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_FRONTEND))
-		update_stats(&runtime_stalled_cycles_front_stats[0], count[0]);
+	else if (perf_evsel__cmp(counter, nth_evsel(T_CYCLES_INTX)))
+		update_stats(&runtime_cycles_intx_stats[0], count[0]);
+	else if (perf_evsel__cmp(counter, nth_evsel(T_CYCLES_INTX_CP)))
+		update_stats(&runtime_cycles_intxcp_stats[0], count[0]);
+	else if (perf_evsel__cmp(counter, nth_evsel(T_TRANSACTION_START)))
+		update_stats(&runtime_transaction_stats[0], count[0]);
+	else if (perf_evsel__cmp(counter, nth_evsel(T_ELISION_START)))
+		update_stats(&runtime_elision_stats[0], count[0]);
 	else if (perf_evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_BACKEND))
 		update_stats(&runtime_stalled_cycles_back_stats[0], count[0]);
 	else if (perf_evsel__match(counter, HARDWARE, HW_BRANCH_INSTRUCTIONS))
@@ -661,7 +707,7 @@ static void print_ll_cache_misses(int cpu,
 
 static void abs_printout(int cpu, struct perf_evsel *evsel, double avg)
 {
-	double total, ratio = 0.0;
+	double total, ratio = 0.0, total2;
 	char cpustr[16] = { '\0', };
 	const char *fmt;
 
@@ -761,6 +807,41 @@ static void abs_printout(int cpu, struct perf_evsel *evsel, double avg)
 			ratio = 1.0 * avg / total;
 
 		fprintf(output, " # %8.3f GHz                    ", ratio);
+	} else if (perf_evsel__cmp(evsel, nth_evsel(T_CYCLES_INTX)) &&
+		   transaction_run) {
+		total = avg_stats(&runtime_cycles_stats[cpu]);
+		if (total)
+			fprintf(output,
+				" #   %5.2f%% transactional cycles   ",
+				100.0 * (avg / total));
+	} else if (perf_evsel__cmp(evsel, nth_evsel(T_CYCLES_INTX_CP)) &&
+		   transaction_run) {
+		total = avg_stats(&runtime_cycles_stats[cpu]);
+		total2 = avg_stats(&runtime_cycles_intx_stats[cpu]);
+		if (total)
+			fprintf(output,
+				" #   %5.2f%% aborted cycles         ",
+				100.0 * ((total2-avg) / total));
+	} else if (perf_evsel__cmp(evsel, nth_evsel(T_TRANSACTION_START)) &&
+		   avg > 0 &&
+		   runtime_cycles_intx_stats[cpu].n != 0 &&
+		   transaction_run) {
+		total = avg_stats(&runtime_cycles_intx_stats[cpu]);
+
+		if (total)
+			ratio = total / avg;
+
+		fprintf(output, " # %8.0f cycles / transaction ", ratio);
+	} else if (perf_evsel__cmp(evsel, nth_evsel(T_ELISION_START)) &&
+		   avg > 0 &&
+		   runtime_cycles_intx_stats[cpu].n != 0 &&
+		   transaction_run) {
+		total = avg_stats(&runtime_cycles_intx_stats[cpu]);
+
+		if (total)
+			ratio = total / avg;
+
+		fprintf(output, " # %8.0f cycles / elision     ", ratio);
 	} else if (runtime_nsecs_stats[cpu].n != 0) {
 		char unit = 'M';
 
@@ -1067,6 +1148,18 @@ static int add_default_attributes(void)
 	if (null_run)
 		return 0;
 
+	if (transaction_run) {
+		unsigned i;
+
+		for (i = 0; i < ARRAY_SIZE(transaction_attrs); i++) {
+			if (parse_events(evsel_list, transaction_attrs[i], 0)) {
+				fprintf(stderr, "Cannot set up transaction events\n");
+				return -1;
+			}
+		}
+		return 0;
+	}
+
 	if (!evsel_list->nr_entries) {
 		if (perf_evlist__add_default_attrs(evsel_list, default_attrs) < 0)
 			return -1;
@@ -1145,6 +1238,8 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
 			"command to run prior to the measured command"),
 	OPT_STRING(0, "post", &post_cmd, "command",
 			"command to run after to the measured command"),
+	OPT_BOOLEAN('T', "transaction", &transaction_run,
+		    "hardware transaction statistics"),
 	OPT_END()
 	};
 	const char * const stat_usage[] = {
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 3d2b801..dc6a309 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -158,6 +158,12 @@ static inline bool perf_evsel__match2(struct perf_evsel *e1,
 	       (e1->attr.config == e2->attr.config);
 }
 
+#define perf_evsel__cmp(a, b)			\
+	((a) &&					\
+	 (b) &&					\
+	 (a)->attr.type == (b)->attr.type &&	\
+	 (a)->attr.config == (b)->attr.config)
+
 int __perf_evsel__read_on_cpu(struct perf_evsel *evsel,
 			      int cpu, int thread, bool scale);
 
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 27/29] perf, x86: Add a Haswell precise instructions event v2
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (25 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 26/29] perf, tools: Add perf stat --transaction v2 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 28/29] perf, tools: Default to cpu// for events v5 Andi Kleen
                   ` (2 subsequent siblings)
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add an instructions-p event alias that uses the PDIR randomized instruction
retirement event. This is useful to avoid some systematic sampling shadow
problems. Normally PEBS sampling has a systematic shadow. With PDIR
enabled the hardware adds some randomization that statistically avoids
this problem. In this sense, it's more precise over a whole sampling
interval, but an individual sample can be less precise. But since we
look at many samples, it's a more precise event overall.

This could already be done with the explicit event code syntax, but it's
easier and more user friendly with an "instructions-p" alias. I expect
this will eventually become a common use case.
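
For example (illustrative):

perf record -e instructions-p ./workload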

Right now this is for Haswell only; Ivy Bridge will be added later too.

v2: Use new sysfs infrastructure
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 022246a..fa20a19 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2027,6 +2027,7 @@ EVENT_ATTR_STR(cycles-t,       cycles_t,       "event=0x3c,intx=1");
 EVENT_ATTR_STR(cycles-ct,      cycles_ct,      "event=0x3c,intx=1,intx_cp=1");
 EVENT_ATTR_STR(instructions-t, instructions_t, "event=0xc0,intx=1");
 EVENT_ATTR_STR(instructions-ct,instructions_ct,"event=0xc0,intx=1,intx_cp=1");
+EVENT_ATTR_STR(instructions-p, instructions_p, "event=0xc0,umask=0x01,precise=2");
 
 static struct attribute *hsw_events_attrs[] = {
 	EVENT_PTR(tx_start),
@@ -2047,6 +2048,7 @@ static struct attribute *hsw_events_attrs[] = {
 	EVENT_PTR(cycles_ct),
 	EVENT_PTR(instructions_t),
 	EVENT_PTR(instructions_ct),
+	EVENT_PTR(instructions_p),
 	NULL
 };
 
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 28/29] perf, tools: Default to cpu// for events v5
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (26 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 27/29] perf, x86: Add a Haswell precise instructions event v2 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-17 20:36 ` [PATCH 29/29] perf, tools: List kernel supplied event aliases in perf list v2 Andi Kleen
  2013-01-24 11:39 ` perf PMU support for Haswell v7 Ingo Molnar
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

When an event fails to parse and it's not in a new-style format,
try to parse it again as a cpu event.

This allows using sysfs-exported events directly without cpu//, so I can use

perf record -e tx-aborts ...

instead of

perf record -e cpu/tx-aborts/
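
Multiple events are rewritten term by term, so (illustrative)

perf record -e tx-start,tx-abort

is retried as cpu/tx-start/,cpu/tx-abort/.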

v2: Handle multiple events
v3: Move to separate function
v4: Move library function to util/string.c
v5: Handle unhandleable errors
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/util/include/linux/string.h |    1 +
 tools/perf/util/parse-events.c         |   32 ++++++++++++++++++++++++++++++--
 tools/perf/util/string.c               |   25 +++++++++++++++++++++++++
 3 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/include/linux/string.h b/tools/perf/util/include/linux/string.h
index 6f19c54..97a8007 100644
--- a/tools/perf/util/include/linux/string.h
+++ b/tools/perf/util/include/linux/string.h
@@ -1,3 +1,4 @@
 #include <string.h>
 
 void *memdup(const void *src, size_t len);
+int str_append(char **s, int *len, const char *a);
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 5a157b0..3409b15 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -6,7 +6,7 @@
 #include "parse-options.h"
 #include "parse-events.h"
 #include "exec_cmd.h"
-#include "string.h"
+#include "linux/string.h"
 #include "symbol.h"
 #include "cache.h"
 #include "header.h"
@@ -792,6 +792,32 @@ int parse_events_name(struct list_head *list, char *name)
 	return 0;
 }
 
+static int parse_events__scanner(const char *str, void *data, int start_token);
+
+static int parse_events_fixup(int ret, const char *str, void *data,
+			      int start_token)
+{
+	char *o = strdup(str);
+	char *s = NULL;
+	char *t = o;
+	char *p;
+	int len = 0;
+
+	if (!o)
+		return ret;
+	while ((p = strsep(&t, ",")) != NULL) {
+		if (s)
+			str_append(&s, &len, ",");
+		str_append(&s, &len, "cpu/");
+		str_append(&s, &len, p);
+		str_append(&s, &len, "/");
+	}
+	free(o);
+	if (!s)
+		return -ENOMEM;
+	return parse_events__scanner(s, data, start_token);
+}
+
 static int parse_events__scanner(const char *str, void *data, int start_token)
 {
 	YY_BUFFER_STATE buffer;
@@ -812,7 +838,9 @@ static int parse_events__scanner(const char *str, void *data, int start_token)
 	parse_events__flush_buffer(buffer, scanner);
 	parse_events__delete_buffer(buffer, scanner);
 	parse_events_lex_destroy(scanner);
-	return ret;
+	if (ret && !strchr(str, '/'))
+		ret = parse_events_fixup(ret, str, data, start_token);
+	return ret;
 }
 
 /*
diff --git a/tools/perf/util/string.c b/tools/perf/util/string.c
index 346707d..708662d 100644
--- a/tools/perf/util/string.c
+++ b/tools/perf/util/string.c
@@ -369,3 +369,28 @@ void *memdup(const void *src, size_t len)
 
 	return p;
 }
+
+/**
+ * str_append - reallocate string and append another
+ * @s: pointer to string pointer
+ * @len: pointer to len (initialized)
+ * @a: string to append.
+ * Also allow the caller to mishandle unhandleable errors.
+ */
+int str_append(char **s, int *len, const char *a)
+{
+	int olen = *s ? strlen(*s) : 0;
+	int nlen = olen + strlen(a) + 1;
+	if (*len < nlen) {
+		*len = *len * 2;
+		if (*len < nlen)
+			*len = nlen;
+		*s = realloc(*s, *len);
+		if (!*s)
+			return -ENOMEM;
+		if (olen == 0)
+			**s = 0;
+	}
+	strcat(*s, a);
+	return 0;
+}
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 29/29] perf, tools: List kernel supplied event aliases in perf list v2
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (27 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 28/29] perf, tools: Default to cpu// for events v5 Andi Kleen
@ 2013-01-17 20:36 ` Andi Kleen
  2013-01-24 11:39 ` perf PMU support for Haswell v7 Ingo Molnar
  29 siblings, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-17 20:36 UTC (permalink / raw)
  To: mingo
  Cc: linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung,
	Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

List the kernel-supplied PMU event aliases in perf list.

It's better when the users can actually see them.
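
For example, the output then looks along these lines (alias names taken
from the Haswell patches; the exact list depends on the CPU):

$ perf list pmu
  cycles-t OR cpu/cycles-t/                          [Kernel PMU event]
  tx-start OR cpu/tx-start/                          [Kernel PMU event]
  ...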

v2: Fix pattern matching
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/Documentation/perf-list.txt |    4 +-
 tools/perf/builtin-list.c              |    3 +
 tools/perf/util/parse-events.c         |    5 ++-
 tools/perf/util/pmu.c                  |   73 ++++++++++++++++++++++++++++++++
 tools/perf/util/pmu.h                  |    3 +
 5 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/tools/perf/Documentation/perf-list.txt b/tools/perf/Documentation/perf-list.txt
index d1e39dc..826f3d6 100644
--- a/tools/perf/Documentation/perf-list.txt
+++ b/tools/perf/Documentation/perf-list.txt
@@ -8,7 +8,7 @@ perf-list - List all symbolic event types
 SYNOPSIS
 --------
 [verse]
-'perf list' [hw|sw|cache|tracepoint|event_glob]
+'perf list' [hw|sw|cache|tracepoint|pmu|event_glob]
 
 DESCRIPTION
 -----------
@@ -104,6 +104,8 @@ To limit the list use:
   'subsys_glob:event_glob' to filter by tracepoint subsystems such as sched,
   block, etc.
 
+. 'pmu' to print the kernel supplied PMU events.
+
 . If none of the above is matched, it will apply the supplied glob to all
   events, printing the ones that match.
 
diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c
index 1948ece..e79f423 100644
--- a/tools/perf/builtin-list.c
+++ b/tools/perf/builtin-list.c
@@ -13,6 +13,7 @@
 
 #include "util/parse-events.h"
 #include "util/cache.h"
+#include "util/pmu.h"
 
 int cmd_list(int argc, const char **argv, const char *prefix __maybe_unused)
 {
@@ -37,6 +38,8 @@ int cmd_list(int argc, const char **argv, const char *prefix __maybe_unused)
 			else if (strcmp(argv[i], "cache") == 0 ||
 				 strcmp(argv[i], "hwcache") == 0)
 				print_hwcache_events(NULL, false);
+			else if (strcmp(argv[i], "pmu") == 0)
+				print_pmu_events(NULL, false);
 			else if (strcmp(argv[i], "--raw-dump") == 0)
 				print_events(NULL, true);
 			else {
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 3409b15..18765f6 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -1078,6 +1078,8 @@ int print_hwcache_events(const char *event_glob, bool name_only)
 		}
 	}
 
+	if (printed)
+		printf("\n");
 	return printed;
 }
 
@@ -1132,11 +1134,12 @@ void print_events(const char *event_glob, bool name_only)
 
 	print_hwcache_events(event_glob, name_only);
 
+	print_pmu_events(event_glob, name_only);
+
 	if (event_glob != NULL)
 		return;
 
 	if (!name_only) {
-		printf("\n");
 		printf("  %-50s [%s]\n",
 		       "rNNN",
 		       event_type_descriptors[PERF_TYPE_RAW]);
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 9bdc60c..d1ecd54 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -552,3 +552,76 @@ void perf_pmu__set_format(unsigned long *bits, long from, long to)
 	for (b = from; b <= to; b++)
 		set_bit(b, bits);
 }
+
+static char *format_alias(char *buf, int len, struct perf_pmu *pmu,
+			  struct perf_pmu__alias *alias)
+{
+	snprintf(buf, len, "%s/%s/", pmu->name, alias->name);
+	return buf;
+}
+
+static char *format_alias_or(char *buf, int len, struct perf_pmu *pmu,
+			     struct perf_pmu__alias *alias)
+{
+	snprintf(buf, len, "%s OR %s/%s/", alias->name, pmu->name, alias->name);
+	return buf;
+}
+
+static int cmp_string(const void *a, const void *b)
+{
+	const char * const *as = a;
+	const char * const *bs = b;
+	return strcmp(*as, *bs);
+}
+
+void print_pmu_events(const char *event_glob, bool name_only)
+{
+	struct perf_pmu *pmu;
+	struct perf_pmu__alias *alias;
+	char buf[1024];
+	int printed = 0;
+	int len, j;
+	char **aliases;
+
+	pmu = NULL;
+	len = 0;
+	while ((pmu = perf_pmu__scan(pmu)) != NULL)
+		list_for_each_entry (alias, &pmu->aliases, list)
+			len++;
+	aliases = malloc(sizeof(char *) * len);
+	if (!aliases)
+		return;
+	pmu = NULL;
+	j = 0;
+	while ((pmu = perf_pmu__scan(pmu)) != NULL)
+		list_for_each_entry (alias, &pmu->aliases, list) {
+			char *name = format_alias(buf, sizeof buf, pmu, alias);
+			bool is_cpu = !strcmp(pmu->name, "cpu");
+
+			if (event_glob != NULL &&
+			    !(strglobmatch(name, event_glob) ||
+			      (!is_cpu && strglobmatch(alias->name, event_glob))))
+				continue;
+			aliases[j] = name;
+			if (is_cpu && !name_only)
+				aliases[j] = format_alias_or(buf, sizeof buf,
+							      pmu, alias);
+			aliases[j] = strdup(aliases[j]);
+			j++;
+		}
+	len = j;
+	qsort(aliases, len, sizeof(char *), cmp_string);
+	for (j = 0; j < len; j++) {
+		if (name_only) {
+			printf("%s ", aliases[j]);
+			continue;
+		}
+		printf("  %-50s [Kernel PMU event]\n", aliases[j]);
+		free(aliases[j]);
+		printed++;
+	}
+	if (printed)
+		printf("\n");
+	free(aliases);
+}
+
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index a313ed7..d9cb89b 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -3,6 +3,7 @@
 
 #include <linux/bitops.h>
 #include <linux/perf_event.h>
+#include <stdbool.h>
 
 enum {
 	PERF_PMU_FORMAT_VALUE_CONFIG,
@@ -53,5 +54,7 @@ int perf_pmu__format_parse(char *dir, struct list_head *head);
 
 struct perf_pmu *perf_pmu__scan(struct perf_pmu *pmu);
 
+void print_pmu_events(const char *event_glob, bool name_only);
+
 int perf_pmu__test(void);
 #endif /* __PMU_H */
-- 
1.7.7.6


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 05/29] perf, kvm: Support the intx/intx_cp modifiers in KVM arch perfmon emulation v4
  2013-01-17 20:36 ` [PATCH 05/29] perf, kvm: Support the intx/intx_cp modifiers in KVM arch perfmon emulation v4 Andi Kleen
@ 2013-01-20 14:04   ` Gleb Natapov
  0 siblings, 0 replies; 42+ messages in thread
From: Gleb Natapov @ 2013-01-20 14:04 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa,
	namhyung, Andi Kleen, avi

On Thu, Jan 17, 2013 at 12:36:28PM -0800, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
> 
> This is not arch perfmon, but older CPUs will just ignore it. This makes
> it possible to do at least some TSX measurements from a KVM guest
> 
> Cc: avi@redhat.com
> Cc: gleb@redhat.com
> v2: Various fixes to address review feedback
> v3: Ignore the bits when no CPUID. No #GP. Force raw events with TSX bits.
> v4: Use reserved bits for #GP
> Cc: gleb@redhat.com
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |    1 +
>  arch/x86/kvm/pmu.c              |   32 ++++++++++++++++++++++++--------
>  2 files changed, 25 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index dc87b65..703a1f8 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -320,6 +320,7 @@ struct kvm_pmu {
>  	u64 global_ovf_ctrl;
>  	u64 counter_bitmask[2];
>  	u64 global_ctrl_mask;
> +	u64 reserved_bits;
>  	u8 version;
>  	struct kvm_pmc gp_counters[INTEL_PMC_MAX_GENERIC];
>  	struct kvm_pmc fixed_counters[INTEL_PMC_MAX_FIXED];
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index cfc258a..89405d0 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -160,7 +160,7 @@ static void stop_counter(struct kvm_pmc *pmc)
>  
>  static void reprogram_counter(struct kvm_pmc *pmc, u32 type,
>  		unsigned config, bool exclude_user, bool exclude_kernel,
> -		bool intr)
> +		bool intr, bool intx, bool intx_cp)
>  {
>  	struct perf_event *event;
>  	struct perf_event_attr attr = {
> @@ -173,6 +173,10 @@ static void reprogram_counter(struct kvm_pmc *pmc, u32 type,
>  		.exclude_kernel = exclude_kernel,
>  		.config = config,
>  	};
> +	if (intx)
> +		attr.config |= HSW_INTX;
> +	if (intx_cp)
> +		attr.config |= HSW_INTX_CHECKPOINTED;
>  
>  	attr.sample_period = (-pmc->counter) & pmc_bitmask(pmc);
>  
> @@ -206,7 +210,8 @@ static unsigned find_arch_event(struct kvm_pmu *pmu, u8 event_select,
>  	return arch_events[i].event_type;
>  }
>  
> -static void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
> +static void reprogram_gp_counter(struct kvm_pmu *pmu, struct kvm_pmc *pmc, 
> +				 u64 eventsel)
No need to add the pmu parameter here. It is not used by the function.
Otherwise looks good.

>  {
>  	unsigned config, type = PERF_TYPE_RAW;
>  	u8 event_select, unit_mask;
> @@ -226,7 +231,9 @@ static void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
>  
>  	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
>  				ARCH_PERFMON_EVENTSEL_INV |
> -				ARCH_PERFMON_EVENTSEL_CMASK))) {
> +				ARCH_PERFMON_EVENTSEL_CMASK |
> +				HSW_INTX |
> +				HSW_INTX_CHECKPOINTED))) {
>  		config = find_arch_event(&pmc->vcpu->arch.pmu, event_select,
>  				unit_mask);
>  		if (config != PERF_COUNT_HW_MAX)
> @@ -239,7 +246,9 @@ static void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
>  	reprogram_counter(pmc, type, config,
>  			!(eventsel & ARCH_PERFMON_EVENTSEL_USR),
>  			!(eventsel & ARCH_PERFMON_EVENTSEL_OS),
> -			eventsel & ARCH_PERFMON_EVENTSEL_INT);
> +			eventsel & ARCH_PERFMON_EVENTSEL_INT,
> +			(eventsel & HSW_INTX),
> +			(eventsel & HSW_INTX_CHECKPOINTED));
>  }
>  
>  static void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 en_pmi, int idx)
> @@ -256,7 +265,7 @@ static void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 en_pmi, int idx)
>  			arch_events[fixed_pmc_events[idx]].event_type,
>  			!(en & 0x2), /* exclude user */
>  			!(en & 0x1), /* exclude kernel */
> -			pmi);
> +			pmi, false, false);
>  }
>  
>  static inline u8 fixed_en_pmi(u64 ctrl, int idx)
> @@ -289,7 +298,7 @@ static void reprogram_idx(struct kvm_pmu *pmu, int idx)
>  		return;
>  
>  	if (pmc_is_gp(pmc))
> -		reprogram_gp_counter(pmc, pmc->eventsel);
> +		reprogram_gp_counter(pmu, pmc, pmc->eventsel);
>  	else {
>  		int fidx = idx - INTEL_PMC_IDX_FIXED;
>  		reprogram_fixed_counter(pmc,
> @@ -400,8 +409,8 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data)
>  		} else if ((pmc = get_gp_pmc(pmu, index, MSR_P6_EVNTSEL0))) {
>  			if (data == pmc->eventsel)
>  				return 0;
> -			if (!(data & 0xffffffff00200000ull)) {
> -				reprogram_gp_counter(pmc, data);
> +			if (!(data & pmu->reserved_bits)) {
> +				reprogram_gp_counter(pmu, pmc, data);
>  				return 0;
>  			}
>  		}
> @@ -442,6 +451,7 @@ void kvm_pmu_cpuid_update(struct kvm_vcpu *vcpu)
>  	pmu->counter_bitmask[KVM_PMC_GP] = 0;
>  	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
>  	pmu->version = 0;
> +	pmu->reserved_bits = 0xffffffff00200000ull;
>  
>  	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
>  	if (!entry)
> @@ -470,6 +480,12 @@ void kvm_pmu_cpuid_update(struct kvm_vcpu *vcpu)
>  	pmu->global_ctrl = ((1 << pmu->nr_arch_gp_counters) - 1) |
>  		(((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED);
>  	pmu->global_ctrl_mask = ~pmu->global_ctrl;
> +
> +	entry = kvm_find_cpuid_entry(vcpu, 7, 0);
> +	if (entry &&
> +	    (boot_cpu_has(X86_FEATURE_HLE) || boot_cpu_has(X86_FEATURE_RTM)) &&
> +	    (entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM)))
> +		pmu->reserved_bits ^= HSW_INTX|HSW_INTX_CHECKPOINTED;
>  }
>  
>  void kvm_pmu_init(struct kvm_vcpu *vcpu)
> -- 
> 1.7.7.6

--
			Gleb.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-17 20:36 ` [PATCH 16/29] perf, tools: Add support for weight v7 Andi Kleen
@ 2013-01-23 11:38   ` Stephane Eranian
  2013-01-23 11:54     ` Stephane Eranian
  0 siblings, 1 reply; 42+ messages in thread
From: Stephane Eranian @ 2013-01-23 11:38 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

On Thu, Jan 17, 2013 at 9:36 PM, Andi Kleen <andi@firstfloor.org> wrote:
> From: Andi Kleen <ak@linux.intel.com>
>
> perf record has a new option -W that enables weighted sampling.
>
> Add sorting support in top/report for the average weight per sample and the
> total weight sum. This allows comparing both the relative cost per event
> and the total cost over the measurement period.
>
> Add the necessary glue to perf report, record and the library.
>
> v2: Merge with new hist refactoring.
> v3: Fix manpage. Remove value check.
> Rename global_weight to weight and weight to local_weight.
> v4: Readd sort keys to manpage
> v5: Move weight to end
> v6: Move weight to template
> v7: Rename weight key.
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
>  tools/perf/Documentation/perf-record.txt |    6 +++
>  tools/perf/Documentation/perf-report.txt |    2 +-
>  tools/perf/Documentation/perf-top.txt    |    2 +-
>  tools/perf/builtin-annotate.c            |    2 +-
>  tools/perf/builtin-diff.c                |    7 ++--
>  tools/perf/builtin-record.c              |    2 +
>  tools/perf/builtin-report.c              |    7 ++--
>  tools/perf/builtin-top.c                 |    5 ++-
>  tools/perf/perf.h                        |    1 +
>  tools/perf/util/event.h                  |    1 +
>  tools/perf/util/evsel.c                  |   10 ++++++
>  tools/perf/util/hist.c                   |   23 +++++++++----
>  tools/perf/util/hist.h                   |    8 +++-
>  tools/perf/util/session.c                |    3 ++
>  tools/perf/util/sort.c                   |   51 +++++++++++++++++++++++++++++-
>  tools/perf/util/sort.h                   |    3 ++
>  16 files changed, 112 insertions(+), 21 deletions(-)
>
> diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
> index f7d74b2..6f3405e 100644
> --- a/tools/perf/Documentation/perf-record.txt
> +++ b/tools/perf/Documentation/perf-record.txt
> @@ -185,6 +185,12 @@ is enabled for all the sampling events. The sampled branch type is the same for
>  The various filters must be specified as a comma separated list: --branch-filter any_ret,u,k
>  Note that this feature may not be available on all processors.
>
> +-W::
> +--weight::
> +Enable weighted sampling. An additional weight is recorded per sample and can be
> +displayed with the weight and local_weight sort keys.  This currently works for TSX
> +abort events and some memory events in precise mode on modern Intel CPUs.
> +
>  SEE ALSO
>  --------
>  linkperf:perf-stat[1], linkperf:perf-list[1]
> diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
> index cb4216d..5dabd4d 100644
> --- a/tools/perf/Documentation/perf-report.txt
> +++ b/tools/perf/Documentation/perf-report.txt
> @@ -59,7 +59,7 @@ OPTIONS
>  --sort=::
>         Sort by key(s): pid, comm, dso, symbol, parent, srcline,
>          dso_from, dso_to, symbol_to, symbol_from, mispredict,
> -        abort, intx
> +        abort, intx, local_weight, weight
>
>  -p::
>  --parent=<regex>::
> diff --git a/tools/perf/Documentation/perf-top.txt b/tools/perf/Documentation/perf-top.txt
> index 1398b73..3533e0a 100644
> --- a/tools/perf/Documentation/perf-top.txt
> +++ b/tools/perf/Documentation/perf-top.txt
> @@ -114,7 +114,7 @@ Default is to monitor all CPUS.
>  --sort::
>         Sort by key(s): pid, comm, dso, symbol, parent, srcline,
>          dso_from, dso_to, symbol_to, symbol_from, mispredict,
> -        abort, intx
> +        abort, intx,  local_weight, weight
>
>  -n::
>  --show-nr-samples::
> diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
> index dc870cf..1bacb7d 100644
> --- a/tools/perf/builtin-annotate.c
> +++ b/tools/perf/builtin-annotate.c
> @@ -62,7 +62,7 @@ static int perf_evsel__add_sample(struct perf_evsel *evsel,
>                 return 0;
>         }
>
> -       he = __hists__add_entry(&evsel->hists, al, NULL, 1);
> +       he = __hists__add_entry(&evsel->hists, al, NULL, 1, 1);
>         if (he == NULL)
>                 return -ENOMEM;
>
> diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
> index 93b852f..03a322f 100644
> --- a/tools/perf/builtin-diff.c
> +++ b/tools/perf/builtin-diff.c
> @@ -248,9 +248,10 @@ int perf_diff__formula(char *buf, size_t size, struct hist_entry *he)
>  }
>
>  static int hists__add_entry(struct hists *self,
> -                           struct addr_location *al, u64 period)
> +                           struct addr_location *al, u64 period,
> +                           u64 weight)
>  {
> -       if (__hists__add_entry(self, al, NULL, period) != NULL)
> +       if (__hists__add_entry(self, al, NULL, period, weight) != NULL)
>                 return 0;
>         return -ENOMEM;
>  }
> @@ -272,7 +273,7 @@ static int diff__process_sample_event(struct perf_tool *tool __maybe_unused,
>         if (al.filtered)
>                 return 0;
>
> -       if (hists__add_entry(&evsel->hists, &al, sample->period)) {
> +       if (hists__add_entry(&evsel->hists, &al, sample->period, sample->weight)) {
>                 pr_warning("problem incrementing symbol period, skipping event\n");
>                 return -1;
>         }
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index e7da893..4e568aa 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -1062,6 +1062,8 @@ const struct option record_options[] = {
>         OPT_CALLBACK('j', "branch-filter", &record.opts.branch_stack,
>                      "branch filter mask", "branch stack filter modes",
>                      parse_branch_stack),
> +       OPT_BOOLEAN('W', "weight", &record.opts.sample_weight,
> +                   "sample by weight (on special events only)"),
>         OPT_END()
>  };
>
> diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
> index 072c388..5dc0edd 100644
> --- a/tools/perf/builtin-report.c
> +++ b/tools/perf/builtin-report.c
> @@ -88,7 +88,7 @@ static int perf_report__add_branch_hist_entry(struct perf_tool *tool,
>                  * and not events sampled. Thus we use a pseudo period of 1.
>                  */
>                 he = __hists__add_branch_entry(&evsel->hists, al, parent,
> -                               &bi[i], 1);
> +                               &bi[i], 1, 1);
>                 if (he) {
>                         struct annotation *notes;
>                         err = -ENOMEM;
> @@ -146,7 +146,8 @@ static int perf_evsel__add_hist_entry(struct perf_evsel *evsel,
>                         return err;
>         }
>
> -       he = __hists__add_entry(&evsel->hists, al, parent, sample->period);
> +       he = __hists__add_entry(&evsel->hists, al, parent, sample->period,
> +                                       sample->weight);
>         if (he == NULL)
>                 return -ENOMEM;
>
> @@ -597,7 +598,7 @@ int cmd_report(int argc, const char **argv, const char *prefix __maybe_unused)
>         OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
>                    "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
>                    " dso_from, symbol_to, symbol_from, mispredict, srcline,"
> -                  " abort, intx"),
> +                  " abort, intx,  weight, local_weight"),
>         OPT_BOOLEAN(0, "showcpuutilization", &symbol_conf.show_cpu_utilization,
>                     "Show sample percentage for different cpu modes"),
>         OPT_STRING('p', "parent", &parent_pattern, "regex",
> diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
> index 6cfb678..9f87db7 100644
> --- a/tools/perf/builtin-top.c
> +++ b/tools/perf/builtin-top.c
> @@ -271,7 +271,8 @@ static struct hist_entry *perf_evsel__add_hist_entry(struct perf_evsel *evsel,
>  {
>         struct hist_entry *he;
>
> -       he = __hists__add_entry(&evsel->hists, al, NULL, sample->period);
> +       he = __hists__add_entry(&evsel->hists, al, NULL, sample->period,
> +                               sample->weight);
>         if (he == NULL)
>                 return NULL;
>
> @@ -1232,7 +1233,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
>         OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
>                    "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
>                    " dso_from, symbol_to, symbol_from, mispredict, srcline,"
> -                  " abort, intx"),
> +                  " abort, intx, weight, local_weight"),
>         OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
>                     "Show a column with the number of samples"),
>         OPT_CALLBACK_DEFAULT('G', "call-graph", &top, "output_type,min_percent, call_order",
> diff --git a/tools/perf/perf.h b/tools/perf/perf.h
> index c6d315b..7058155 100644
> --- a/tools/perf/perf.h
> +++ b/tools/perf/perf.h
> @@ -238,6 +238,7 @@ struct perf_record_opts {
>         bool         pipe_output;
>         bool         raw_samples;
>         bool         sample_address;
> +       bool         sample_weight;
>         bool         sample_time;
>         bool         sample_id_all_missing;
>         bool         exclude_guest_missing;
> diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
> index 0d573ff..a97fbbe 100644
> --- a/tools/perf/util/event.h
> +++ b/tools/perf/util/event.h
> @@ -88,6 +88,7 @@ struct perf_sample {
>         u64 id;
>         u64 stream_id;
>         u64 period;
> +       u64 weight;
>         u32 cpu;
>         u32 raw_size;
>         void *raw_data;
> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index 1b16dd1..805d33e 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -510,6 +510,9 @@ void perf_evsel__config(struct perf_evsel *evsel,
>                 attr->branch_sample_type = opts->branch_stack;
>         }
>
> +       if (opts->sample_weight)
> +               attr->sample_type       |= PERF_SAMPLE_WEIGHT;
> +
>         attr->mmap = track;
>         attr->comm = track;
>
> @@ -908,6 +911,7 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event,
>         data->cpu = data->pid = data->tid = -1;
>         data->stream_id = data->id = data->time = -1ULL;
>         data->period = 1;
> +       data->weight = 0;
>
>         if (event->header.type != PERF_RECORD_SAMPLE) {
>                 if (!evsel->attr.sample_id_all)
> @@ -1058,6 +1062,12 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event,
>                 }
>         }
>
> +       data->weight = 0;
> +       if (type & PERF_SAMPLE_WEIGHT) {
> +               data->weight = *array;
> +               array++;
> +       }
> +
>         return 0;
>  }
>
> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> index cb17e2a..a8d7647 100644
> --- a/tools/perf/util/hist.c
> +++ b/tools/perf/util/hist.c
> @@ -151,9 +151,11 @@ static void hist_entry__add_cpumode_period(struct hist_entry *he,
>         }
>  }
>
> -static void he_stat__add_period(struct he_stat *he_stat, u64 period)
> +static void he_stat__add_period(struct he_stat *he_stat, u64 period,
> +                               u64 weight)
>  {
>         he_stat->period         += period;
> +       he_stat->weight         += weight;
>         he_stat->nr_events      += 1;
>  }
>
> @@ -165,12 +167,14 @@ static void he_stat__add_stat(struct he_stat *dest, struct he_stat *src)
>         dest->period_guest_sys  += src->period_guest_sys;
>         dest->period_guest_us   += src->period_guest_us;
>         dest->nr_events         += src->nr_events;
> +       dest->weight            += src->weight;
>  }
>
>  static void hist_entry__decay(struct hist_entry *he)
>  {
>         he->stat.period = (he->stat.period * 7) / 8;
>         he->stat.nr_events = (he->stat.nr_events * 7) / 8;
> +       /* XXX need decay for weight too? */
>  }
>
>  static bool hists__decay_entry(struct hists *hists, struct hist_entry *he)
> @@ -270,7 +274,8 @@ static u8 symbol__parent_filter(const struct symbol *parent)
>  static struct hist_entry *add_hist_entry(struct hists *hists,
>                                       struct hist_entry *entry,
>                                       struct addr_location *al,
> -                                     u64 period)
> +                                     u64 period,
> +                                     u64 weight)
>  {
>         struct rb_node **p;
>         struct rb_node *parent = NULL;
> @@ -288,7 +293,7 @@ static struct hist_entry *add_hist_entry(struct hists *hists,
>                 cmp = hist_entry__cmp(entry, he);
>
>                 if (!cmp) {
> -                       he_stat__add_period(&he->stat, period);
> +                       he_stat__add_period(&he->stat, period, weight);
>
With this approach, you will never aggregate more than two samples
that have the same weight.
Example:
Sample 1 W=250 -> no match, add Sample 1
Sample 2 W=250 -> match Sample1, Sample1 new weight=500
Sample 3 W=250 -> no match, add Sample 3

Here you do not aggregate Sample 3 with Samples 1 and 2, because you've
updated the weight, which you also use in the sort__weight_cmp() routine.

That does not work for me with PEBS-LL. I want aggregation when
samples are identical.

I don't know why you want to aggregate weights.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 11:38   ` Stephane Eranian
@ 2013-01-23 11:54     ` Stephane Eranian
  2013-01-23 16:56       ` Andi Kleen
  2013-01-23 17:00       ` Stephane Eranian
  0 siblings, 2 replies; 42+ messages in thread
From: Stephane Eranian @ 2013-01-23 11:54 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

On Wed, Jan 23, 2013 at 12:38 PM, Stephane Eranian <eranian@google.com> wrote:
> On Thu, Jan 17, 2013 at 9:36 PM, Andi Kleen <andi@firstfloor.org> wrote:
>> From: Andi Kleen <ak@linux.intel.com>
>>
>> perf record has a new option -W that enables weighted sampling.
>>
>> Add sorting support in top/report for the average weight per sample and the
>> total weight sum. This allows comparing both the relative cost per event
>> and the total cost over the measurement period.
>>
>> Add the necessary glue to perf report, record and the library.
>>
>> v2: Merge with new hist refactoring.
>> v3: Fix manpage. Remove value check.
>> Rename global_weight to weight and weight to local_weight.
>> v4: Readd sort keys to manpage
>> v5: Move weight to end
>> v6: Move weight to template
>> v7: Rename weight key.
>> Signed-off-by: Andi Kleen <ak@linux.intel.com>
>> ---
>>  tools/perf/Documentation/perf-record.txt |    6 +++
>>  tools/perf/Documentation/perf-report.txt |    2 +-
>>  tools/perf/Documentation/perf-top.txt    |    2 +-
>>  tools/perf/builtin-annotate.c            |    2 +-
>>  tools/perf/builtin-diff.c                |    7 ++--
>>  tools/perf/builtin-record.c              |    2 +
>>  tools/perf/builtin-report.c              |    7 ++--
>>  tools/perf/builtin-top.c                 |    5 ++-
>>  tools/perf/perf.h                        |    1 +
>>  tools/perf/util/event.h                  |    1 +
>>  tools/perf/util/evsel.c                  |   10 ++++++
>>  tools/perf/util/hist.c                   |   23 +++++++++----
>>  tools/perf/util/hist.h                   |    8 +++-
>>  tools/perf/util/session.c                |    3 ++
>>  tools/perf/util/sort.c                   |   51 +++++++++++++++++++++++++++++-
>>  tools/perf/util/sort.h                   |    3 ++
>>  16 files changed, 112 insertions(+), 21 deletions(-)
>>
>> diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
>> index f7d74b2..6f3405e 100644
>> --- a/tools/perf/Documentation/perf-record.txt
>> +++ b/tools/perf/Documentation/perf-record.txt
>> @@ -185,6 +185,12 @@ is enabled for all the sampling events. The sampled branch type is the same for
>>  The various filters must be specified as a comma separated list: --branch-filter any_ret,u,k
>>  Note that this feature may not be available on all processors.
>>
>> +-W::
>> +--weight::
>> +Enable weighted sampling. An additional weight is recorded per sample and can be
>> +displayed with the weight and local_weight sort keys.  This currently works for TSX
>> +abort events and some memory events in precise mode on modern Intel CPUs.
>> +
>>  SEE ALSO
>>  --------
>>  linkperf:perf-stat[1], linkperf:perf-list[1]
>> diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
>> index cb4216d..5dabd4d 100644
>> --- a/tools/perf/Documentation/perf-report.txt
>> +++ b/tools/perf/Documentation/perf-report.txt
>> @@ -59,7 +59,7 @@ OPTIONS
>>  --sort=::
>>         Sort by key(s): pid, comm, dso, symbol, parent, srcline,
>>          dso_from, dso_to, symbol_to, symbol_from, mispredict,
>> -        abort, intx
>> +        abort, intx, local_weight, weight
>>
>>  -p::
>>  --parent=<regex>::
>> diff --git a/tools/perf/Documentation/perf-top.txt b/tools/perf/Documentation/perf-top.txt
>> index 1398b73..3533e0a 100644
>> --- a/tools/perf/Documentation/perf-top.txt
>> +++ b/tools/perf/Documentation/perf-top.txt
>> @@ -114,7 +114,7 @@ Default is to monitor all CPUS.
>>  --sort::
>>         Sort by key(s): pid, comm, dso, symbol, parent, srcline,
>>          dso_from, dso_to, symbol_to, symbol_from, mispredict,
>> -        abort, intx
>> +        abort, intx,  local_weight, weight
>>
>>  -n::
>>  --show-nr-samples::
>> diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
>> index dc870cf..1bacb7d 100644
>> --- a/tools/perf/builtin-annotate.c
>> +++ b/tools/perf/builtin-annotate.c
>> @@ -62,7 +62,7 @@ static int perf_evsel__add_sample(struct perf_evsel *evsel,
>>                 return 0;
>>         }
>>
>> -       he = __hists__add_entry(&evsel->hists, al, NULL, 1);
>> +       he = __hists__add_entry(&evsel->hists, al, NULL, 1, 1);
>>         if (he == NULL)
>>                 return -ENOMEM;
>>
>> diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
>> index 93b852f..03a322f 100644
>> --- a/tools/perf/builtin-diff.c
>> +++ b/tools/perf/builtin-diff.c
>> @@ -248,9 +248,10 @@ int perf_diff__formula(char *buf, size_t size, struct hist_entry *he)
>>  }
>>
>>  static int hists__add_entry(struct hists *self,
>> -                           struct addr_location *al, u64 period)
>> +                           struct addr_location *al, u64 period,
>> +                           u64 weight)
>>  {
>> -       if (__hists__add_entry(self, al, NULL, period) != NULL)
>> +       if (__hists__add_entry(self, al, NULL, period, weight) != NULL)
>>                 return 0;
>>         return -ENOMEM;
>>  }
>> @@ -272,7 +273,7 @@ static int diff__process_sample_event(struct perf_tool *tool __maybe_unused,
>>         if (al.filtered)
>>                 return 0;
>>
>> -       if (hists__add_entry(&evsel->hists, &al, sample->period)) {
>> +       if (hists__add_entry(&evsel->hists, &al, sample->period, sample->weight)) {
>>                 pr_warning("problem incrementing symbol period, skipping event\n");
>>                 return -1;
>>         }
>> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
>> index e7da893..4e568aa 100644
>> --- a/tools/perf/builtin-record.c
>> +++ b/tools/perf/builtin-record.c
>> @@ -1062,6 +1062,8 @@ const struct option record_options[] = {
>>         OPT_CALLBACK('j', "branch-filter", &record.opts.branch_stack,
>>                      "branch filter mask", "branch stack filter modes",
>>                      parse_branch_stack),
>> +       OPT_BOOLEAN('W', "weight", &record.opts.sample_weight,
>> +                   "sample by weight (on special events only)"),
>>         OPT_END()
>>  };
>>
>> diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
>> index 072c388..5dc0edd 100644
>> --- a/tools/perf/builtin-report.c
>> +++ b/tools/perf/builtin-report.c
>> @@ -88,7 +88,7 @@ static int perf_report__add_branch_hist_entry(struct perf_tool *tool,
>>                  * and not events sampled. Thus we use a pseudo period of 1.
>>                  */
>>                 he = __hists__add_branch_entry(&evsel->hists, al, parent,
>> -                               &bi[i], 1);
>> +                               &bi[i], 1, 1);
>>                 if (he) {
>>                         struct annotation *notes;
>>                         err = -ENOMEM;
>> @@ -146,7 +146,8 @@ static int perf_evsel__add_hist_entry(struct perf_evsel *evsel,
>>                         return err;
>>         }
>>
>> -       he = __hists__add_entry(&evsel->hists, al, parent, sample->period);
>> +       he = __hists__add_entry(&evsel->hists, al, parent, sample->period,
>> +                                       sample->weight);
>>         if (he == NULL)
>>                 return -ENOMEM;
>>
>> @@ -597,7 +598,7 @@ int cmd_report(int argc, const char **argv, const char *prefix __maybe_unused)
>>         OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
>>                    "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
>>                    " dso_from, symbol_to, symbol_from, mispredict, srcline,"
>> -                  " abort, intx"),
>> +                  " abort, intx,  weight, local_weight"),
>>         OPT_BOOLEAN(0, "showcpuutilization", &symbol_conf.show_cpu_utilization,
>>                     "Show sample percentage for different cpu modes"),
>>         OPT_STRING('p', "parent", &parent_pattern, "regex",
>> diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
>> index 6cfb678..9f87db7 100644
>> --- a/tools/perf/builtin-top.c
>> +++ b/tools/perf/builtin-top.c
>> @@ -271,7 +271,8 @@ static struct hist_entry *perf_evsel__add_hist_entry(struct perf_evsel *evsel,
>>  {
>>         struct hist_entry *he;
>>
>> -       he = __hists__add_entry(&evsel->hists, al, NULL, sample->period);
>> +       he = __hists__add_entry(&evsel->hists, al, NULL, sample->period,
>> +                               sample->weight);
>>         if (he == NULL)
>>                 return NULL;
>>
>> @@ -1232,7 +1233,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
>>         OPT_STRING('s', "sort", &sort_order, "key[,key2...]",
>>                    "sort by key(s): pid, comm, dso, symbol, parent, dso_to,"
>>                    " dso_from, symbol_to, symbol_from, mispredict, srcline,"
>> -                  " abort, intx"),
>> +                  " abort, intx, weight, local_weight"),
>>         OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
>>                     "Show a column with the number of samples"),
>>         OPT_CALLBACK_DEFAULT('G', "call-graph", &top, "output_type,min_percent, call_order",
>> diff --git a/tools/perf/perf.h b/tools/perf/perf.h
>> index c6d315b..7058155 100644
>> --- a/tools/perf/perf.h
>> +++ b/tools/perf/perf.h
>> @@ -238,6 +238,7 @@ struct perf_record_opts {
>>         bool         pipe_output;
>>         bool         raw_samples;
>>         bool         sample_address;
>> +       bool         sample_weight;
>>         bool         sample_time;
>>         bool         sample_id_all_missing;
>>         bool         exclude_guest_missing;
>> diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
>> index 0d573ff..a97fbbe 100644
>> --- a/tools/perf/util/event.h
>> +++ b/tools/perf/util/event.h
>> @@ -88,6 +88,7 @@ struct perf_sample {
>>         u64 id;
>>         u64 stream_id;
>>         u64 period;
>> +       u64 weight;
>>         u32 cpu;
>>         u32 raw_size;
>>         void *raw_data;
>> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
>> index 1b16dd1..805d33e 100644
>> --- a/tools/perf/util/evsel.c
>> +++ b/tools/perf/util/evsel.c
>> @@ -510,6 +510,9 @@ void perf_evsel__config(struct perf_evsel *evsel,
>>                 attr->branch_sample_type = opts->branch_stack;
>>         }
>>
>> +       if (opts->sample_weight)
>> +               attr->sample_type       |= PERF_SAMPLE_WEIGHT;
>> +
>>         attr->mmap = track;
>>         attr->comm = track;
>>
>> @@ -908,6 +911,7 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event,
>>         data->cpu = data->pid = data->tid = -1;
>>         data->stream_id = data->id = data->time = -1ULL;
>>         data->period = 1;
>> +       data->weight = 0;
>>
>>         if (event->header.type != PERF_RECORD_SAMPLE) {
>>                 if (!evsel->attr.sample_id_all)
>> @@ -1058,6 +1062,12 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event,
>>                 }
>>         }
>>
>> +       data->weight = 0;
>> +       if (type & PERF_SAMPLE_WEIGHT) {
>> +               data->weight = *array;
>> +               array++;
>> +       }
>> +
>>         return 0;
>>  }
>>
>> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
>> index cb17e2a..a8d7647 100644
>> --- a/tools/perf/util/hist.c
>> +++ b/tools/perf/util/hist.c
>> @@ -151,9 +151,11 @@ static void hist_entry__add_cpumode_period(struct hist_entry *he,
>>         }
>>  }
>>
>> -static void he_stat__add_period(struct he_stat *he_stat, u64 period)
>> +static void he_stat__add_period(struct he_stat *he_stat, u64 period,
>> +                               u64 weight)
>>  {
>>         he_stat->period         += period;
>> +       he_stat->weight         += weight;
>>         he_stat->nr_events      += 1;
>>  }
>>
>> @@ -165,12 +167,14 @@ static void he_stat__add_stat(struct he_stat *dest, struct he_stat *src)
>>         dest->period_guest_sys  += src->period_guest_sys;
>>         dest->period_guest_us   += src->period_guest_us;
>>         dest->nr_events         += src->nr_events;
>> +       dest->weight            += src->weight;
>>  }
>>
>>  static void hist_entry__decay(struct hist_entry *he)
>>  {
>>         he->stat.period = (he->stat.period * 7) / 8;
>>         he->stat.nr_events = (he->stat.nr_events * 7) / 8;
>> +       /* XXX need decay for weight too? */
>>  }
>>
>>  static bool hists__decay_entry(struct hists *hists, struct hist_entry *he)
>> @@ -270,7 +274,8 @@ static u8 symbol__parent_filter(const struct symbol *parent)
>>  static struct hist_entry *add_hist_entry(struct hists *hists,
>>                                       struct hist_entry *entry,
>>                                       struct addr_location *al,
>> -                                     u64 period)
>> +                                     u64 period,
>> +                                     u64 weight)
>>  {
>>         struct rb_node **p;
>>         struct rb_node *parent = NULL;
>> @@ -288,7 +293,7 @@ static struct hist_entry *add_hist_entry(struct hists *hists,
>>                 cmp = hist_entry__cmp(entry, he);
>>
>>                 if (!cmp) {
>> -                       he_stat__add_period(&he->stat, period);
>> +                       he_stat__add_period(&he->stat, period, weight);
>>
> With this approach, you will not aggregate more than two samples
> with identical weights.
> Example:
> Sample 1 W=250 -> no match, add Sample 1
> Sample 2 W=250 -> match Sample1, Sample1 new weight=500
> Sample 3 W=250 -> no match, add Sample 3
>
> Here you do not aggregate Sample 3 with Samples 1 and 2, because you've
> updated the weight, which you also use in the sort__weight_cmp() routine.
>
> That does not work for me with PEBS-LL. I want aggregation when
> samples are identical.
>
> I don't know why you want to aggregate weights.

Ok, figured this out. I needed to use local_weight and not weight for
the PEBS-LL case.
Works fine now. So I can use your patch unaltered.
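
As a side note, the failure mode from the quoted example can be
reproduced standalone. Below is a minimal C sketch, assuming the sort
comparator keys on the accumulated weight; the struct and function
names are illustrative only, not the actual perf code:

#include <stdio.h>
#include <stdint.h>

/* stand-in for a hist entry whose stat carries the summed weight */
struct entry { uint64_t weight; uint64_t nr; };

/* stand-in for sort__weight_cmp(): compares the aggregated weight */
static int64_t weight_cmp(const struct entry *e, uint64_t w)
{
        return (int64_t)(e->weight - w);
}

int main(void)
{
        struct entry e = { .weight = 250, .nr = 1 };    /* Sample 1, W=250 */

        if (!weight_cmp(&e, 250)) {     /* Sample 2, W=250: matches */
                e.weight += 250;        /* entry weight becomes 500 */
                e.nr++;
        }
        if (weight_cmp(&e, 250))        /* Sample 3, W=250: no longer matches */
                printf("sample 3 not aggregated (entry weight=%llu)\n",
                       (unsigned long long)e.weight);
        return 0;
}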

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 11:54     ` Stephane Eranian
@ 2013-01-23 16:56       ` Andi Kleen
  2013-01-23 17:00       ` Stephane Eranian
  1 sibling, 0 replies; 42+ messages in thread
From: Andi Kleen @ 2013-01-23 16:56 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Andi Kleen, mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

> Ok, figured this out. I needed to use local_weight and not weight for
> the PEBS-LL case.

There are use cases for both. For TSX, (global_)weight is more useful,
so I made it the default.

> Works fine now. So I can use your patch unaltered.

I still cannot sort by the weights; that needs to be solved
at some point. But I think it can be done independently
of merging this code. It's already useful even without sorting.

-Andi

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 11:54     ` Stephane Eranian
  2013-01-23 16:56       ` Andi Kleen
@ 2013-01-23 17:00       ` Stephane Eranian
  2013-01-23 17:13         ` Andi Kleen
  1 sibling, 1 reply; 42+ messages in thread
From: Stephane Eranian @ 2013-01-23 17:00 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

On Wed, Jan 23, 2013 at 12:54 PM, Stephane Eranian <eranian@google.com> wrote:
> On Wed, Jan 23, 2013 at 12:38 PM, Stephane Eranian <eranian@google.com> wrote:
>> On Thu, Jan 17, 2013 at 9:36 PM, Andi Kleen <andi@firstfloor.org> wrote:
>>> [snip: full patch quoted verbatim, identical to the quote in the message above]
>> With this approach, you will not aggregate more than two samples
>> with identical weights.
>> Example:
>> Sample 1 W=250 -> no match, add Sample 1
>> Sample 2 W=250 -> match Sample1, Sample1 new weight=500
>> Sample 3 W=250 -> no match, add Sample 3
>>
>> Here you do not aggregate Sample 3 with Samples 1 and 2, because you've
>> updated the weight, which you also use in the sort__weight_cmp() routine.
>>
>> That does not work for me with PEBS-LL. I want aggregation when
>> samples are identical.
>>
>> I don't know why you want to aggregate weights.
>
> Ok, figured this out. I needed to use local_weight and not weight for
> the PEBS-LL case.
> Works fine now. So I can use your patch unaltered.

For PEBS-LL and possibly other special cases, it is important to remember
that perf report always ends up sorting by period (hists__collapse_resort).
But for PEBS-LL we want to sort on nr_events * weight. Thus, with your patch,
the only way I found to achieve this is by passing:

       add_hist_entry(self, &entry, al, weight, weight);

In other words, set period = weight, then pass period through.
That way you ensure that if you have:

20 samples at cost 50
100 samples at cost 1

Then you see with perf report:

Samples  Local Weight
     20            50
    100             1

I did update my patch to operate that way and I get the correct answer now.
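
To make the workaround concrete, here is a toy model (plain C, with
illustrative names; not the perf internals) of what the implicit
period-based resort produces once the caller passes the weight in as
the period, so that period accumulates to nr_events * weight:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct hist { const char *name; uint64_t nr_events; uint64_t period; };

/* descending period order, like the implicit resort column */
static int by_period_desc(const void *pa, const void *pb)
{
        const struct hist *a = pa, *b = pb;
        return (b->period > a->period) - (b->period < a->period);
}

int main(void)
{
        /* period carries the accumulated weight: nr_events * cost */
        struct hist h[] = {
                { "100 samples @ cost 1", 100, 100 * 1  },
                { "20 samples @ cost 50",  20,  20 * 50 },
        };

        qsort(h, 2, sizeof(h[0]), by_period_desc);
        for (int i = 0; i < 2; i++)
                printf("%-24s nr=%3llu period=%llu\n", h[i].name,
                       (unsigned long long)h[i].nr_events,
                       (unsigned long long)h[i].period);
        return 0;
}

The 20 expensive samples (period 1000) then sort above the 100 cheap
ones (period 100), which is the view you want for PEBS-LL.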

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 17:00       ` Stephane Eranian
@ 2013-01-23 17:13         ` Andi Kleen
  2013-01-23 17:25           ` Stephane Eranian
  0 siblings, 1 reply; 42+ messages in thread
From: Andi Kleen @ 2013-01-23 17:13 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Andi Kleen, mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

> For PEBS-LL and possibly other special cases, it is important to remember
> that perf report always ends up sorting by period (hists__collapse_resort).
> But for PEBS-LL we want to sort on nr_events * weight. Thus, with your patch,
> the only way I found to achieve this is by passing:
> 
>        add_hist_entry(self, &entry, al, weight, weight);

Seems like a hack. IMHO it should always sort by all the keys
I specified with --sort, in exactly the order I specified.

I had a similar thing in a really old version of my patches,
but I gave it up because it was too unintuitive.

-Andi

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 17:13         ` Andi Kleen
@ 2013-01-23 17:25           ` Stephane Eranian
  2013-01-23 18:02             ` Andi Kleen
  0 siblings, 1 reply; 42+ messages in thread
From: Stephane Eranian @ 2013-01-23 17:25 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

On Wed, Jan 23, 2013 at 6:13 PM, Andi Kleen <andi@firstfloor.org> wrote:
>> For PEBS-LL and possibly other special cases, it is important to remember
>> that perf report always ends up sorting by period (hists__collapse_resort).
>> But for PEBS-LL we want to sort on nr_events * weight. Thus, with your patch,
>> the only way I found to achieve this is by passing:
>>
>>        add_hist_entry(self, &entry, al, weight, weight);
>
> Seems like a hack. IMHO it should always sort by all the keys
> I specified with --sort, in exactly the order I specified.
>
Well, it does, except for the "implicit" column, which uses period;
see hists__collapse_resort(). And that function is hardcoded to only
look at the period.

As for the hack, I did not put it in my hist__add_mem_entry() but
rather in the caller.
For PEBS-LL, the period is not important. I think it counts the number
of loads/stores and not just the qualifying ones. For loads, that means
it counts all loads, not just the ones above the latency threshold, but
I may be wrong.

> I had a similar thing in a really old version of my patches,
> but I gave it up because it was too unintuitive.
>
Well, but then it does not present a sensible view of the samples when weight is
more important than period.

> -Andi

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 17:25           ` Stephane Eranian
@ 2013-01-23 18:02             ` Andi Kleen
  2013-01-23 18:18               ` Stephane Eranian
  0 siblings, 1 reply; 42+ messages in thread
From: Andi Kleen @ 2013-01-23 18:02 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Andi Kleen, mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

> As for the hack, I did not put it in my hist__add_mem_entry() but
> rather in the caller.
> For PEBS-LL, the period is not important. I think it counts the number
> of loads/stores and not just the qualifying ones. For loads, that means
> it counts all loads, not just the ones above the latency threshold, but
> I may be wrong.

Users should just specify the right keys, with a sensible default
for the event.

> 
> > I had a similar thing in a really old version of my patches,
> > but I gave it up because it was too unintuitive.
> >
> Well, but then it does not present a sensible view of the samples when weight is
> more important than period.

Right now weight is not sorted. So you can see the information, but it's
not really nice. Longer term, we should fix sort.c to actually sort properly;
then it'll work OK and be intuitive.

I don't think hacks like making weight look like period are the right
way to do it. I had those originally, but discarded them.

sort.c just needs to sort properly on all keys.
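
Roughly, that means a comparator that walks the user's key list in
order, with each later key only breaking ties. A standalone C sketch
of the idea follows; the names are illustrative, not the actual
sort.c entry points:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct entry { uint64_t weight; uint64_t period; };

typedef int64_t (*key_cmp_t)(const struct entry *, const struct entry *);

static int64_t cmp_weight(const struct entry *a, const struct entry *b)
{
        return (int64_t)(b->weight - a->weight);        /* descending */
}

static int64_t cmp_period(const struct entry *a, const struct entry *b)
{
        return (int64_t)(b->period - a->period);        /* descending */
}

/* keys in exactly the order the user passed to --sort */
static key_cmp_t sort_keys[] = { cmp_weight, cmp_period };

static int64_t entries_cmp(const struct entry *a, const struct entry *b)
{
        for (size_t i = 0; i < sizeof(sort_keys) / sizeof(sort_keys[0]); i++) {
                int64_t c = sort_keys[i](a, b);
                if (c)
                        return c;       /* first differing key decides */
        }
        return 0;
}

int main(void)
{
        struct entry a = { .weight = 500, .period = 10 };
        struct entry b = { .weight = 500, .period = 20 };

        /* equal weight, so the second key (period) breaks the tie */
        printf("b sorts %s a\n", entries_cmp(&a, &b) > 0 ? "before" : "after");
        return 0;
}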

-Andi
-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 18:02             ` Andi Kleen
@ 2013-01-23 18:18               ` Stephane Eranian
  2013-01-23 18:50                 ` Andi Kleen
  0 siblings, 1 reply; 42+ messages in thread
From: Stephane Eranian @ 2013-01-23 18:18 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

On Wed, Jan 23, 2013 at 7:02 PM, Andi Kleen <andi@firstfloor.org> wrote:
>> As for the hack, I did not put it in my hist__add_mem_entry() but
>> rather in the caller.
>> For PEBS-LL, the period is not important. I think it counts the number
>> of loads/stores and not just the qualifying ones. For loads, that means
>> it counts all loads, not just the ones above the latency threshold, but
>> I may be wrong.
>
> Users should just specify the right keys, with a sensible default
> for the event.
>
But what I was saying is that even when you specify sensible
sort keys, there is ALWAYS an implicit key added first, which
is the period. That is what the resort function does.

For PEBS-LL I do force the sort order to:
local_weight,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked

But that's not enough.

One solution could be to make period an explicit sort key. It
may then be dropped by some measurements, e.g., when weight is used.


>>
>> > I had a similar thing in a really old version of my patches,
>> > but I gave it up because it was too unintuitive.
>> >
>> Well, but then it does not present a sensible view of the samples when weight is
>> more important than period.
>
> Right now weight is not sorted. So you can see the information, but it's
> not really nice. Longer term should fix sort.c to actually sort properly,
> then it'll work ok and be intuitive.
>
> I don't think hacks like making weight look like period are the right
> way to do it. I had those originally, but discarded them.
>
> sort.c just needs to sort properly on all keys.
>
I agree.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 18:18               ` Stephane Eranian
@ 2013-01-23 18:50                 ` Andi Kleen
  2013-01-23 18:57                   ` Stephane Eranian
  0 siblings, 1 reply; 42+ messages in thread
From: Andi Kleen @ 2013-01-23 18:50 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: Andi Kleen, mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

> For PEBS-LL I do force the sort order to:
> local_weight,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked
> 
> But that's not enough.
> 
> One solution could be to make period an explicit sort key. It
> may then be dropped by some measurements, e.g., when weight is used.

Makes sense. Agreed, it should probably be an explicit key.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 16/29] perf, tools: Add support for weight v7
  2013-01-23 18:50                 ` Andi Kleen
@ 2013-01-23 18:57                   ` Stephane Eranian
  0 siblings, 0 replies; 42+ messages in thread
From: Stephane Eranian @ 2013-01-23 18:57 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, LKML, Peter Zijlstra, Andrew Morton,
	Arnaldo Carvalho de Melo, Jiri Olsa, Namhyung Kim, Andi Kleen

On Wed, Jan 23, 2013 at 7:50 PM, Andi Kleen <andi@firstfloor.org> wrote:
>> For PEBS-LL I do force the sort order to:
>> local_weight,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked
>>
>> But that's not enough.
>>
>> One solution could be to make period an explicit sort key. It
>> may then be dropped by some measurements, e.g., when weight is used.
>
> Makes sense. Agreed, it should probably be an explicit key.
>
And then we would probably not need that resort business.

> -Andi
>
> --
> ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: perf PMU support for Haswell v7
  2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
                   ` (28 preceding siblings ...)
  2013-01-17 20:36 ` [PATCH 29/29] perf, tools: List kernel supplied event aliases in perf list v2 Andi Kleen
@ 2013-01-24 11:39 ` Ingo Molnar
  29 siblings, 0 replies; 42+ messages in thread
From: Ingo Molnar @ 2013-01-24 11:39 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, linux-kernel, a.p.zijlstra, akpm, acme, eranian, jolsa, namhyung


* Andi Kleen <andi@firstfloor.org> wrote:

> [Updated version for the latest master tree and fixes.  See 
> end for details. All feedback addressed. Ready for merging.]

Could we try a minimal, obvious hardware-enablement series 
first, with all the optional features left out in the first 
step? Your patches look mostly shaped in that way already, so 
this would be mostly a resend of just the basic bits.

( I also finally got access to real Haswell hardware so I'll be
  able to try these out and verify them. The patches are way too
  large to be applied blindly. )

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread

Thread overview: 42+ messages
2013-01-17 20:36 perf PMU support for Haswell v7 Andi Kleen
2013-01-17 20:36 ` [PATCH 01/29] perf, x86: Add PEBSv2 record support Andi Kleen
2013-01-17 20:36 ` [PATCH 02/29] perf, x86: Basic Haswell PMU support v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 03/29] perf, x86: Basic Haswell PEBS support v3 Andi Kleen
2013-01-17 20:36 ` [PATCH 04/29] perf, x86: Support the TSX intx/intx_cp qualifiers v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 05/29] perf, kvm: Support the intx/intx_cp modifiers in KVM arch perfmon emulation v4 Andi Kleen
2013-01-20 14:04   ` Gleb Natapov
2013-01-17 20:36 ` [PATCH 06/29] perf, x86: Support PERF_SAMPLE_ADDR on Haswell Andi Kleen
2013-01-17 20:36 ` [PATCH 07/29] perf, x86: Support Haswell v4 LBR format Andi Kleen
2013-01-17 20:36 ` [PATCH 08/29] perf, x86: Disable LBR recording for unknown LBR_FMT Andi Kleen
2013-01-17 20:36 ` [PATCH 09/29] perf, x86: Support LBR filtering by INTX/NOTX/ABORT v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 10/29] perf, tools: Add abort_tx,no_tx,in_tx branch filter options to perf record -j v3 Andi Kleen
2013-01-17 20:36 ` [PATCH 11/29] perf, tools: Support sorting by intx, abort branch flags v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 12/29] perf, x86: Support full width counting Andi Kleen
2013-01-17 20:36 ` [PATCH 13/29] perf, x86: Avoid checkpointed counters causing excessive TSX aborts v3 Andi Kleen
2013-01-17 20:36 ` [PATCH 14/29] perf, core: Add a concept of a weightened sample v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 15/29] perf, x86: Support weight samples for PEBS Andi Kleen
2013-01-17 20:36 ` [PATCH 16/29] perf, tools: Add support for weight v7 Andi Kleen
2013-01-23 11:38   ` Stephane Eranian
2013-01-23 11:54     ` Stephane Eranian
2013-01-23 16:56       ` Andi Kleen
2013-01-23 17:00       ` Stephane Eranian
2013-01-23 17:13         ` Andi Kleen
2013-01-23 17:25           ` Stephane Eranian
2013-01-23 18:02             ` Andi Kleen
2013-01-23 18:18               ` Stephane Eranian
2013-01-23 18:50                 ` Andi Kleen
2013-01-23 18:57                   ` Stephane Eranian
2013-01-17 20:36 ` [PATCH 17/29] perf, core: Add generic transaction flags v3 Andi Kleen
2013-01-17 20:36 ` [PATCH 18/29] perf, x86: Add Haswell specific transaction flag reporting Andi Kleen
2013-01-17 20:36 ` [PATCH 19/29] perf, tools: Add support for record transaction flags v3 Andi Kleen
2013-01-17 20:36 ` [PATCH 20/29] perf, tools: Add browser support for transaction flags v5 Andi Kleen
2013-01-17 20:36 ` [PATCH 21/29] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset Andi Kleen
2013-01-17 20:36 ` [PATCH 22/29] tools, perf: Add a precise event qualifier v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 23/29] perf, x86: improve sysfs event mapping with event string Andi Kleen
2013-01-17 20:36 ` [PATCH 24/29] perf, x86: Support CPU specific sysfs events Andi Kleen
2013-01-17 20:36 ` [PATCH 25/29] perf, x86: Add Haswell TSX event aliases v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 26/29] perf, tools: Add perf stat --transaction v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 27/29] perf, x86: Add a Haswell precise instructions event v2 Andi Kleen
2013-01-17 20:36 ` [PATCH 28/29] perf, tools: Default to cpu// for events v5 Andi Kleen
2013-01-17 20:36 ` [PATCH 29/29] perf, tools: List kernel supplied event aliases in perf list v2 Andi Kleen
2013-01-24 11:39 ` perf PMU support for Haswell v7 Ingo Molnar
