linux-kernel.vger.kernel.org archive mirror
* Basic perf PMU support for Haswell v5
@ 2013-02-07 19:23 Andi Kleen
  2013-02-07 19:23 ` [PATCH 1/5] perf, x86: Add PEBSv2 record support v2 Andi Kleen
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Andi Kleen @ 2013-02-07 19:23 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, eranian

This is based on v7 of the full Haswell PMU support, but
ported to the latest perf/core and stripped down to the
bare bones.

It is only intended for very basic usage.

Most of the interesting new features are not in this patchkit
(the full version is available at
git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc.git hsw/pmu5)

Contains support for:
- Basic Haswell PMU and PEBS support
- Late unmasking of the PMI
- Support for wide counters

v2: Addressed Stephane's feedback. See individual patches for details.
v3: now even more bite-sized. Qualifier constraints merged earlier.
v4: Rename some variables, add some comments and other minor changes.
v5: Address some minor review feedback. Port to the latest perf/core.
Add some Reviewed/Tested-bys.

-Andi


* [PATCH 1/5] perf, x86: Add PEBSv2 record support v2
  2013-02-07 19:23 Basic perf PMU support for Haswell v5 Andi Kleen
@ 2013-02-07 19:23 ` Andi Kleen
  2013-02-12  9:02   ` Ingo Molnar
  2013-02-07 19:23 ` [PATCH 2/5] perf, x86: Basic Haswell PMU support v4 Andi Kleen
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Andi Kleen @ 2013-02-07 19:23 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, eranian, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add support for the v2 PEBS format. It has a superset of the v1 PEBS
fields, but has a longer record so we need to adjust the code paths.

The main advantage is the new "EventingRip" support, which directly
reports the instruction that caused the event instead of the usual
off-by-one instruction. So with precise == 2 we use it directly and no
longer need to use the LBRs and walk the basic blocks. This lowers the
overhead significantly.
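
For reference, a minimal user-space sketch (not part of the patch) of how
this precision level is requested through perf_event_open(). The event,
period and other attributes are arbitrary example values; a real profiler
would also mmap a ring buffer to consume the samples:

#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        struct perf_event_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES; /* arbitrary example event */
        attr.sample_period = 100000;            /* arbitrary example period */
        attr.sample_type = PERF_SAMPLE_IP;
        attr.precise_ip = 2;                    /* ask for the exact eventing IP */
        attr.exclude_kernel = 1;

        /* measure the calling process on any CPU */
        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }
        close(fd);
        return 0;
}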

Some other features are added in later patches.

Reviewed-by: Stephane Eranian <eranian@google.com>
v2: Rename various identifiers. Add more comments. Get rid of a cast.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event.c          |    2 +-
 arch/x86/kernel/cpu/perf_event_intel_ds.c |  113 +++++++++++++++++++++++------
 2 files changed, 91 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index bf0f01a..7d3b9bd 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -397,7 +397,7 @@ int x86_pmu_hw_config(struct perf_event *event)
 		 * check that PEBS LBR correction does not conflict with
 		 * whatever the user is asking with attr->branch_sample_type
 		 */
-		if (event->attr.precise_ip > 1) {
+		if (event->attr.precise_ip > 1 && x86_pmu.intel_cap.pebs_format < 2) {
 			u64 *br_type = &event->attr.branch_sample_type;
 
 			if (has_branch_stack(event)) {
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 826054a..3a828ed 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -41,6 +41,22 @@ struct pebs_record_nhm {
 	u64 status, dla, dse, lat;
 };
 
+/*
+ * Same as pebs_record_nhm, with two additional fields.
+ */
+struct pebs_record_fmt2 {
+	struct pebs_record_nhm nhm;
+	/* 
+	 * Real IP of the event. In the Intel documentation this
+	 * is called eventingrip.
+	 */
+	u64 ip_of_the_event;
+	/* 
+	 * TSX tuning information field: abort cycles and abort flags.
+	 */
+	u64 tsx_tuning;
+};
+
 void init_debug_store_on_cpu(int cpu)
 {
 	struct debug_store *ds = per_cpu(cpu_hw_events, cpu).ds;
@@ -559,11 +575,11 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
 {
 	/*
 	 * We cast to pebs_record_core since that is a subset of
-	 * both formats and we don't use the other fields in this
-	 * routine.
+	 * all formats.
 	 */
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
 	struct pebs_record_core *pebs = __pebs;
+	struct pebs_record_fmt2 *pebs_fmt2 = __pebs;
 	struct perf_sample_data data;
 	struct pt_regs regs;
 
@@ -588,7 +604,10 @@ static void __intel_pmu_pebs_event(struct perf_event *event,
 	regs.bp = pebs->bp;
 	regs.sp = pebs->sp;
 
-	if (event->attr.precise_ip > 1 && intel_pmu_pebs_fixup_ip(&regs))
+	if (event->attr.precise_ip > 1 && x86_pmu.intel_cap.pebs_format >= 2) {
+		regs.ip = pebs_fmt2->ip_of_the_event;
+		regs.flags |= PERF_EFLAGS_EXACT;
+	} else if (event->attr.precise_ip > 1 && intel_pmu_pebs_fixup_ip(&regs))
 		regs.flags |= PERF_EFLAGS_EXACT;
 	else
 		regs.flags &= ~PERF_EFLAGS_EXACT;
@@ -641,35 +660,21 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs)
 	__intel_pmu_pebs_event(event, iregs, at);
 }
 
-static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
+static void __intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, void *at,
+					void *top)
 {
 	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
 	struct debug_store *ds = cpuc->ds;
-	struct pebs_record_nhm *at, *top;
 	struct perf_event *event = NULL;
 	u64 status = 0;
-	int bit, n;
-
-	if (!x86_pmu.pebs_active)
-		return;
-
-	at  = (struct pebs_record_nhm *)(unsigned long)ds->pebs_buffer_base;
-	top = (struct pebs_record_nhm *)(unsigned long)ds->pebs_index;
+	int bit;
 
 	ds->pebs_index = ds->pebs_buffer_base;
 
-	n = top - at;
-	if (n <= 0)
-		return;
+	for ( ; at < top; at += x86_pmu.pebs_record_size) {
+		struct pebs_record_nhm *p = at;
 
-	/*
-	 * Should not happen, we program the threshold at 1 and do not
-	 * set a reset value.
-	 */
-	WARN_ONCE(n > x86_pmu.max_pebs_events, "Unexpected number of pebs records %d\n", n);
-
-	for ( ; at < top; at++) {
-		for_each_set_bit(bit, (unsigned long *)&at->status, x86_pmu.max_pebs_events) {
+		for_each_set_bit(bit, (unsigned long *)&p->status, x86_pmu.max_pebs_events) {
 			event = cpuc->events[bit];
 			if (!test_bit(bit, cpuc->active_mask))
 				continue;
@@ -692,6 +697,61 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
 	}
 }
 
+static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	struct debug_store *ds = cpuc->ds;
+	struct pebs_record_nhm *at, *top;
+	int n;
+
+	if (!x86_pmu.pebs_active)
+		return;
+
+	at  = (struct pebs_record_nhm *)(unsigned long)ds->pebs_buffer_base;
+	top = (struct pebs_record_nhm *)(unsigned long)ds->pebs_index;
+
+	ds->pebs_index = ds->pebs_buffer_base;
+
+	n = top - at;
+	if (n <= 0)
+		return;
+
+	/*
+	 * Should not happen, we program the threshold at 1 and do not
+	 * set a reset value.
+	 */
+	WARN_ONCE(n > x86_pmu.max_pebs_events,
+		  "Unexpected number of pebs records %d\n", n);
+
+	return __intel_pmu_drain_pebs_nhm(iregs, at, top);
+}
+
+static void intel_pmu_drain_pebs_fmt2(struct pt_regs *iregs)
+{
+	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
+	struct debug_store *ds = cpuc->ds;
+	struct pebs_record_fmt2 *at, *top;
+	int n;
+
+	if (!x86_pmu.pebs_active)
+		return;
+
+	at  = (struct pebs_record_fmt2 *)(unsigned long)ds->pebs_buffer_base;
+	top = (struct pebs_record_fmt2 *)(unsigned long)ds->pebs_index;
+
+	n = top - at;
+	if (n <= 0)
+		return;
+	/*
+	 * Should not happen, we program the threshold at 1 and do not
+	 * set a reset value.
+	 */
+	WARN_ONCE(n > x86_pmu.max_pebs_events,
+		  "Unexpected number of pebs records %d\n", n);
+
+	return __intel_pmu_drain_pebs_nhm(iregs, at, top);
+}
+
 /*
  * BTS, PEBS probe and setup
  */
@@ -723,6 +783,13 @@ void intel_ds_init(void)
 			x86_pmu.drain_pebs = intel_pmu_drain_pebs_nhm;
 			break;
 
+		case 2:
+			printk(KERN_CONT "PEBS fmt2%c, ", pebs_type);
+			x86_pmu.pebs_record_size = 
+				sizeof(struct pebs_record_fmt2);
+			x86_pmu.drain_pebs = intel_pmu_drain_pebs_fmt2;
+			break;
+
 		default:
 			printk(KERN_CONT "no PEBS fmt%d%c, ", format, pebs_type);
 			x86_pmu.pebs = 0;
-- 
1.7.7.6



* [PATCH 2/5] perf, x86: Basic Haswell PMU support v4
  2013-02-07 19:23 Basic perf PMU support for Haswell v5 Andi Kleen
  2013-02-07 19:23 ` [PATCH 1/5] perf, x86: Add PEBSv2 record support v2 Andi Kleen
@ 2013-02-07 19:23 ` Andi Kleen
  2013-02-07 19:23 ` [PATCH 3/5] perf, x86: Basic Haswell PEBS " Andi Kleen
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Andi Kleen @ 2013-02-07 19:23 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, eranian, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add basic Haswell PMU support.

It is similar to SandyBridge, but has a few new events and two
new counter bits.

The new counter flags need to be prevented from being set on
fixed counters, while still being allowed on generic counters.

We also add support for the counter 2 constraint to handle
all raw events.
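
As an illustration only (not from the patch), this is roughly how the new
bits would be used from user space through the raw event interface. The
event code 0x3c (unhalted core cycles) and the EXAMPLE_* names are made up
for this sketch; the bit positions mirror the HSW_INTX and
HSW_INTX_CHECKPOINTED defines added below:

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#define EXAMPLE_HSW_INTX              (1ULL << 32) /* count only inside transactions */
#define EXAMPLE_HSW_INTX_CHECKPOINTED (1ULL << 33) /* also forces the event onto counter 2 */

int main(void)
{
        struct perf_event_attr attr;
        uint64_t count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;
        /* unhalted core cycles (event 0x3c), counted only inside transactions */
        attr.config = 0x3c | EXAMPLE_HSW_INTX;
        attr.exclude_kernel = 1;

        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }
        /* with no transactions executed here, the in-TX count should read back as 0 */
        if (read(fd, &count, sizeof(count)) == sizeof(count))
                printf("in-transaction cycles: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
}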

Contains fixes from Stephane Eranian.

v2: Folded TSX bits into standard FIXED_EVENT_CONSTRAINTS
v3: Use SNB LBR init code. Comment fix (Stephane Eranian)
v4: Add the counter2 constraints. Fix comment in the right place.
Reviewed-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/include/asm/perf_event.h      |    5 ++
 arch/x86/kernel/cpu/perf_event.h       |    5 ++-
 arch/x86/kernel/cpu/perf_event_intel.c |   71 ++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 2234eaaec..80ab268 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -29,8 +29,13 @@
 #define ARCH_PERFMON_EVENTSEL_INV			(1ULL << 23)
 #define ARCH_PERFMON_EVENTSEL_CMASK			0xFF000000ULL
 
+#define HSW_INTX					(1ULL << 32)
+#define HSW_INTX_CHECKPOINTED				(1ULL << 33)
+
 #define AMD64_EVENTSEL_GUESTONLY			(1ULL << 40)
 #define AMD64_EVENTSEL_HOSTONLY				(1ULL << 41)
+#define AMD_PERFMON_EVENTSEL_GUESTONLY			(1ULL << 40)
+#define AMD_PERFMON_EVENTSEL_HOSTONLY			(1ULL << 41)
 
 #define AMD64_EVENTSEL_EVENT	\
 	(ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 7f5c75c..9845672 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -219,11 +219,14 @@ struct cpu_hw_events {
  *  - inv
  *  - edge
  *  - cnt-mask
+ *  - intx
+ *  - intx_cp
  *  The other filters are supported by fixed counters.
  *  The any-thread option is supported starting with v3.
  */
+#define FIXED_EVENT_FLAGS (X86_RAW_EVENT_MASK|HSW_INTX|HSW_INTX_CHECKPOINTED)
 #define FIXED_EVENT_CONSTRAINT(c, n)	\
-	EVENT_CONSTRAINT(c, (1ULL << (32+n)), X86_RAW_EVENT_MASK)
+	EVENT_CONSTRAINT(c, (1ULL << (32+n)), FIXED_EVENT_FLAGS)
 
 /*
  * Constraint on the Event code + UMask
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 93b9e11..58b737d 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -13,6 +13,7 @@
 #include <linux/slab.h>
 #include <linux/export.h>
 
+#include <asm/cpufeature.h>
 #include <asm/hardirq.h>
 #include <asm/apic.h>
 
@@ -133,6 +134,17 @@ static struct extra_reg intel_snb_extra_regs[] __read_mostly = {
 	EVENT_EXTRA_END
 };
 
+static struct event_constraint intel_hsw_event_constraints[] =
+{
+	FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+	FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
+	FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
+	INTEL_EVENT_CONSTRAINT(0x48, 0x4), /* L1D_PEND_MISS.* */
+	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
+	INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
+	EVENT_CONSTRAINT_END
+};
+
 static u64 intel_pmu_event_map(int hw_event)
 {
 	return intel_perfmon_event_map[hw_event];
@@ -1585,6 +1597,45 @@ static void core_pmu_enable_all(int added)
 	}
 }
 
+static int hsw_hw_config(struct perf_event *event)
+{
+	int ret = intel_pmu_hw_config(event);
+
+	if (ret)
+		return ret;
+	if (!boot_cpu_has(X86_FEATURE_RTM) && !boot_cpu_has(X86_FEATURE_HLE))
+		return 0;
+	event->hw.config |= event->attr.config & (HSW_INTX|HSW_INTX_CHECKPOINTED);
+
+	/*
+	 * INTX/INTX-CP do not play well with PEBS or ANY thread mode.
+	 */
+	if ((event->hw.config & (HSW_INTX|HSW_INTX_CHECKPOINTED)) &&
+	     ((event->hw.config & ARCH_PERFMON_EVENTSEL_ANY) ||
+	      event->attr.precise_ip > 0))
+		return -EOPNOTSUPP;
+
+	return 0;
+}
+
+static struct event_constraint counter2_constraint =
+			EVENT_CONSTRAINT(0, 0x4, 0);
+
+static struct event_constraint *
+hsw_get_event_constraints(struct cpu_hw_events *cpuc, struct perf_event *event)
+{
+	struct event_constraint *c = intel_get_event_constraints(cpuc, event);
+
+	/* Handle special quirk on intx_checkpointed only in counter 2 */
+	if (event->hw.config & HSW_INTX_CHECKPOINTED) {
+		if (c->idxmsk64 & (1U << 2))
+			return &counter2_constraint;
+		return &emptyconstraint;
+	}
+
+	return c;
+}
+
 PMU_FORMAT_ATTR(event,	"config:0-7"	);
 PMU_FORMAT_ATTR(umask,	"config:8-15"	);
 PMU_FORMAT_ATTR(edge,	"config:18"	);
@@ -2107,6 +2158,26 @@ __init int intel_pmu_init(void)
 		break;
 
 
+	case 60: /* Haswell Client */
+	case 70:
+	case 71:
+		memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
+		       sizeof(hw_cache_event_ids));
+
+		intel_pmu_lbr_init_snb();
+
+		x86_pmu.event_constraints = intel_hsw_event_constraints;
+
+		x86_pmu.extra_regs = intel_snb_extra_regs;
+		/* all extra regs are per-cpu when HT is on */
+		x86_pmu.er_flags |= ERF_HAS_RSP_1;
+		x86_pmu.er_flags |= ERF_NO_HT_SHARING;
+
+		x86_pmu.hw_config = hsw_hw_config;
+		x86_pmu.get_event_constraints = hsw_get_event_constraints;
+		pr_cont("Haswell events, ");
+		break;
+
 	default:
 		switch (x86_pmu.version) {
 		case 1:
-- 
1.7.7.6



* [PATCH 3/5] perf, x86: Basic Haswell PEBS support v4
  2013-02-07 19:23 Basic perf PMU support for Haswell v5 Andi Kleen
  2013-02-07 19:23 ` [PATCH 1/5] perf, x86: Add PEBSv2 record support v2 Andi Kleen
  2013-02-07 19:23 ` [PATCH 2/5] perf, x86: Basic Haswell PMU support v4 Andi Kleen
@ 2013-02-07 19:23 ` Andi Kleen
  2013-02-07 19:23 ` [PATCH 4/5] perf, x86: Support full width counting v2 Andi Kleen
  2013-02-07 19:23 ` [PATCH 5/5] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset Andi Kleen
  4 siblings, 0 replies; 11+ messages in thread
From: Andi Kleen @ 2013-02-07 19:23 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, eranian, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add basic PEBS support for Haswell.
The constraints are similar to SandyBridge with a few new events.

v2: Re-add missing pebs_aliases
v3: Re-add missing hunk. Fix some constraints.
v4: Fix typo in PEBS event table (Stephane Eranian)
Reviewed-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event.h          |    2 ++
 arch/x86/kernel/cpu/perf_event_intel.c    |    6 ++++--
 arch/x86/kernel/cpu/perf_event_intel_ds.c |   29 +++++++++++++++++++++++++++++
 3 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 9845672..ded4667 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -591,6 +591,8 @@ extern struct event_constraint intel_snb_pebs_event_constraints[];
 
 extern struct event_constraint intel_ivb_pebs_event_constraints[];
 
+extern struct event_constraint intel_hsw_pebs_event_constraints[];
+
 struct event_constraint *intel_pebs_constraints(struct perf_event *event);
 
 void intel_pmu_pebs_enable(struct perf_event *event);
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 58b737d..aa48048 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -827,7 +827,8 @@ static inline bool intel_pmu_needs_lbr_smpl(struct perf_event *event)
 		return true;
 
 	/* implicit branch sampling to correct PEBS skid */
-	if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1)
+	if (x86_pmu.intel_cap.pebs_trap && event->attr.precise_ip > 1 &&
+	    x86_pmu.intel_cap.pebs_format < 2)
 		return true;
 
 	return false;
@@ -2167,8 +2168,9 @@ __init int intel_pmu_init(void)
 		intel_pmu_lbr_init_snb();
 
 		x86_pmu.event_constraints = intel_hsw_event_constraints;
-
+		x86_pmu.pebs_constraints = intel_hsw_pebs_event_constraints;
 		x86_pmu.extra_regs = intel_snb_extra_regs;
+		x86_pmu.pebs_aliases = intel_pebs_aliases_snb;
 		/* all extra regs are per-cpu when HT is on */
 		x86_pmu.er_flags |= ERF_HAS_RSP_1;
 		x86_pmu.er_flags |= ERF_NO_HT_SHARING;
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index 3a828ed..bae2199 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -437,6 +437,35 @@ struct event_constraint intel_ivb_pebs_event_constraints[] = {
         EVENT_CONSTRAINT_END
 };
 
+struct event_constraint intel_hsw_pebs_event_constraints[] = {
+	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PRECDIST */
+	INTEL_UEVENT_CONSTRAINT(0x01c2, 0xf), /* UOPS_RETIRED.ALL */
+	INTEL_UEVENT_CONSTRAINT(0x02c2, 0xf), /* UOPS_RETIRED.RETIRE_SLOTS */
+	INTEL_EVENT_CONSTRAINT(0xc4, 0xf),    /* BR_INST_RETIRED.* */
+	INTEL_UEVENT_CONSTRAINT(0x01c5, 0xf), /* BR_MISP_RETIRED.CONDITIONAL */
+	INTEL_UEVENT_CONSTRAINT(0x04c5, 0xf), /* BR_MISP_RETIRED.ALL_BRANCHES */
+	INTEL_UEVENT_CONSTRAINT(0x20c5, 0xf), /* BR_MISP_RETIRED.NEAR_TAKEN */
+	INTEL_EVENT_CONSTRAINT(0xcd, 0x8),    /* MEM_TRANS_RETIRED.* */
+	INTEL_UEVENT_CONSTRAINT(0x11d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_LOADS */
+	INTEL_UEVENT_CONSTRAINT(0x12d0, 0xf), /* MEM_UOPS_RETIRED.STLB_MISS_STORES */
+	INTEL_UEVENT_CONSTRAINT(0x21d0, 0xf), /* MEM_UOPS_RETIRED.LOCK_LOADS */
+	INTEL_UEVENT_CONSTRAINT(0x41d0, 0xf), /* MEM_UOPS_RETIRED.SPLIT_LOADS */
+	INTEL_UEVENT_CONSTRAINT(0x42d0, 0xf), /* MEM_UOPS_RETIRED.SPLIT_STORES */
+	INTEL_UEVENT_CONSTRAINT(0x81d0, 0xf), /* MEM_UOPS_RETIRED.ALL_LOADS */
+	INTEL_UEVENT_CONSTRAINT(0x82d0, 0xf), /* MEM_UOPS_RETIRED.ALL_STORES */
+	INTEL_UEVENT_CONSTRAINT(0x01d1, 0xf), /* MEM_LOAD_UOPS_RETIRED.L1_HIT */
+	INTEL_UEVENT_CONSTRAINT(0x02d1, 0xf), /* MEM_LOAD_UOPS_RETIRED.L2_HIT */
+	INTEL_UEVENT_CONSTRAINT(0x04d1, 0xf), /* MEM_LOAD_UOPS_RETIRED.L3_HIT */
+	INTEL_UEVENT_CONSTRAINT(0x40d1, 0xf), /* MEM_LOAD_UOPS_RETIRED.HIT_LFB */
+	INTEL_UEVENT_CONSTRAINT(0x01d2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS */
+	INTEL_UEVENT_CONSTRAINT(0x02d2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT */
+	INTEL_UEVENT_CONSTRAINT(0x01d3, 0xf), /* MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM */
+	INTEL_UEVENT_CONSTRAINT(0x04c8, 0xf), /* HLE_RETIRED.Abort */
+	INTEL_UEVENT_CONSTRAINT(0x04c9, 0xf), /* RTM_RETIRED.Abort */
+
+	EVENT_CONSTRAINT_END
+};
+
 struct event_constraint *intel_pebs_constraints(struct perf_event *event)
 {
 	struct event_constraint *c;
-- 
1.7.7.6



* [PATCH 4/5] perf, x86: Support full width counting v2
  2013-02-07 19:23 Basic perf PMU support for Haswell v5 Andi Kleen
                   ` (2 preceding siblings ...)
  2013-02-07 19:23 ` [PATCH 3/5] perf, x86: Basic Haswell PEBS " Andi Kleen
@ 2013-02-07 19:23 ` Andi Kleen
  2013-02-12  8:42   ` Ingo Molnar
  2013-02-07 19:23 ` [PATCH 5/5] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset Andi Kleen
  4 siblings, 1 reply; 11+ messages in thread
From: Andi Kleen @ 2013-02-07 19:23 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, eranian, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Recent Intel CPUs like Haswell and IvyBridge have a new alternative MSR
range for perfctrs that allows writing the full counter width. Enable this
range if the hardware reports it using a new capability bit.

This lowers the overhead of perf stat slightly because it needs fewer
interrupts to accumulate the counter value. On Haswell it also avoids some
problems with TSX aborting when the end of the counter range is reached.
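
For reference, a small user-space sketch (not part of the patch) that checks
the new capability bit through the msr driver. The MSR address 0x345
(IA32_PERF_CAPABILITIES) and bit 13 correspond to the fw_write field added
to the perf_capabilities union below, but treat them as assumptions of this
example; it needs root and the msr module loaded:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        uint64_t caps;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0) {
                perror("open /dev/cpu/0/msr");
                return 1;
        }
        /* the msr driver uses the file offset as the MSR address */
        if (pread(fd, &caps, sizeof(caps), 0x345) != sizeof(caps)) {
                perror("read IA32_PERF_CAPABILITIES");
                close(fd);
                return 1;
        }
        printf("full-width counter writes %ssupported\n",
               (caps >> 13) & 1 ? "" : "not ");
        close(fd);
        return 0;
}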

v2: Print the feature at boot
Reviewed-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/include/uapi/asm/msr-index.h  |    3 +++
 arch/x86/kernel/cpu/perf_event.h       |    1 +
 arch/x86/kernel/cpu/perf_event_intel.c |    7 +++++++
 3 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/uapi/asm/msr-index.h b/arch/x86/include/uapi/asm/msr-index.h
index 433a59f..af41a77 100644
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -163,6 +163,9 @@
 #define MSR_KNC_EVNTSEL0               0x00000028
 #define MSR_KNC_EVNTSEL1               0x00000029
 
+/* Alternative perfctr range with full access. */
+#define MSR_IA32_PMC0			0x000004c1
+
 /* AMD64 MSRs. Not complete. See the architecture manual for a more
    complete list. */
 
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index ded4667..adaa0b0 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -278,6 +278,7 @@ union perf_capabilities {
 		u64	pebs_arch_reg:1;
 		u64	pebs_format:4;
 		u64	smm_freeze:1;
+		u64	fw_write:1;
 	};
 	u64	capabilities;
 };
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index aa48048..06dcc0c 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2228,5 +2228,12 @@ __init int intel_pmu_init(void)
 		}
 	}
 
+	/* Support full width counters using alternative MSR range */
+	if (x86_pmu.intel_cap.fw_write) {
+		x86_pmu.max_period = x86_pmu.cntval_mask;
+		x86_pmu.perfctr = MSR_IA32_PMC0;
+		pr_cont("full-width counters, ");
+	}
+
 	return 0;
 }
-- 
1.7.7.6



* [PATCH 5/5] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset
  2013-02-07 19:23 Basic perf PMU support for Haswell v5 Andi Kleen
                   ` (3 preceding siblings ...)
  2013-02-07 19:23 ` [PATCH 4/5] perf, x86: Support full width counting v2 Andi Kleen
@ 2013-02-07 19:23 ` Andi Kleen
  2013-02-12  8:43   ` Ingo Molnar
  4 siblings, 1 reply; 11+ messages in thread
From: Andi Kleen @ 2013-02-07 19:23 UTC (permalink / raw)
  To: mingo; +Cc: linux-kernel, eranian, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

This avoids some problems with spurious PMIs on Haswell.
Haswell seems to behave more like P4 in this regard. Do
the same thing as the P4 perf handler by unmasking
the NMI only at the end. This shouldn't make any difference
for earlier non-P4 cores.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c |   16 ++++++----------
 1 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 06dcc0c..00144a5 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1122,16 +1122,6 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 
 	cpuc = &__get_cpu_var(cpu_hw_events);
 
-	/*
-	 * Some chipsets need to unmask the LVTPC in a particular spot
-	 * inside the nmi handler.  As a result, the unmasking was pushed
-	 * into all the nmi handlers.
-	 *
-	 * This handler doesn't seem to have any issues with the unmasking
-	 * so it was left at the top.
-	 */
-	apic_write(APIC_LVTPC, APIC_DM_NMI);
-
 	intel_pmu_disable_all();
 	handled = intel_pmu_drain_bts_buffer();
 	status = intel_pmu_get_status();
@@ -1191,6 +1181,12 @@ again:
 
 done:
 	intel_pmu_enable_all(0);
+	/*
+	 * Only unmask the NMI after the overflow counters
+	 * have been reset. This avoids spurious NMIs on
+	 * Haswell CPUs.
+	 */
+	apic_write(APIC_LVTPC, APIC_DM_NMI);
 	return handled;
 }
 
-- 
1.7.7.6



* Re: [PATCH 4/5] perf, x86: Support full width counting v2
  2013-02-07 19:23 ` [PATCH 4/5] perf, x86: Support full width counting v2 Andi Kleen
@ 2013-02-12  8:42   ` Ingo Molnar
  0 siblings, 0 replies; 11+ messages in thread
From: Ingo Molnar @ 2013-02-12  8:42 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-kernel, eranian, Andi Kleen, Peter Zijlstra,
	Arnaldo Carvalho de Melo


* Andi Kleen <andi@firstfloor.org> wrote:

> From: Andi Kleen <ak@linux.intel.com>
> 
> Recent Intel CPUs like Haswell and IvyBridge have a new alternative MSR
> range for perfctrs that allows writing the full counter width. Enable this
> range if the hardware reports it using a new capability bit.
> 
> This lowers the overhead of perf stat slightly because it needs fewer
> interrupts to accumulate the counter value. On Haswell it also avoids some
> problems with TSX aborting when the end of the counter range is reached.
> 
> v2: Print the feature at boot
> Reviewed-by: Stephane Eranian <eranian@google.com>
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
>  arch/x86/include/uapi/asm/msr-index.h  |    3 +++
>  arch/x86/kernel/cpu/perf_event.h       |    1 +
>  arch/x86/kernel/cpu/perf_event_intel.c |    7 +++++++
>  3 files changed, 11 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/include/uapi/asm/msr-index.h b/arch/x86/include/uapi/asm/msr-index.h
> index 433a59f..af41a77 100644
> --- a/arch/x86/include/uapi/asm/msr-index.h
> +++ b/arch/x86/include/uapi/asm/msr-index.h
> @@ -163,6 +163,9 @@
>  #define MSR_KNC_EVNTSEL0               0x00000028
>  #define MSR_KNC_EVNTSEL1               0x00000029
>  
> +/* Alternative perfctr range with full access. */
> +#define MSR_IA32_PMC0			0x000004c1
> +
>  /* AMD64 MSRs. Not complete. See the architecture manual for a more
>     complete list. */
>  
> diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
> index ded4667..adaa0b0 100644
> --- a/arch/x86/kernel/cpu/perf_event.h
> +++ b/arch/x86/kernel/cpu/perf_event.h
> @@ -278,6 +278,7 @@ union perf_capabilities {
>  		u64	pebs_arch_reg:1;
>  		u64	pebs_format:4;
>  		u64	smm_freeze:1;
> +		u64	fw_write:1;

No brownies for obfuscation: that should be named 
full_width_write, not a meaningless abbreviation that anyone 
crossing the code could mistake for 'firewall write' or 'forward 
write' or anything.

Also a comment line at the field should explain its meaning.

Thanks,

	Ingo


* Re: [PATCH 5/5] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset
  2013-02-07 19:23 ` [PATCH 5/5] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset Andi Kleen
@ 2013-02-12  8:43   ` Ingo Molnar
  2013-02-12 15:14     ` Andi Kleen
  0 siblings, 1 reply; 11+ messages in thread
From: Ingo Molnar @ 2013-02-12  8:43 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-kernel, eranian, Andi Kleen


* Andi Kleen <andi@firstfloor.org> wrote:

> From: Andi Kleen <ak@linux.intel.com>
> 
> This avoids some problems with spurious PMIs on Haswell.
> Haswell seems to behave more like P4 in this regard. Do
> the same thing as the P4 perf handler by unmasking
> the NMI only at the end. This shouldn't make any difference
> for earlier non-P4 cores.

Was this stress-tested on all affected main CPU types, or only 
on Haswell?

Thanks,

	Ingo


* Re: [PATCH 1/5] perf, x86: Add PEBSv2 record support v2
  2013-02-07 19:23 ` [PATCH 1/5] perf, x86: Add PEBSv2 record support v2 Andi Kleen
@ 2013-02-12  9:02   ` Ingo Molnar
  0 siblings, 0 replies; 11+ messages in thread
From: Ingo Molnar @ 2013-02-12  9:02 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-kernel, eranian, Andi Kleen


* Andi Kleen <andi@firstfloor.org> wrote:

> From: Andi Kleen <ak@linux.intel.com>
> 
> Add support for the v2 PEBS format. It has a superset of the v1 PEBS
> fields, but has a longer record so we need to adjust the code paths.

The changelog talks about a 'v2' PEBS format - the code talks 
about 'fmt2':

> +/*
> + * Same as pebs_record_nhm, with two additional fields.
> + */
> +struct pebs_record_fmt2 {

In light of the existing pebs_record_core and pebs_record_nhm
naming, a new pebs_record_hsw would be more natural.

Or if numbered versions are used, all PEBS record hardware 
versions should be numbered.

Right now your extension makes it all look pretty inconsistent.

Thanks,

	Ingo


* Re: [PATCH 5/5] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset
  2013-02-12  8:43   ` Ingo Molnar
@ 2013-02-12 15:14     ` Andi Kleen
  2013-02-13  9:10       ` Ingo Molnar
  0 siblings, 1 reply; 11+ messages in thread
From: Andi Kleen @ 2013-02-12 15:14 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Andi Kleen, linux-kernel, eranian, Andi Kleen

On Tue, Feb 12, 2013 at 09:43:46AM +0100, Ingo Molnar wrote:
> Was this stress-tested on all affected main CPU types, or only 
> on Haswell?

I tested it on Haswell and Ivy Bridge. I can also try
Westmere and a Saltwell (Atom), but for the majority of other family 6
systems I'll need to rely on the community.

Whitelisting is somewhat difficult because it affects the architectural
mode too.

I don't really expect problems from this change; we should probably
have always done it like this.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.


* Re: [PATCH 5/5] perf, x86: Move NMI clearing to end of PMI handler after the counter registers are reset
  2013-02-12 15:14     ` Andi Kleen
@ 2013-02-13  9:10       ` Ingo Molnar
  0 siblings, 0 replies; 11+ messages in thread
From: Ingo Molnar @ 2013-02-13  9:10 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-kernel, eranian, Andi Kleen, Peter Zijlstra,
	Arnaldo Carvalho de Melo


* Andi Kleen <andi@firstfloor.org> wrote:

> On Tue, Feb 12, 2013 at 09:43:46AM +0100, Ingo Molnar wrote:
> > Was this stress-tested on all affected main CPU types, or only 
> > on Haswell?
> 
> I tested it on Haswell and Ivy Bridge. I can also try Westmere 
> and a Saltwell (Atom), but for the majority of other family 6 
> systems I'll need to rely on the community.

The systems you tested should be OK.

> Whitelisting is somewhat difficult because it affects the 
> architectural mode too.

Yeah, I'd rather avoid that.

> I don't really expect problems from this change; we should 
> probably have always done it like this.

I expect potential problems: the ordering of the operations in
the NMI handler was always very fragile, resulting in hard-to-debug
hangs that sometimes needed hours of very intense PMU
stress-testing to trigger.

That is why I asked how heavily you've tested this. Once the 
series passes review I'll keep this patch last to make it easy 
to revert/zap if it causes problems.

Thanks,

	Ingo
