* [RFC][PATCH 0/4] perf: Enable symbolic event names
@ 2015-05-01  7:05 Sukadev Bhattiprolu
  2015-05-01  7:05 ` [RFC][PATCH 1/4] perf: Create a table of Power7 PMU events Sukadev Bhattiprolu
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-01  7:05 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: linuxppc-dev, linux-kernel

Implement the ability to specify Power PMU events by their symbolic
names rather than by raw codes. This approach pulls tables of the Power7
and Power8 PMU events into the perf source tree and uses those tables
to create aliases for the PMU events. With these aliases users can run:

	perf stat -e PM_1PLUS_PPC_CMPL:ku sleep 1
or
	perf stat -e cpu/PM_VSU_SINGLE/ sleep 1
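
Roughly, each table entry just pairs a symbolic name with its raw event
code, and the alias for an entry boils down to the familiar
"event=<code>" term string that perf already understands. A minimal
sketch of the idea (the struct layout and names here are illustrative,
not necessarily what the final patches use):

	#include <stdio.h>

	/* illustrative layout; the real definition comes with pmu-events.h */
	struct perf_pmu_event {
		const char *name;
		unsigned long code;	/* raw PMU event code */
		const char *short_desc;
		const char *long_desc;
	};

	/* two entries lifted from the Power7 table in PATCH 1/4 */
	static const struct perf_pmu_event demo_events[] = {
		{ .name = "PM_CYC",            .code = 0x1e },
		{ .name = "PM_1PLUS_PPC_CMPL", .code = 0x100f2 },
	};

	int main(void)
	{
		unsigned int i;

		/* each entry becomes an alias equivalent to "event=<code>" */
		for (i = 0; i < sizeof(demo_events) / sizeof(demo_events[0]); i++)
			printf("%s -> event=%#lx\n", demo_events[i].name,
			       demo_events[i].code);
		return 0;
	}

So, for example, "-e cpu/PM_1PLUS_PPC_CMPL/" should end up behaving like
"-e cpu/event=0x100f2/".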

This is an early POC patchset based on discussions with Jiri Olsa,
Michael Ellerman and Ingo Molnar. Lightly tested on Power7 and Power8.

Could other architectures implement arch_get_events_table() and similarly
use symbolic event names?
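
To make the question concrete, one possible shape of that hook, assuming
a weak generic fallback that each architecture with a table overrides
(whether the POC actually does it this way is an implementation detail):

	#include <stddef.h>

	struct perf_pmu_event;	/* layout as in the sketch above */

	/* generic fallback: this architecture has no symbolic event table */
	__attribute__((weak)) const struct perf_pmu_event *
	arch_get_events_table(void)
	{
		return NULL;
	}

An architecture would then provide a strong definition under its
tools/perf/arch/<arch>/util/ directory that returns its own table, the
way the powerpc patches presumably return the Power7/Power8 tables.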

I am also assuming that if header files like power8-events.h are easily
readable, we no longer need the JSON files?

TODO:
	- Maybe translate event names to lower-case?
	- Allow perf to process event descriptions (need Andi Kleen's patch)


Sukadev Bhattiprolu (4):
  perf: Create a table of Power7 PMU events
  perf: Create a table of Power8 PMU events
  perf/powerpc: Move mfspr and friends to header file
  perf: Create aliases for Power PMU events

 tools/perf/arch/powerpc/util/Build           |    2 +-
 tools/perf/arch/powerpc/util/header.c        |    9 +-
 tools/perf/arch/powerpc/util/header.h        |    9 +
 tools/perf/arch/powerpc/util/pmu-events.c    |   52 +
 tools/perf/arch/powerpc/util/pmu-events.h    |   17 +
 tools/perf/arch/powerpc/util/power7-events.h | 3315 +++++++++++++
 tools/perf/arch/powerpc/util/power8-events.h | 6408 ++++++++++++++++++++++++++
 tools/perf/util/pmu.c                        |   77 +
 tools/perf/util/pmu.h                        |   10 +
 9 files changed, 9890 insertions(+), 9 deletions(-)
 create mode 100644 tools/perf/arch/powerpc/util/header.h
 create mode 100644 tools/perf/arch/powerpc/util/pmu-events.c
 create mode 100644 tools/perf/arch/powerpc/util/pmu-events.h
 create mode 100644 tools/perf/arch/powerpc/util/power7-events.h
 create mode 100644 tools/perf/arch/powerpc/util/power8-events.h

-- 
1.7.9.5



* [RFC][PATCH 1/4] perf: Create a table of Power7 PMU events
  2015-05-01  7:05 [RFC][PATCH 0/4] perf: Enable symbolic event names Sukadev Bhattiprolu
@ 2015-05-01  7:05 ` Sukadev Bhattiprolu
  2015-05-01  7:05 ` [RFC][PATCH 2/4] perf: Create a table of Power8 " Sukadev Bhattiprolu
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-01  7:05 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: linuxppc-dev, linux-kernel

This table will be used in a follow-on patch to allow specifying
Power7 events by name rather than by their raw codes.
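
For reference, the follow-on patch essentially only needs a name-to-code
lookup over this table. A minimal, hypothetical helper (the struct
layout, the helper name and the explicit length parameter are
placeholders, not necessarily what the actual patch does):

	#include <stddef.h>
	#include <strings.h>	/* strcasecmp() */

	struct perf_pmu_event {
		const char *name;
		unsigned long code;
		const char *short_desc;
		const char *long_desc;
	};

	/* 'n' would be something like ARRAY_SIZE(power7_pmu_events) */
	static const struct perf_pmu_event *
	power_pmu_find_event(const struct perf_pmu_event *tbl, size_t n,
			     const char *name)
	{
		size_t i;

		/* case-insensitive, so PM_CYC and pm_cyc both resolve */
		for (i = 0; i < n; i++)
			if (!strcasecmp(tbl[i].name, name))
				return &tbl[i];
		return NULL;
	}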

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---
 tools/perf/arch/powerpc/util/power7-events.h | 3315 ++++++++++++++++++++++++++
 1 file changed, 3315 insertions(+)
 create mode 100644 tools/perf/arch/powerpc/util/power7-events.h

diff --git a/tools/perf/arch/powerpc/util/power7-events.h b/tools/perf/arch/powerpc/util/power7-events.h
new file mode 100644
index 0000000..a2f928b
--- /dev/null
+++ b/tools/perf/arch/powerpc/util/power7-events.h
@@ -0,0 +1,3315 @@
+#ifndef __POWER7_EVENTS_H__
+#define __POWER7_EVENTS_H__
+
+/*
+* File:    power7-events.h
+* CVS:
+* Author:  Corey Ashford
+*          cjashfor@us.ibm.com
+* Mods:    Sukadev Bhattiprolu
+*          sukadev@linux.vnet.ibm.com
+* Mods:    <your name here>
+*          <your email address>
+*
+* (C) Copyright IBM Corporation, 2009.  All Rights Reserved.
+* Contributed by Corey Ashford <cjashfor@us.ibm.com>
+*
+* Note: This code was generated based on power7-events.h in libpfm4
+*
+* Documentation on the PMU events can be found at:
+*  http://www.power.org/documentation/comprehensive-pmu-event-reference-power7
+*/
+
+static const struct perf_pmu_event power7_pmu_events[] = {
+{
+	.name = "PM_IC_DEMAND_L2_BR_ALL",
+	.code = 0x4898,
+	.short_desc = " L2 I cache demand request due to BHT or redirect",
+	.long_desc = " L2 I cache demand request due to BHT or redirect",
+},
+{
+	.name = "PM_GCT_UTIL_7_TO_10_SLOTS",
+	.code = 0x20a0,
+	.short_desc = "GCT Utilization 7-10 entries",
+	.long_desc = "GCT Utilization 7-10 entries",
+},
+{
+	.name = "PM_PMC2_SAVED",
+	.code = 0x10022,
+	.short_desc = "PMC2 Rewind Value saved",
+	.long_desc = "PMC2 was counting speculatively. The speculative condition was met and the counter value was committed by copying it to the backup register.",
+},
+{
+	.name = "PM_CMPLU_STALL_DFU",
+	.code = 0x2003c,
+	.short_desc = "Completion stall caused by Decimal Floating Point Unit",
+	.long_desc = "Completion stall caused by Decimal Floating Point Unit",
+},
+{
+	.name = "PM_VSU0_16FLOP",
+	.code = 0xa0a4,
+	.short_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt)  ",
+	.long_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt)  ",
+},
+{
+	.name = "PM_MRK_LSU_DERAT_MISS",
+	.code = 0x3d05a,
+	.short_desc = "Marked DERAT Miss",
+	.long_desc = "Marked DERAT Miss",
+},
+{
+	.name = "PM_MRK_ST_CMPL",
+	.code = 0x10034,
+	.short_desc = "marked  store finished (was complete)",
+	.long_desc = "A sampled store has completed (data home)",
+},
+{
+	.name = "PM_NEST_PAIR3_ADD",
+	.code = 0x40881,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair3 ADD",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair3 ADD",
+},
+{
+	.name = "PM_L2_ST_DISP",
+	.code = 0x46180,
+	.short_desc = "All successful store dispatches",
+	.long_desc = "All successful store dispatches",
+},
+{
+	.name = "PM_L2_CASTOUT_MOD",
+	.code = 0x16180,
+	.short_desc = "L2 Castouts - Modified (M, Mu, Me)",
+	.long_desc = "An L2 line in the Modified state was castout. Total for all slices.",
+},
+{
+	.name = "PM_ISEG",
+	.code = 0x20a4,
+	.short_desc = "ISEG Exception",
+	.long_desc = "ISEG Exception",
+},
+{
+	.name = "PM_MRK_INST_TIMEO",
+	.code = 0x40034,
+	.short_desc = "marked Instruction finish timeout ",
+	.long_desc = "The number of instructions finished since the last progress indicator from a marked instruction exceeded the threshold. The marked instruction was flushed.",
+},
+{
+	.name = "PM_L2_RCST_DISP_FAIL_ADDR",
+	.code = 0x36282,
+	.short_desc = " L2  RC store dispatch attempt failed due to address collision with RC/CO/SN/SQ",
+	.long_desc = " L2  RC store dispatch attempt failed due to address collision with RC/CO/SN/SQ",
+},
+{
+	.name = "PM_LSU1_DC_PREF_STREAM_CONFIRM",
+	.code = 0xd0b6,
+	.short_desc = "LS1 Dcache prefetch stream confirmed",
+	.long_desc = "LS1 Dcache prefetch stream confirmed",
+},
+{
+	.name = "PM_IERAT_WR_64K",
+	.code = 0x40be,
+	.short_desc = "large page 64k ",
+	.long_desc = "large page 64k ",
+},
+{
+	.name = "PM_MRK_DTLB_MISS_16M",
+	.code = 0x4d05e,
+	.short_desc = "Marked Data TLB misses for 16M page",
+	.long_desc = "Data TLB references to 16M pages by a marked instruction that missed the TLB. Page size is determined at TLB reload time.",
+},
+{
+	.name = "PM_IERAT_MISS",
+	.code = 0x100f6,
+	.short_desc = "IERAT Miss (Not implemented as DI on POWER6)",
+	.long_desc = "A translation request missed the Instruction Effective to Real Address Translation (ERAT) table",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_LMEM",
+	.code = 0x4d052,
+	.short_desc = "Marked PTEG loaded from local memory",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from memory attached to the same module this processor is located on due to a marked load or store.",
+},
+{
+	.name = "PM_FLOP",
+	.code = 0x100f4,
+	.short_desc = "Floating Point Operation Finished",
+	.long_desc = "A floating point operation has completed",
+},
+{
+	.name = "PM_THRD_PRIO_4_5_CYC",
+	.code = 0x40b4,
+	.short_desc = " Cycles thread running at priority level 4 or 5",
+	.long_desc = " Cycles thread running at priority level 4 or 5",
+},
+{
+	.name = "PM_BR_PRED_TA",
+	.code = 0x40aa,
+	.short_desc = "Branch predict - target address",
+	.long_desc = "The target address of a branch instruction was predicted.",
+},
+{
+	.name = "PM_CMPLU_STALL_FXU",
+	.code = 0x20014,
+	.short_desc = "Completion stall caused by FXU instruction",
+	.long_desc = "Following a completion stall (any period when no groups completed) the last instruction to finish before completion resumes was a fixed point instruction.",
+},
+{
+	.name = "PM_EXT_INT",
+	.code = 0x200f8,
+	.short_desc = "external interrupt",
+	.long_desc = "An interrupt due to an external exception occurred",
+},
+{
+	.name = "PM_VSU_FSQRT_FDIV",
+	.code = 0xa888,
+	.short_desc = "four flops operation (fdiv,fsqrt) Scalar Instructions only!",
+	.long_desc = "DP vector versions of fdiv,fsqrt ",
+},
+{
+	.name = "PM_MRK_LD_MISS_EXPOSED_CYC",
+	.code = 0x1003e,
+	.short_desc = "Marked Load exposed Miss ",
+	.long_desc = "Marked Load exposed Miss ",
+},
+{
+	.name = "PM_LSU1_LDF",
+	.code = 0xc086,
+	.short_desc = "LS1  Scalar Loads ",
+	.long_desc = "A floating point load was executed by LSU1",
+},
+{
+	.name = "PM_IC_WRITE_ALL",
+	.code = 0x488c,
+	.short_desc = "Icache sectors written, prefetch + demand",
+	.long_desc = "Icache sectors written, prefetch + demand",
+},
+{
+	.name = "PM_LSU0_SRQ_STFWD",
+	.code = 0xc0a0,
+	.short_desc = "LS0 SRQ forwarded data to a load",
+	.long_desc = "Data from a store instruction was forwarded to a load on unit 0.  A load that misses L1 but becomes a store forward is treated as a load miss and it causes the DL1 load miss event to be counted.  It does not go into the LMQ. If a load that hits L1 but becomes a store forward, then it's not treated as a load miss.",
+},
+{
+	.name = "PM_PTEG_FROM_RL2L3_MOD",
+	.code = 0x1c052,
+	.short_desc = "PTEG loaded from remote L2 or L3 modified",
+	.long_desc = "A Page Table Entry was loaded into the ERAT with modified (M) data from an L2  or L3 on a remote module due to a demand load or store.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_SHR",
+	.code = 0x1d04e,
+	.short_desc = "Marked data loaded from another L3 on same chip shared",
+	.long_desc = "Marked data loaded from another L3 on same chip shared",
+},
+{
+	.name = "PM_DATA_FROM_L21_MOD",
+	.code = 0x3c046,
+	.short_desc = "Data loaded from another L2 on same chip modified",
+	.long_desc = "Data loaded from another L2 on same chip modified",
+},
+{
+	.name = "PM_VSU1_SCAL_DOUBLE_ISSUED",
+	.code = 0xb08a,
+	.short_desc = "Double Precision scalar instruction issued on Pipe1",
+	.long_desc = "Double Precision scalar instruction issued on Pipe1",
+},
+{
+	.name = "PM_VSU0_8FLOP",
+	.code = 0xa0a0,
+	.short_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub) ",
+	.long_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub) ",
+},
+{
+	.name = "PM_POWER_EVENT1",
+	.code = 0x1006e,
+	.short_desc = "Power Management Event 1",
+	.long_desc = "Power Management Event 1",
+},
+{
+	.name = "PM_DISP_CLB_HELD_BAL",
+	.code = 0x2092,
+	.short_desc = "Dispatch/CLB Hold: Balance",
+	.long_desc = "Dispatch/CLB Hold: Balance",
+},
+{
+	.name = "PM_VSU1_2FLOP",
+	.code = 0xa09a,
+	.short_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+	.long_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_LWSYNC_HELD",
+	.code = 0x209a,
+	.short_desc = "LWSYNC held at dispatch",
+	.long_desc = "Cycles a LWSYNC instruction was held at dispatch. LWSYNC instructions are held at dispatch until all previous loads are done and all previous stores have issued. LWSYNC enters the Store Request Queue and is sent to the storage subsystem but does not wait for a response.",
+},
+{
+	.name = "PM_PTEG_FROM_DL2L3_SHR",
+	.code = 0x3c054,
+	.short_desc = "PTEG loaded from remote L2 or L3 shared",
+	.long_desc = "A Page Table Entry was loaded into the ERAT with shared (T or SL) data from an L2 or L3 on a remote module due to a demand load or store.",
+},
+{
+	.name = "PM_INST_FROM_L21_MOD",
+	.code = 0x34046,
+	.short_desc = "Instruction fetched from another L2 on same chip modified",
+	.long_desc = "Instruction fetched from another L2 on same chip modified",
+},
+{
+	.name = "PM_IERAT_XLATE_WR_16MPLUS",
+	.code = 0x40bc,
+	.short_desc = "large page 16M+",
+	.long_desc = "large page 16M+",
+},
+{
+	.name = "PM_IC_REQ_ALL",
+	.code = 0x4888,
+	.short_desc = "Icache requests, prefetch + demand",
+	.long_desc = "Icache requests, prefetch + demand",
+},
+{
+	.name = "PM_DSLB_MISS",
+	.code = 0xd090,
+	.short_desc = "Data SLB Miss - Total of all segment sizes",
+	.long_desc = "A SLB miss for a data request occurred. SLB misses trap to the operating system to resolve.",
+},
+{
+	.name = "PM_L3_MISS",
+	.code = 0x1f082,
+	.short_desc = "L3 Misses ",
+	.long_desc = "L3 Misses ",
+},
+{
+	.name = "PM_LSU0_L1_PREF",
+	.code = 0xd0b8,
+	.short_desc = " LS0 L1 cache data prefetches",
+	.long_desc = " LS0 L1 cache data prefetches",
+},
+{
+	.name = "PM_VSU_SCALAR_SINGLE_ISSUED",
+	.code = 0xb884,
+	.short_desc = "Single Precision scalar instruction issued on Pipe0",
+	.long_desc = "Single Precision scalar instruction issued on Pipe0",
+},
+{
+	.name = "PM_LSU1_DC_PREF_STREAM_CONFIRM_STRIDE",
+	.code = 0xd0be,
+	.short_desc = "LS1  Dcache Strided prefetch stream confirmed",
+	.long_desc = "LS1  Dcache Strided prefetch stream confirmed",
+},
+{
+	.name = "PM_L2_INST",
+	.code = 0x36080,
+	.short_desc = "Instruction Load Count",
+	.long_desc = "Instruction Load Count",
+},
+{
+	.name = "PM_VSU0_FRSP",
+	.code = 0xa0b4,
+	.short_desc = "Round to single precision instruction executed",
+	.long_desc = "Round to single precision instruction executed",
+},
+{
+	.name = "PM_FLUSH_DISP",
+	.code = 0x2082,
+	.short_desc = "Dispatch flush",
+	.long_desc = "Dispatch flush",
+},
+{
+	.name = "PM_PTEG_FROM_L2MISS",
+	.code = 0x4c058,
+	.short_desc = "PTEG loaded from L2 miss",
+	.long_desc = "A Page Table Entry was loaded into the TLB but not from the local L2.",
+},
+{
+	.name = "PM_VSU1_DQ_ISSUED",
+	.code = 0xb09a,
+	.short_desc = "128BIT Decimal Issued on Pipe1",
+	.long_desc = "128BIT Decimal Issued on Pipe1",
+},
+{
+	.name = "PM_CMPLU_STALL_LSU",
+	.code = 0x20012,
+	.short_desc = "Completion stall caused by LSU instruction",
+	.long_desc = "Following a completion stall (any period when no groups completed) the last instruction to finish before completion resumes was a load/store instruction.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DMEM",
+	.code = 0x1d04a,
+	.short_desc = "Marked data loaded from distant memory",
+	.long_desc = "The processor's Data Cache was reloaded with data from memory attached to a distant module due to a marked load.",
+},
+{
+	.name = "PM_LSU_FLUSH_ULD",
+	.code = 0xc8b0,
+	.short_desc = "Flush: Unaligned Load",
+	.long_desc = "A load was flushed because it was unaligned (crossed a 64byte boundary, or 32 byte if it missed the L1).  Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_PTEG_FROM_LMEM",
+	.code = 0x4c052,
+	.short_desc = "PTEG loaded from local memory",
+	.long_desc = "A Page Table Entry was loaded into the TLB from memory attached to the same module this processor is located on.",
+},
+{
+	.name = "PM_MRK_DERAT_MISS_16M",
+	.code = 0x3d05c,
+	.short_desc = "Marked DERAT misses for 16M page",
+	.long_desc = "A marked data request (load or store) missed the ERAT for 16M page and resulted in an ERAT reload.",
+},
+{
+	.name = "PM_THRD_ALL_RUN_CYC",
+	.code = 0x2000c,
+	.short_desc = "All Threads in run_cycles",
+	.long_desc = "Cycles when all threads had their run latches set. Operating systems use the run latch to indicate when they are doing useful work.",
+},
+{
+	.name = "PM_MEM0_PREFETCH_DISP",
+	.code = 0x20083,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair1 Bit1",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair1 Bit1",
+},
+{
+	.name = "PM_MRK_STALL_CMPLU_CYC_COUNT",
+	.code = 0x3003f,
+	.short_desc = "Marked Group Completion Stall cycles (use edge detect to count #)",
+	.long_desc = "Marked Group Completion Stall cycles (use edge detect to count #)",
+},
+{
+	.name = "PM_DATA_FROM_DL2L3_MOD",
+	.code = 0x3c04c,
+	.short_desc = "Data loaded from distant L2 or L3 modified",
+	.long_desc = "The processor's Data Cache was reloaded with modified (M) data from an L2  or L3 on a distant module due to a demand load",
+},
+{
+	.name = "PM_VSU_FRSP",
+	.code = 0xa8b4,
+	.short_desc = "Round to single precision instruction executed",
+	.long_desc = "Round to single precision instruction executed",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L21_MOD",
+	.code = 0x3d046,
+	.short_desc = "Marked data loaded from another L2 on same chip modified",
+	.long_desc = "Marked data loaded from another L2 on same chip modified",
+},
+{
+	.name = "PM_PMC1_OVERFLOW",
+	.code = 0x20010,
+	.short_desc = "Overflow from counter 1",
+	.long_desc = "Overflows from PMC1 are counted.  This effectively widens the PMC. The Overflow from the original PMC will not trigger an exception even if the PMU is configured to generate exceptions on overflow.",
+},
+{
+	.name = "PM_VSU0_SINGLE",
+	.code = 0xa0a8,
+	.short_desc = "FPU single precision",
+	.long_desc = "VSU0 executed single precision instruction",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_L3MISS",
+	.code = 0x2d058,
+	.short_desc = "Marked PTEG loaded from L3 miss",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from beyond the L3 due to a marked load or store",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_L31_SHR",
+	.code = 0x2d056,
+	.short_desc = "Marked PTEG loaded from another L3 on same chip shared",
+	.long_desc = "Marked PTEG loaded from another L3 on same chip shared",
+},
+{
+	.name = "PM_VSU0_VECTOR_SP_ISSUED",
+	.code = 0xb090,
+	.short_desc = "Single Precision vector instruction issued (executed)",
+	.long_desc = "Single Precision vector instruction issued (executed)",
+},
+{
+	.name = "PM_VSU1_FEST",
+	.code = 0xa0ba,
+	.short_desc = "Estimate instruction executed",
+	.long_desc = "Estimate instruction executed",
+},
+{
+	.name = "PM_MRK_INST_DISP",
+	.code = 0x20030,
+	.short_desc = "marked instruction dispatch",
+	.long_desc = "A marked instruction was dispatched",
+},
+{
+	.name = "PM_VSU0_COMPLEX_ISSUED",
+	.code = 0xb096,
+	.short_desc = "Complex VMX instruction issued",
+	.long_desc = "Complex VMX instruction issued",
+},
+{
+	.name = "PM_LSU1_FLUSH_UST",
+	.code = 0xc0b6,
+	.short_desc = "LS1 Flush: Unaligned Store",
+	.long_desc = "A store was flushed from unit 1 because it was unaligned (crossed a 4K boundary)",
+},
+{
+	.name = "PM_INST_CMPL",
+	.code = 0x2,
+	.short_desc = "# PPC Instructions Finished",
+	.long_desc = "Number of PowerPC Instructions that completed.",
+},
+{
+	.name = "PM_FXU_IDLE",
+	.code = 0x1000e,
+	.short_desc = "fxu0 idle and fxu1 idle",
+	.long_desc = "FXU0 and FXU1 are both idle.",
+},
+{
+	.name = "PM_LSU0_FLUSH_ULD",
+	.code = 0xc0b0,
+	.short_desc = "LS0 Flush: Unaligned Load",
+	.long_desc = "A load was flushed from unit 0 because it was unaligned (crossed a 64 byte boundary, or 32 byte if it missed the L1)",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL2L3_MOD",
+	.code = 0x3d04c,
+	.short_desc = "Marked data loaded from distant L2 or L3 modified",
+	.long_desc = "The processor's Data Cache was reloaded with modified (M) data from an L2  or L3 on a distant module due to a marked load.",
+},
+{
+	.name = "PM_LSU_LMQ_SRQ_EMPTY_ALL_CYC",
+	.code = 0x3001c,
+	.short_desc = "ALL threads lsu empty (lmq and srq empty)",
+	.long_desc = "ALL threads lsu empty (lmq and srq empty)",
+},
+{
+	.name = "PM_LSU1_REJECT_LMQ_FULL",
+	.code = 0xc0a6,
+	.short_desc = "LS1 Reject: LMQ Full (LHR)",
+	.long_desc = "Total cycles the Load Store Unit 1 is busy rejecting instructions because the Load Miss Queue was full. The LMQ has eight entries.  If all eight entries are full, subsequent load instructions are rejected.",
+},
+{
+	.name = "PM_INST_PTEG_FROM_L21_MOD",
+	.code = 0x3e056,
+	.short_desc = "Instruction PTEG loaded from another L2 on same chip modified",
+	.long_desc = "Instruction PTEG loaded from another L2 on same chip modified",
+},
+{
+	.name = "PM_INST_FROM_RL2L3_MOD",
+	.code = 0x14042,
+	.short_desc = "Instruction fetched from remote L2 or L3 modified",
+	.long_desc = "An instruction fetch group was fetched with modified  (M) data from an L2 or L3 on a remote module. Fetch groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_SHL_CREATED",
+	.code = 0x5082,
+	.short_desc = "SHL table entry Created",
+	.long_desc = "SHL table entry Created",
+},
+{
+	.name = "PM_L2_ST_HIT",
+	.code = 0x46182,
+	.short_desc = "All successful store dispatches that were L2Hits",
+	.long_desc = "A store request hit in the L2 directory.  This event includes all requests to this L2 from all sources. Total for all slices.",
+},
+{
+	.name = "PM_DATA_FROM_DMEM",
+	.code = 0x1c04a,
+	.short_desc = "Data loaded from distant memory",
+	.long_desc = "The processor's Data Cache was reloaded with data from memory attached to a distant module due to a demand load",
+},
+{
+	.name = "PM_L3_LD_MISS",
+	.code = 0x2f082,
+	.short_desc = "L3 demand LD Miss",
+	.long_desc = "L3 demand LD Miss",
+},
+{
+	.name = "PM_FXU1_BUSY_FXU0_IDLE",
+	.code = 0x4000e,
+	.short_desc = "fxu0 idle and fxu1 busy. ",
+	.long_desc = "FXU0 was idle while FXU1 was busy",
+},
+{
+	.name = "PM_DISP_CLB_HELD_RES",
+	.code = 0x2094,
+	.short_desc = "Dispatch/CLB Hold: Resource",
+	.long_desc = "Dispatch/CLB Hold: Resource",
+},
+{
+	.name = "PM_L2_SN_SX_I_DONE",
+	.code = 0x36382,
+	.short_desc = "SNP dispatched and went from Sx or Tx to Ix",
+	.long_desc = "SNP dispatched and went from Sx or Tx to Ix",
+},
+{
+	.name = "PM_GRP_CMPL",
+	.code = 0x30004,
+	.short_desc = "group completed",
+	.long_desc = "A group completed. Microcoded instructions that span multiple groups will generate this event once per group.",
+},
+{
+	.name = "PM_STCX_CMPL",
+	.code = 0xc098,
+	.short_desc = "STCX executed",
+	.long_desc = "Conditional stores with reservation completed",
+},
+{
+	.name = "PM_VSU0_2FLOP",
+	.code = 0xa098,
+	.short_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+	.long_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_L3_PREF_MISS",
+	.code = 0x3f082,
+	.short_desc = "L3 Prefetch  Directory Miss",
+	.long_desc = "L3 Prefetch  Directory Miss",
+},
+{
+	.name = "PM_LSU_SRQ_SYNC_CYC",
+	.code = 0xd096,
+	.short_desc = "A sync is in the SRQ",
+	.long_desc = "Cycles that a sync instruction is active in the Store Request Queue.",
+},
+{
+	.name = "PM_LSU_REJECT_ERAT_MISS",
+	.code = 0x20064,
+	.short_desc = "LSU Reject due to ERAT (up to 2 per cycle)",
+	.long_desc = "Total cycles the Load Store Unit is busy rejecting instructions due to an ERAT miss. Combined unit 0 + 1. Requests that miss the Derat are rejected and retried until the request hits in the Erat.",
+},
+{
+	.name = "PM_L1_ICACHE_MISS",
+	.code = 0x200fc,
+	.short_desc = "Demand iCache Miss",
+	.long_desc = "An instruction fetch request missed the L1 cache.",
+},
+{
+	.name = "PM_LSU1_FLUSH_SRQ",
+	.code = 0xc0be,
+	.short_desc = "LS1 Flush: SRQ",
+	.long_desc = "Load Hit Store flush.  A younger load was flushed from unit 1 because it hits (overlaps) an older store that is already in the SRQ or in the same group.  If the real addresses match but the effective addresses do not, an alias condition exists that prevents store forwarding.  If the load and store are in the same group the load must be flushed to separate the two instructions. ",
+},
+{
+	.name = "PM_LD_REF_L1_LSU0",
+	.code = 0xc080,
+	.short_desc = "LS0 L1 D cache load references counted at finish",
+	.long_desc = "Load references to Level 1 Data Cache, by unit 0.",
+},
+{
+	.name = "PM_VSU0_FEST",
+	.code = 0xa0b8,
+	.short_desc = "Estimate instruction executed",
+	.long_desc = "Estimate instruction executed",
+},
+{
+	.name = "PM_VSU_VECTOR_SINGLE_ISSUED",
+	.code = 0xb890,
+	.short_desc = "Single Precision vector instruction issued (executed)",
+	.long_desc = "Single Precision vector instruction issued (executed)",
+},
+{
+	.name = "PM_FREQ_UP",
+	.code = 0x4000c,
+	.short_desc = "Power Management: Above Threshold A",
+	.long_desc = "Processor frequency was sped up due to power management",
+},
+{
+	.name = "PM_DATA_FROM_LMEM",
+	.code = 0x3c04a,
+	.short_desc = "Data loaded from local memory",
+	.long_desc = "The processor's Data Cache was reloaded from memory attached to the same module this processor is located on.",
+},
+{
+	.name = "PM_LSU1_LDX",
+	.code = 0xc08a,
+	.short_desc = "LS1  Vector Loads",
+	.long_desc = "LS1  Vector Loads",
+},
+{
+	.name = "PM_PMC3_OVERFLOW",
+	.code = 0x40010,
+	.short_desc = "Overflow from counter 3",
+	.long_desc = "Overflows from PMC3 are counted.  This effectively widens the PMC. The Overflow from the original PMC will not trigger an exception even if the PMU is configured to generate exceptions on overflow.",
+},
+{
+	.name = "PM_MRK_BR_MPRED",
+	.code = 0x30036,
+	.short_desc = "Marked Branch Mispredicted",
+	.long_desc = "A marked branch was mispredicted",
+},
+{
+	.name = "PM_SHL_MATCH",
+	.code = 0x5086,
+	.short_desc = "SHL Table Match",
+	.long_desc = "SHL Table Match",
+},
+{
+	.name = "PM_MRK_BR_TAKEN",
+	.code = 0x10036,
+	.short_desc = "Marked Branch Taken",
+	.long_desc = "A marked branch was taken",
+},
+{
+	.name = "PM_CMPLU_STALL_BRU",
+	.code = 0x4004e,
+	.short_desc = "Completion stall due to BRU",
+	.long_desc = "Completion stall due to BRU",
+},
+{
+	.name = "PM_ISLB_MISS",
+	.code = 0xd092,
+	.short_desc = "Instruction SLB Miss - Total of all segment sizes",
+	.long_desc = "A SLB miss for an instruction fetch has occurred",
+},
+{
+	.name = "PM_CYC",
+	.code = 0x1e,
+	.short_desc = "Cycles",
+	.long_desc = "Processor Cycles",
+},
+{
+	.name = "PM_DISP_HELD_THERMAL",
+	.code = 0x30006,
+	.short_desc = "Dispatch Held due to Thermal",
+	.long_desc = "Dispatch Held due to Thermal",
+},
+{
+	.name = "PM_INST_PTEG_FROM_RL2L3_SHR",
+	.code = 0x2e054,
+	.short_desc = "Instruction PTEG loaded from remote L2 or L3 shared",
+	.long_desc = "Instruction PTEG loaded from remote L2 or L3 shared",
+},
+{
+	.name = "PM_LSU1_SRQ_STFWD",
+	.code = 0xc0a2,
+	.short_desc = "LS1 SRQ forwarded data to a load",
+	.long_desc = "Data from a store instruction was forwarded to a load on unit 1.  A load that misses L1 but becomes a store forward is treated as a load miss and it causes the DL1 load miss event to be counted.  It does not go into the LMQ. If a load that hits L1 but becomes a store forward, then it's not treated as a load miss.",
+},
+{
+	.name = "PM_GCT_NOSLOT_BR_MPRED",
+	.code = 0x4001a,
+	.short_desc = "GCT empty by branch  mispredict",
+	.long_desc = "Cycles when the Global Completion Table has no slots from this thread because of a branch misprediction.",
+},
+{
+	.name = "PM_1PLUS_PPC_CMPL",
+	.code = 0x100f2,
+	.short_desc = "1 or more ppc  insts finished",
+	.long_desc = "A group containing at least one PPC instruction completed. For microcoded instructions that span multiple groups, this will only occur once.",
+},
+{
+	.name = "PM_PTEG_FROM_DMEM",
+	.code = 0x2c052,
+	.short_desc = "PTEG loaded from distant memory",
+	.long_desc = "A Page Table Entry was loaded into the ERAT with data from memory attached to a distant module due to a demand load or store.",
+},
+{
+	.name = "PM_VSU_2FLOP",
+	.code = 0xa898,
+	.short_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+	.long_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_GCT_FULL_CYC",
+	.code = 0x4086,
+	.short_desc = "Cycles No room in EAT",
+	.long_desc = "The Global Completion Table is completely full.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3_CYC",
+	.code = 0x40020,
+	.short_desc = "Marked ld latency Data source 0001 (L3)",
+	.long_desc = "Cycles a marked load waited for data from this level of the storage system.  Counting begins when a marked load misses the data cache and ends when the data is reloaded into the data cache.  To calculate average latency divide this count by the number of marked misses to the same level.",
+},
+{
+	.name = "PM_LSU_SRQ_S0_ALLOC",
+	.code = 0xd09d,
+	.short_desc = "Slot 0 of SRQ valid",
+	.long_desc = "Slot 0 of SRQ valid",
+},
+{
+	.name = "PM_MRK_DERAT_MISS_4K",
+	.code = 0x1d05c,
+	.short_desc = "Marked DERAT misses for 4K page",
+	.long_desc = "A marked data request (load or store) missed the ERAT for 4K page and resulted in an ERAT reload.",
+},
+{
+	.name = "PM_BR_MPRED_TA",
+	.code = 0x40ae,
+	.short_desc = "Branch mispredict - target address",
+	.long_desc = "A branch instruction target was incorrectly predicted. This will result in a branch mispredict flush unless a flush is detected from an older instruction.",
+},
+{
+	.name = "PM_INST_PTEG_FROM_L2MISS",
+	.code = 0x4e058,
+	.short_desc = "Instruction PTEG loaded from L2 miss",
+	.long_desc = "Instruction PTEG loaded from L2 miss",
+},
+{
+	.name = "PM_DPU_HELD_POWER",
+	.code = 0x20006,
+	.short_desc = "Dispatch Held due to Power Management",
+	.long_desc = "Cycles that Instruction Dispatch was held due to power management. More than one hold condition can exist at the same time",
+},
+{
+	.name = "PM_RUN_INST_CMPL",
+	.code = 0x400fa,
+	.short_desc = "Run_Instructions",
+	.long_desc = "Number of run instructions completed. ",
+},
+{
+	.name = "PM_MRK_VSU_FIN",
+	.code = 0x30032,
+	.short_desc = "vsu (fpu) marked  instr finish",
+	.long_desc = "vsu (fpu) marked  instr finish",
+},
+{
+	.name = "PM_LSU_SRQ_S0_VALID",
+	.code = 0xd09c,
+	.short_desc = "Slot 0 of SRQ valid",
+	.long_desc = "This signal is asserted every cycle that the Store Request Queue slot zero is valid. The SRQ is 32 entries long and is allocated round-robin.  In SMT mode the SRQ is split between the two threads (16 entries each).",
+},
+{
+	.name = "PM_GCT_EMPTY_CYC",
+	.code = 0x20008,
+	.short_desc = "GCT empty, all threads",
+	.long_desc = "Cycles when the Global Completion Table was completely empty.  No thread had an entry allocated.",
+},
+{
+	.name = "PM_IOPS_DISP",
+	.code = 0x30014,
+	.short_desc = "IOPS dispatched",
+	.long_desc = "IOPS dispatched",
+},
+{
+	.name = "PM_RUN_SPURR",
+	.code = 0x10008,
+	.short_desc = "Run SPURR",
+	.long_desc = "Run SPURR",
+},
+{
+	.name = "PM_PTEG_FROM_L21_MOD",
+	.code = 0x3c056,
+	.short_desc = "PTEG loaded from another L2 on same chip modified",
+	.long_desc = "PTEG loaded from another L2 on same chip modified",
+},
+{
+	.name = "PM_VSU0_1FLOP",
+	.code = 0xa080,
+	.short_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg, xsadd, xsmul, xssub, xscmp, xssel, xsabs, xsnabs, xsre, xssqrte, xsneg) operation finished",
+	.long_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg, xsadd, xsmul, xssub, xscmp, xssel, xsabs, xsnabs, xsre, xssqrte, xsneg) operation finished",
+},
+{
+	.name = "PM_SNOOP_TLBIE",
+	.code = 0xd0b2,
+	.short_desc = "TLBIE snoop",
+	.long_desc = "A tlbie was snooped from another processor.",
+},
+{
+	.name = "PM_DATA_FROM_L3MISS",
+	.code = 0x2c048,
+	.short_desc = "Demand LD - L3 Miss (not L2 hit and not L3 hit)",
+	.long_desc = "The processor's Data Cache was reloaded from beyond L3 due to a demand load",
+},
+{
+	.name = "PM_VSU_SINGLE",
+	.code = 0xa8a8,
+	.short_desc = "Vector or Scalar single precision",
+	.long_desc = "Vector or Scalar single precision",
+},
+{
+	.name = "PM_DTLB_MISS_16G",
+	.code = 0x1c05e,
+	.short_desc = "Data TLB miss for 16G page",
+	.long_desc = "Data TLB references to 16GB pages that missed the TLB. Page size is determined at TLB reload time.",
+},
+{
+	.name = "PM_CMPLU_STALL_VECTOR",
+	.code = 0x2001c,
+	.short_desc = "Completion stall caused by Vector instruction",
+	.long_desc = "Completion stall caused by Vector instruction",
+},
+{
+	.name = "PM_FLUSH",
+	.code = 0x400f8,
+	.short_desc = "Flush (any type)",
+	.long_desc = "Flushes occurred including LSU and Branch flushes.",
+},
+{
+	.name = "PM_L2_LD_HIT",
+	.code = 0x36182,
+	.short_desc = "All successful load dispatches that were L2 hits",
+	.long_desc = "A load request (data or instruction) hit in the L2 directory.  Includes speculative, prefetched, and demand requests.  This event includes all requests to this L2 from all sources.  Total for all slices",
+},
+{
+	.name = "PM_NEST_PAIR2_AND",
+	.code = 0x30883,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair2 AND",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair2 AND",
+},
+{
+	.name = "PM_VSU1_1FLOP",
+	.code = 0xa082,
+	.short_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg, xsadd, xsmul, xssub, xscmp, xssel, xsabs, xsnabs, xsre, xssqrte, xsneg) operation finished",
+	.long_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg, xsadd, xsmul, xssub, xscmp, xssel, xsabs, xsnabs, xsre, xssqrte, xsneg) operation finished",
+},
+{
+	.name = "PM_IC_PREF_REQ",
+	.code = 0x408a,
+	.short_desc = "Instruction prefetch requests",
+	.long_desc = "An instruction prefetch request has been made.",
+},
+{
+	.name = "PM_L3_LD_HIT",
+	.code = 0x2f080,
+	.short_desc = "L3 demand LD Hits",
+	.long_desc = "L3 demand LD Hits",
+},
+{
+	.name = "PM_GCT_NOSLOT_IC_MISS",
+	.code = 0x2001a,
+	.short_desc = "GCT empty by I cache miss",
+	.long_desc = "Cycles when the Global Completion Table has no slots from this thread because of an Instruction Cache miss.",
+},
+{
+	.name = "PM_DISP_HELD",
+	.code = 0x10006,
+	.short_desc = "Dispatch Held",
+	.long_desc = "Dispatch Held",
+},
+{
+	.name = "PM_L2_LD",
+	.code = 0x16080,
+	.short_desc = "Data Load Count",
+	.long_desc = "Data Load Count",
+},
+{
+	.name = "PM_LSU_FLUSH_SRQ",
+	.code = 0xc8bc,
+	.short_desc = "Flush: SRQ",
+	.long_desc = "Load Hit Store flush.  A younger load was flushed because it hits (overlaps) an older store that is already in the SRQ or in the same group.  If the real addresses match but the effective addresses do not, an alias condition exists that prevents store forwarding.  If the load and store are in the same group the load must be flushed to separate the two instructions.  Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_BC_PLUS_8_CONV",
+	.code = 0x40b8,
+	.short_desc = "BC+8 Converted",
+	.long_desc = "BC+8 Converted",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_MOD_CYC",
+	.code = 0x40026,
+	.short_desc = "Marked ld latency Data source 0111  (L3.1 M same chip)",
+	.long_desc = "Marked ld latency Data source 0111  (L3.1 M same chip)",
+},
+{
+	.name = "PM_CMPLU_STALL_VECTOR_LONG",
+	.code = 0x4004a,
+	.short_desc = "completion stall due to long latency vector instruction",
+	.long_desc = "completion stall due to long latency vector instruction",
+},
+{
+	.name = "PM_L2_RCST_BUSY_RC_FULL",
+	.code = 0x26282,
+	.short_desc = " L2  activated Busy to the core for stores due to all RC full",
+	.long_desc = " L2  activated Busy to the core for stores due to all RC full",
+},
+{
+	.name = "PM_TB_BIT_TRANS",
+	.code = 0x300f8,
+	.short_desc = "Time Base bit transition",
+	.long_desc = "When the selected time base bit (as specified in MMCR0[TBSEL]) transitions from 0 to 1 ",
+},
+{
+	.name = "PM_THERMAL_MAX",
+	.code = 0x40006,
+	.short_desc = "Processor In Thermal MAX",
+	.long_desc = "The processor experienced a thermal overload condition. This bit is sticky, it remains set until cleared by software.",
+},
+{
+	.name = "PM_LSU1_FLUSH_ULD",
+	.code = 0xc0b2,
+	.short_desc = "LS 1 Flush: Unaligned Load",
+	.long_desc = "A load was flushed from unit 1 because it was unaligned (crossed a 64 byte boundary, or 32 byte if it missed the L1).",
+},
+{
+	.name = "PM_LSU1_REJECT_LHS",
+	.code = 0xc0ae,
+	.short_desc = "LS1  Reject: Load Hit Store",
+	.long_desc = "Load Store Unit 1 rejected a load instruction that had an address overlap with an older store in the store queue. The store must be committed and de-allocated from the Store Queue before the load can execute successfully.",
+},
+{
+	.name = "PM_LSU_LRQ_S0_ALLOC",
+	.code = 0xd09f,
+	.short_desc = "Slot 0 of LRQ valid",
+	.long_desc = "Slot 0 of LRQ valid",
+},
+{
+	.name = "PM_L3_CO_L31",
+	.code = 0x4f080,
+	.short_desc = "L3 Castouts to Memory",
+	.long_desc = "L3 Castouts to Memory",
+},
+{
+	.name = "PM_POWER_EVENT4",
+	.code = 0x4006e,
+	.short_desc = "Power Management Event 4",
+	.long_desc = "Power Management Event 4",
+},
+{
+	.name = "PM_DATA_FROM_L31_SHR",
+	.code = 0x1c04e,
+	.short_desc = "Data loaded from another L3 on same chip shared",
+	.long_desc = "Data loaded from another L3 on same chip shared",
+},
+{
+	.name = "PM_BR_UNCOND",
+	.code = 0x409e,
+	.short_desc = "Unconditional Branch",
+	.long_desc = "An unconditional branch was executed.",
+},
+{
+	.name = "PM_LSU1_DC_PREF_STREAM_ALLOC",
+	.code = 0xd0aa,
+	.short_desc = "LS 1 D cache new prefetch stream allocated",
+	.long_desc = "LS 1 D cache new prefetch stream allocated",
+},
+{
+	.name = "PM_PMC4_REWIND",
+	.code = 0x10020,
+	.short_desc = "PMC4 Rewind Event",
+	.long_desc = "PMC4 was counting speculatively. The speculative condition was not met and the counter was restored to its previous value.",
+},
+{
+	.name = "PM_L2_RCLD_DISP",
+	.code = 0x16280,
+	.short_desc = " L2  RC load dispatch attempt",
+	.long_desc = " L2  RC load dispatch attempt",
+},
+{
+	.name = "PM_THRD_PRIO_2_3_CYC",
+	.code = 0x40b2,
+	.short_desc = " Cycles thread running at priority level 2 or 3",
+	.long_desc = " Cycles thread running at priority level 2 or 3",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_L2MISS",
+	.code = 0x4d058,
+	.short_desc = "Marked PTEG loaded from L2 miss",
+	.long_desc = "A Page Table Entry was loaded into the ERAT but not from the local L2 due to a marked load or store.",
+},
+{
+	.name = "PM_IC_DEMAND_L2_BHT_REDIRECT",
+	.code = 0x4098,
+	.short_desc = " L2 I cache demand request due to BHT redirect",
+	.long_desc = "A demand (not prefetch) miss to the instruction cache was sent to the L2 as a result of a branch prediction redirect (CR mispredict).",
+},
+{
+	.name = "PM_LSU_DERAT_MISS",
+	.code = 0x200f6,
+	.short_desc = "DERAT Reloaded due to a DERAT miss",
+	.long_desc = "Total D-ERAT Misses.  Requests that miss the Derat are rejected and retried until the request hits in the Erat. This may result in multiple erat misses for the same instruction.  Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_IC_PREF_CANCEL_L2",
+	.code = 0x4094,
+	.short_desc = "L2 Squashed request",
+	.long_desc = "L2 Squashed request",
+},
+{
+	.name = "PM_MRK_FIN_STALL_CYC_COUNT",
+	.code = 0x1003d,
+	.short_desc = "Marked instruction Finish Stall cycles (marked finish after NTC) (use edge detect to count #)",
+	.long_desc = "Marked instruction Finish Stall cycles (marked finish after NTC) (use edge detect to count #)",
+},
+{
+	.name = "PM_BR_PRED_CCACHE",
+	.code = 0x40a0,
+	.short_desc = "Count Cache Predictions",
+	.long_desc = "The count value of a Branch and Count instruction was predicted",
+},
+{
+	.name = "PM_GCT_UTIL_1_TO_2_SLOTS",
+	.code = 0x209c,
+	.short_desc = "GCT Utilization 1-2 entries",
+	.long_desc = "GCT Utilization 1-2 entries",
+},
+{
+	.name = "PM_MRK_ST_CMPL_INT",
+	.code = 0x30034,
+	.short_desc = "marked  store complete (data home) with intervention",
+	.long_desc = "A marked store previously sent to the memory subsystem completed (data home) after requiring intervention",
+},
+{
+	.name = "PM_LSU_TWO_TABLEWALK_CYC",
+	.code = 0xd0a6,
+	.short_desc = "Cycles when two tablewalks pending on this thread",
+	.long_desc = "Cycles when two tablewalks pending on this thread",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3MISS",
+	.code = 0x2d048,
+	.short_desc = "Marked data loaded from L3 miss",
+	.long_desc = "DL1 was reloaded from beyond L3 due to a marked load.",
+},
+{
+	.name = "PM_GCT_NOSLOT_CYC",
+	.code = 0x100f8,
+	.short_desc = "No itags assigned ",
+	.long_desc = "Cycles when the Global Completion Table has no slots from this thread.",
+},
+{
+	.name = "PM_LSU_SET_MPRED",
+	.code = 0xc0a8,
+	.short_desc = "Line already in cache at reload time",
+	.long_desc = "Line already in cache at reload time",
+},
+{
+	.name = "PM_FLUSH_DISP_TLBIE",
+	.code = 0x208a,
+	.short_desc = "Dispatch Flush: TLBIE",
+	.long_desc = "Dispatch Flush: TLBIE",
+},
+{
+	.name = "PM_VSU1_FCONV",
+	.code = 0xa0b2,
+	.short_desc = "Convert instruction executed",
+	.long_desc = "Convert instruction executed",
+},
+{
+	.name = "PM_DERAT_MISS_16G",
+	.code = 0x4c05c,
+	.short_desc = "DERAT misses for 16G page",
+	.long_desc = "A data request (load or store) missed the ERAT for 16G page and resulted in an ERAT reload.",
+},
+{
+	.name = "PM_INST_FROM_LMEM",
+	.code = 0x3404a,
+	.short_desc = "Instruction fetched from local memory",
+	.long_desc = "An instruction fetch group was fetched from memory attached to the same module this processor is located on.  Fetch groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_IC_DEMAND_L2_BR_REDIRECT",
+	.code = 0x409a,
+	.short_desc = " L2 I cache demand request due to branch redirect",
+	.long_desc = "A demand (not prefetch) miss to the instruction cache was sent to the L2 as a result of a branch prediction redirect (either ALL mispredicted or Target).",
+},
+{
+	.name = "PM_CMPLU_STALL_SCALAR_LONG",
+	.code = 0x20018,
+	.short_desc = "Completion stall caused by long latency scalar instruction",
+	.long_desc = "Completion stall caused by long latency scalar instruction",
+},
+{
+	.name = "PM_INST_PTEG_FROM_L2",
+	.code = 0x1e050,
+	.short_desc = "Instruction PTEG loaded from L2",
+	.long_desc = "Instruction PTEG loaded from L2",
+},
+{
+	.name = "PM_PTEG_FROM_L2",
+	.code = 0x1c050,
+	.short_desc = "PTEG loaded from L2",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from the local L2 due to a demand load or store.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L21_SHR_CYC",
+	.code = 0x20024,
+	.short_desc = "Marked ld latency Data source 0100 (L2.1 S)",
+	.long_desc = "Marked load latency Data source 0100 (L2.1 S)",
+},
+{
+	.name = "PM_MRK_DTLB_MISS_4K",
+	.code = 0x2d05a,
+	.short_desc = "Marked Data TLB misses for 4K page",
+	.long_desc = "Data TLB references to 4KB pages by a marked instruction that missed the TLB. Page size is determined at TLB reload time.",
+},
+{
+	.name = "PM_VSU0_FPSCR",
+	.code = 0xb09c,
+	.short_desc = "Move to/from FPSCR type instruction issued on Pipe 0",
+	.long_desc = "Move to/from FPSCR type instruction issued on Pipe 0",
+},
+{
+	.name = "PM_VSU1_VECT_DOUBLE_ISSUED",
+	.code = 0xb082,
+	.short_desc = "Double Precision vector instruction issued on Pipe1",
+	.long_desc = "Double Precision vector instruction issued on Pipe1",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_RL2L3_MOD",
+	.code = 0x1d052,
+	.short_desc = "Marked PTEG loaded from remote L2 or L3 modified",
+	.long_desc = "A Page Table Entry was loaded into the ERAT with shared (T or SL) data from an L2 or L3 on a remote module due to a marked load or store.",
+},
+{
+	.name = "PM_MEM0_RQ_DISP",
+	.code = 0x10083,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair0 Bit1",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair0 Bit1",
+},
+{
+	.name = "PM_L2_LD_MISS",
+	.code = 0x26080,
+	.short_desc = "Data Load Miss",
+	.long_desc = "Data Load Miss",
+},
+{
+	.name = "PM_VMX_RESULT_SAT_1",
+	.code = 0xb0a0,
+	.short_desc = "Valid result with sat=1",
+	.long_desc = "Valid result with sat=1",
+},
+{
+	.name = "PM_L1_PREF",
+	.code = 0xd8b8,
+	.short_desc = "L1 Prefetches",
+	.long_desc = "A request to prefetch data into the L1 was made",
+},
+{
+	.name = "PM_MRK_DATA_FROM_LMEM_CYC",
+	.code = 0x2002c,
+	.short_desc = "Marked ld latency Data Source 1100 (Local Memory)",
+	.long_desc = "Cycles a marked load waited for data from this level of the storage system.  Counting begins when a marked load misses the data cache and ends when the data is reloaded into the data cache.  To calculate average latency divide this count by the number of marked misses to the same level.",
+},
+{
+	.name = "PM_GRP_IC_MISS_NONSPEC",
+	.code = 0x1000c,
+	.short_desc = "Group experienced non-speculative I cache miss",
+	.long_desc = "Number of groups, counted at completion, that have encountered an instruction cache miss.",
+},
+{
+	.name = "PM_PB_NODE_PUMP",
+	.code = 0x10081,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair0 Bit0",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair0 Bit0",
+},
+{
+	.name = "PM_SHL_MERGED",
+	.code = 0x5084,
+	.short_desc = "SHL table entry merged with existing",
+	.long_desc = "SHL table entry merged with existing",
+},
+{
+	.name = "PM_NEST_PAIR1_ADD",
+	.code = 0x20881,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair1 ADD",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair1 ADD",
+},
+{
+	.name = "PM_DATA_FROM_L3",
+	.code = 0x1c048,
+	.short_desc = "Data loaded from L3",
+	.long_desc = "The processor's Data Cache was reloaded from the local L3 due to a demand load.",
+},
+{
+	.name = "PM_LSU_FLUSH",
+	.code = 0x208e,
+	.short_desc = "Flush initiated by LSU",
+	.long_desc = "A flush was initiated by the Load Store Unit.",
+},
+{
+	.name = "PM_LSU_SRQ_SYNC_COUNT",
+	.code = 0xd097,
+	.short_desc = "SRQ sync count (edge of PM_LSU_SRQ_SYNC_CYC)",
+	.long_desc = "SRQ sync count (edge of PM_LSU_SRQ_SYNC_CYC)",
+},
+{
+	.name = "PM_PMC2_OVERFLOW",
+	.code = 0x30010,
+	.short_desc = "Overflow from counter 2",
+	.long_desc = "Overflows from PMC2 are counted.  This effectively widens the PMC. The Overflow from the original PMC will not trigger an exception even if the PMU is configured to generate exceptions on overflow.",
+},
+{
+	.name = "PM_LSU_LDF",
+	.code = 0xc884,
+	.short_desc = "All Scalar Loads",
+	.long_desc = "LSU executed Floating Point load instruction.  Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_POWER_EVENT3",
+	.code = 0x3006e,
+	.short_desc = "Power Management Event 3",
+	.long_desc = "Power Management Event 3",
+},
+{
+	.name = "PM_DISP_WT",
+	.code = 0x30008,
+	.short_desc = "Dispatched Starved (not held, nothing to dispatch)",
+	.long_desc = "Dispatched Starved (not held, nothing to dispatch)",
+},
+{
+	.name = "PM_CMPLU_STALL_REJECT",
+	.code = 0x40016,
+	.short_desc = "Completion stall caused by reject",
+	.long_desc = "Following a completion stall (any period when no groups completed) the last instruction to finish before completion resumes suffered a load/store reject. This is a subset of PM_CMPLU_STALL_LSU.",
+},
+{
+	.name = "PM_IC_BANK_CONFLICT",
+	.code = 0x4082,
+	.short_desc = "Read blocked due to interleave conflict.  ",
+	.long_desc = "Read blocked due to interleave conflict.  ",
+},
+{
+	.name = "PM_BR_MPRED_CR_TA",
+	.code = 0x48ae,
+	.short_desc = "Branch mispredict - taken/not taken and target",
+	.long_desc = "Branch mispredict - taken/not taken and target",
+},
+{
+	.name = "PM_L2_INST_MISS",
+	.code = 0x36082,
+	.short_desc = "Instruction Load Misses",
+	.long_desc = "Instruction Load Misses",
+},
+{
+	.name = "PM_CMPLU_STALL_ERAT_MISS",
+	.code = 0x40018,
+	.short_desc = "Completion stall caused by ERAT miss",
+	.long_desc = "Following a completion stall (any period when no groups completed) the last instruction to finish before completion resumes suffered an ERAT miss. This is a subset of  PM_CMPLU_STALL_REJECT.",
+},
+{
+	.name = "PM_NEST_PAIR2_ADD",
+	.code = 0x30881,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair2 ADD",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair2 ADD",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH",
+	.code = 0xd08c,
+	.short_desc = "Flush: (marked) : All Cases",
+	.long_desc = "Marked flush initiated by LSU",
+},
+{
+	.name = "PM_L2_LDST",
+	.code = 0x16880,
+	.short_desc = "Data Load+Store Count",
+	.long_desc = "Data Load+Store Count",
+},
+{
+	.name = "PM_INST_FROM_L31_SHR",
+	.code = 0x1404e,
+	.short_desc = "Instruction fetched from another L3 on same chip shared",
+	.long_desc = "Instruction fetched from another L3 on same chip shared",
+},
+{
+	.name = "PM_VSU0_FIN",
+	.code = 0xa0bc,
+	.short_desc = "VSU0 Finished an instruction",
+	.long_desc = "VSU0 Finished an instruction",
+},
+{
+	.name = "PM_LARX_LSU",
+	.code = 0xc894,
+	.short_desc = "Larx Finished",
+	.long_desc = "Larx Finished",
+},
+{
+	.name = "PM_INST_FROM_RMEM",
+	.code = 0x34042,
+	.short_desc = "Instruction fetched from remote memory",
+	.long_desc = "An instruction fetch group was fetched from memory attached to a different module than this processor is located on.  Fetch groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_DISP_CLB_HELD_TLBIE",
+	.code = 0x2096,
+	.short_desc = "Dispatch Hold: Due to TLBIE",
+	.long_desc = "Dispatch Hold: Due to TLBIE",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DMEM_CYC",
+	.code = 0x2002e,
+	.short_desc = "Marked ld latency Data Source 1110 (Distant Memory)",
+	.long_desc = "Marked ld latency Data Source 1110 (Distant Memory)",
+},
+{
+	.name = "PM_BR_PRED_CR",
+	.code = 0x40a8,
+	.short_desc = "Branch predict - taken/not taken",
+	.long_desc = "A conditional branch instruction was predicted as taken or not taken.",
+},
+{
+	.name = "PM_LSU_REJECT",
+	.code = 0x10064,
+	.short_desc = "LSU Reject (up to 2 per cycle)",
+	.long_desc = "The Load Store Unit rejected an instruction. Combined Unit 0 + 1",
+},
+{
+	.name = "PM_GCT_UTIL_3_TO_6_SLOTS",
+	.code = 0x209e,
+	.short_desc = "GCT Utilization 3-6 entries",
+	.long_desc = "GCT Utilization 3-6 entries",
+},
+{
+	.name = "PM_CMPLU_STALL_END_GCT_NOSLOT",
+	.code = 0x10028,
+	.short_desc = "Count ended because GCT went empty",
+	.long_desc = "Count ended because GCT went empty",
+},
+{
+	.name = "PM_LSU0_REJECT_LMQ_FULL",
+	.code = 0xc0a4,
+	.short_desc = "LS0 Reject: LMQ Full (LHR)",
+	.long_desc = "Total cycles the Load Store Unit 0 is busy rejecting instructions because the Load Miss Queue was full. The LMQ has eight entries.  If all eight entries are full, subsequent load instructions are rejected.",
+},
+{
+	.name = "PM_VSU_FEST",
+	.code = 0xa8b8,
+	.short_desc = "Estimate instruction executed",
+	.long_desc = "Estimate instruction executed",
+},
+{
+	.name = "PM_NEST_PAIR0_AND",
+	.code = 0x10883,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair0 AND",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair0 AND",
+},
+{
+	.name = "PM_PTEG_FROM_L3",
+	.code = 0x2c050,
+	.short_desc = "PTEG loaded from L3",
+	.long_desc = "A Page Table Entry was loaded into the TLB from the local L3 due to a demand load.",
+},
+{
+	.name = "PM_POWER_EVENT2",
+	.code = 0x2006e,
+	.short_desc = "Power Management Event 2",
+	.long_desc = "Power Management Event 2",
+},
+{
+	.name = "PM_IC_PREF_CANCEL_PAGE",
+	.code = 0x4090,
+	.short_desc = "Prefetch Canceled due to page boundary",
+	.long_desc = "Prefetch Canceled due to page boundary",
+},
+{
+	.name = "PM_VSU0_FSQRT_FDIV",
+	.code = 0xa088,
+	.short_desc = "four flops operation (fdiv,fsqrt,xsdiv,xssqrt) Scalar Instructions only!",
+	.long_desc = "four flops operation (fdiv,fsqrt,xsdiv,xssqrt) Scalar Instructions only!",
+},
+{
+	.name = "PM_MRK_GRP_CMPL",
+	.code = 0x40030,
+	.short_desc = "Marked group complete",
+	.long_desc = "A group containing a sampled instruction completed.  Microcoded instructions that span multiple groups will generate this event once per group.",
+},
+{
+	.name = "PM_VSU0_SCAL_DOUBLE_ISSUED",
+	.code = 0xb088,
+	.short_desc = "Double Precision scalar instruction issued on Pipe0",
+	.long_desc = "Double Precision scalar instruction issued on Pipe0",
+},
+{
+	.name = "PM_GRP_DISP",
+	.code = 0x3000a,
+	.short_desc = "dispatch_success (Group Dispatched)",
+	.long_desc = "A group was dispatched",
+},
+{
+	.name = "PM_LSU0_LDX",
+	.code = 0xc088,
+	.short_desc = "LS0 Vector Loads",
+	.long_desc = "LS0 Vector Loads",
+},
+{
+	.name = "PM_DATA_FROM_L2",
+	.code = 0x1c040,
+	.short_desc = "Data loaded from L2",
+	.long_desc = "The processor's Data Cache was reloaded from the local L2 due to a demand load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL2L3_MOD",
+	.code = 0x1d042,
+	.short_desc = "Marked data loaded from remote L2 or L3 modified",
+	.long_desc = "The processor's Data Cache was reloaded with modified (M) data from an L2  or L3 on a remote module due to a marked load.",
+},
+{
+	.name = "PM_LD_REF_L1",
+	.code = 0xc880,
+	.short_desc = " L1 D cache load references counted at finish",
+	.long_desc = " L1 D cache load references counted at finish",
+},
+{
+	.name = "PM_VSU0_VECT_DOUBLE_ISSUED",
+	.code = 0xb080,
+	.short_desc = "Double Precision vector instruction issued on Pipe0",
+	.long_desc = "Double Precision vector instruction issued on Pipe0",
+},
+{
+	.name = "PM_VSU1_2FLOP_DOUBLE",
+	.code = 0xa08e,
+	.short_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp)  ",
+	.long_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp)  ",
+},
+{
+	.name = "PM_THRD_PRIO_6_7_CYC",
+	.code = 0x40b6,
+	.short_desc = " Cycles thread running at priority level 6 or 7",
+	.long_desc = " Cycles thread running at priority level 6 or 7",
+},
+{
+	.name = "PM_BC_PLUS_8_RSLV_TAKEN",
+	.code = 0x40ba,
+	.short_desc = "BC+8 Resolve outcome was Taken, resulting in the conditional instruction being canceled",
+	.long_desc = "BC+8 Resolve outcome was Taken, resulting in the conditional instruction being canceled",
+},
+{
+	.name = "PM_BR_MPRED_CR",
+	.code = 0x40ac,
+	.short_desc = "Branch mispredict - taken/not taken",
+	.long_desc = "A conditional branch instruction was incorrectly predicted as taken or not taken.  The branch execution unit detects a branch mispredict because the CR value is opposite of the predicted value. This will result in a branch redirect flush if not overridden by a flush of an older instruction.",
+},
+{
+	.name = "PM_L3_CO_MEM",
+	.code = 0x4f082,
+	.short_desc = "L3 Castouts to L3.1",
+	.long_desc = "L3 Castouts to L3.1",
+},
+{
+	.name = "PM_LD_MISS_L1",
+	.code = 0x400f0,
+	.short_desc = "Load Missed L1",
+	.long_desc = "Load references that miss the Level 1 Data cache. Combined unit 0 + 1.",
+},
+{
+	.name = "PM_DATA_FROM_RL2L3_MOD",
+	.code = 0x1c042,
+	.short_desc = "Data loaded from remote L2 or L3 modified",
+	.long_desc = "The processor's Data Cache was reloaded with modified (M) data from an L2  or L3 on a remote module due to a demand load",
+},
+{
+	.name = "PM_LSU_SRQ_FULL_CYC",
+	.code = 0x1001a,
+	.short_desc = "Storage Queue is full and is blocking dispatch",
+	.long_desc = "Cycles the Store Request Queue is full.",
+},
+{
+	.name = "PM_TABLEWALK_CYC",
+	.code = 0x10026,
+	.short_desc = "Cycles when a tablewalk (I or D) is active",
+	.long_desc = "Cycles doing instruction or data tablewalks",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_RMEM",
+	.code = 0x3d052,
+	.short_desc = "Marked PTEG loaded from remote memory",
+	.long_desc = "A Page Table Entry was loaded into the ERAT. POWER6 does not have a TLB",
+},
+{
+	.name = "PM_LSU_SRQ_STFWD",
+	.code = 0xc8a0,
+	.short_desc = "Load got data from a store",
+	.long_desc = "Data from a store instruction was forwarded to a load.  A load that misses L1 but becomes a store forward is treated as a load miss and it causes the DL1 load miss event to be counted.  It does not go into the LMQ. If a load that hits L1 but becomes a store forward, then it's not treated as a load miss. Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_INST_PTEG_FROM_RMEM",
+	.code = 0x3e052,
+	.short_desc = "Instruction PTEG loaded from remote memory",
+	.long_desc = "Instruction PTEG loaded from remote memory",
+},
+{
+	.name = "PM_FXU0_FIN",
+	.code = 0x10004,
+	.short_desc = "FXU0 Finished",
+	.long_desc = "The Fixed Point unit 0 finished an instruction and produced a result.  Instructions that finish may not necessary complete.",
+},
+{
+	.name = "PM_LSU1_L1_SW_PREF",
+	.code = 0xc09e,
+	.short_desc = "LSU1 Software L1 Prefetches, including SW Transient Prefetches",
+	.long_desc = "LSU1 Software L1 Prefetches, including SW Transient Prefetches",
+},
+{
+	.name = "PM_PTEG_FROM_L31_MOD",
+	.code = 0x1c054,
+	.short_desc = "PTEG loaded from another L3 on same chip modified",
+	.long_desc = "PTEG loaded from another L3 on same chip modified",
+},
+{
+	.name = "PM_PMC5_OVERFLOW",
+	.code = 0x10024,
+	.short_desc = "Overflow from counter 5",
+	.long_desc = "Overflows from PMC5 are counted.  This effectively widens the PMC. The Overflow from the original PMC will not trigger an exception even if the PMU is configured to generate exceptions on overflow.",
+},
+{
+	.name = "PM_LD_REF_L1_LSU1",
+	.code = 0xc082,
+	.short_desc = "LS1 L1 D cache load references counted at finish",
+	.long_desc = "Load references to Level 1 Data Cache, by unit 1.",
+},
+{
+	.name = "PM_INST_PTEG_FROM_L21_SHR",
+	.code = 0x4e056,
+	.short_desc = "Instruction PTEG loaded from another L2 on same chip shared",
+	.long_desc = "Instruction PTEG loaded from another L2 on same chip shared",
+},
+{
+	.name = "PM_CMPLU_STALL_THRD",
+	.code = 0x1001c,
+	.short_desc = "Completion Stalled due to thread conflict.  Group ready to complete but it was another thread's turn",
+	.long_desc = "Completion Stalled due to thread conflict.  Group ready to complete but it was another thread's turn",
+},
+{
+	.name = "PM_DATA_FROM_RMEM",
+	.code = 0x3c042,
+	.short_desc = "Data loaded from remote memory",
+	.long_desc = "The processor's Data Cache was reloaded from memory attached to a different module than this processor is located on.",
+},
+{
+	.name = "PM_VSU0_SCAL_SINGLE_ISSUED",
+	.code = 0xb084,
+	.short_desc = "Single Precision scalar instruction issued on Pipe0",
+	.long_desc = "Single Precision scalar instruction issued on Pipe0",
+},
+{
+	.name = "PM_BR_MPRED_LSTACK",
+	.code = 0x40a6,
+	.short_desc = "Branch Mispredict due to Link Stack",
+	.long_desc = "Branch Mispredict due to Link Stack",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL2L3_MOD_CYC",
+	.code = 0x40028,
+	.short_desc = "Marked ld latency Data source 1001 (L2.5/L3.5 M same 4 chip node)",
+	.long_desc = "Marked ld latency Data source 1001 (L2.5/L3.5 M same 4 chip node)",
+},
+{
+	.name = "PM_LSU0_FLUSH_UST",
+	.code = 0xc0b4,
+	.short_desc = "LS0 Flush: Unaligned Store",
+	.long_desc = "A store was flushed from unit 0 because it was unaligned (crossed a 4K boundary).",
+},
+{
+	.name = "PM_LSU_NCST",
+	.code = 0xc090,
+	.short_desc = "Non-cachable Stores sent to nest",
+	.long_desc = "Non-cachable Stores sent to nest",
+},
+{
+	.name = "PM_BR_TAKEN",
+	.code = 0x20004,
+	.short_desc = "Branch Taken",
+	.long_desc = "A branch instruction was taken. This could have been a conditional branch or an unconditional branch",
+},
+{
+	.name = "PM_INST_PTEG_FROM_LMEM",
+	.code = 0x4e052,
+	.short_desc = "Instruction PTEG loaded from local memory",
+	.long_desc = "Instruction PTEG loaded from local memory",
+},
+{
+	.name = "PM_GCT_NOSLOT_BR_MPRED_IC_MISS",
+	.code = 0x4001c,
+	.short_desc = "GCT empty by branch  mispredict + IC miss",
+	.long_desc = "No slot in GCT caused by branch mispredict or I cache miss",
+},
+{
+	.name = "PM_DTLB_MISS_4K",
+	.code = 0x2c05a,
+	.short_desc = "Data TLB miss for 4K page",
+	.long_desc = "Data TLB references to 4KB pages that missed the TLB. Page size is determined at TLB reload time.",
+},
+{
+	.name = "PM_PMC4_SAVED",
+	.code = 0x30022,
+	.short_desc = "PMC4 Rewind Value saved (matched condition)",
+	.long_desc = "PMC4 was counting speculatively. The speculative condition was met and the counter value was committed by copying it to the backup register.",
+},
+{
+	.name = "PM_VSU1_PERMUTE_ISSUED",
+	.code = 0xb092,
+	.short_desc = "Permute VMX Instruction Issued",
+	.long_desc = "Permute VMX Instruction Issued",
+},
+{
+	.name = "PM_SLB_MISS",
+	.code = 0xd890,
+	.short_desc = "Data + Instruction SLB Miss - Total of all segment sizes",
+	.long_desc = "Total of all Segment Lookaside Buffer (SLB) misses, Instructions + Data.",
+},
+{
+	.name = "PM_LSU1_FLUSH_LRQ",
+	.code = 0xc0ba,
+	.short_desc = "LS1 Flush: LRQ",
+	.long_desc = "Load Hit Load or Store Hit Load flush.  A younger load was flushed from unit 1 because it executed before an older store and they had overlapping data OR two loads executed out of order and they have byte overlap and there was a snoop in between to an overlapped byte.",
+},
+{
+	.name = "PM_DTLB_MISS",
+	.code = 0x300fc,
+	.short_desc = "TLB reload valid",
+	.long_desc = "Data TLB misses, all page sizes.",
+},
+{
+	.name = "PM_VSU1_FRSP",
+	.code = 0xa0b6,
+	.short_desc = "Round to single precision instruction executed",
+	.long_desc = "Round to single precision instruction executed",
+},
+{
+	.name = "PM_VSU_VECTOR_DOUBLE_ISSUED",
+	.code = 0xb880,
+	.short_desc = "Double Precision vector instruction issued on Pipe0",
+	.long_desc = "Double Precision vector instruction issued on Pipe0",
+},
+{
+	.name = "PM_L2_CASTOUT_SHR",
+	.code = 0x16182,
+	.short_desc = "L2 Castouts - Shared (T, Te, Si, S)",
+	.long_desc = "An L2 line in the Shared state was castout. Total for all slices.",
+},
+{
+	.name = "PM_DATA_FROM_DL2L3_SHR",
+	.code = 0x3c044,
+	.short_desc = "Data loaded from distant L2 or L3 shared",
+	.long_desc = "The processor's Data Cache was reloaded with shared (T or SL) data from an L2 or L3 on a distant module due to a demand load",
+},
+{
+	.name = "PM_VSU1_STF",
+	.code = 0xb08e,
+	.short_desc = "FPU store (SP or DP) issued on Pipe1",
+	.long_desc = "FPU store (SP or DP) issued on Pipe1",
+},
+{
+	.name = "PM_ST_FIN",
+	.code = 0x200f0,
+	.short_desc = "Store Instructions Finished",
+	.long_desc = "Store requests sent to the nest.",
+},
+{
+	.name = "PM_PTEG_FROM_L21_SHR",
+	.code = 0x4c056,
+	.short_desc = "PTEG loaded from another L2 on same chip shared",
+	.long_desc = "PTEG loaded from another L2 on same chip shared",
+},
+{
+	.name = "PM_L2_LOC_GUESS_WRONG",
+	.code = 0x26480,
+	.short_desc = "L2 guess loc and guess was not correct (ie data remote)",
+	.long_desc = "L2 guess loc and guess was not correct (ie data remote)",
+},
+{
+	.name = "PM_MRK_STCX_FAIL",
+	.code = 0xd08e,
+	.short_desc = "Marked STCX failed",
+	.long_desc = "A marked stcx (stwcx or stdcx) failed",
+},
+{
+	.name = "PM_LSU0_REJECT_LHS",
+	.code = 0xc0ac,
+	.short_desc = "LS0 Reject: Load Hit Store",
+	.long_desc = "Load Store Unit 0 rejected a load instruction that had an address overlap with an older store in the store queue. The store must be committed and de-allocated from the Store Queue before the load can execute successfully.",
+},
+{
+	.name = "PM_IC_PREF_CANCEL_HIT",
+	.code = 0x4092,
+	.short_desc = "Prefetch Canceled due to icache hit",
+	.long_desc = "Prefetch Canceled due to icache hit",
+},
+{
+	.name = "PM_L3_PREF_BUSY",
+	.code = 0x4f080,
+	.short_desc = "Prefetch machines >= threshold (8,16,20,24)",
+	.long_desc = "Prefetch machines >= threshold (8,16,20,24)",
+},
+{
+	.name = "PM_MRK_BRU_FIN",
+	.code = 0x2003a,
+	.short_desc = "bru marked instr finish",
+	.long_desc = "The branch unit finished a marked instruction. Instructions that finish may not necessarily complete.",
+},
+{
+	.name = "PM_LSU1_NCLD",
+	.code = 0xc08e,
+	.short_desc = "LS1 Non-cachable Loads counted at finish",
+	.long_desc = "A non-cacheable load was executed by unit 1.",
+},
+{
+	.name = "PM_INST_PTEG_FROM_L31_MOD",
+	.code = 0x1e054,
+	.short_desc = "Instruction PTEG loaded from another L3 on same chip modified",
+	.long_desc = "Instruction PTEG loaded from another L3 on same chip modified",
+},
+{
+	.name = "PM_LSU_NCLD",
+	.code = 0xc88c,
+	.short_desc = "Non-cachable Loads counted at finish",
+	.long_desc = "A non-cacheable load was executed. Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_LSU_LDX",
+	.code = 0xc888,
+	.short_desc = "All Vector loads (vsx vector + vmx vector)",
+	.long_desc = "All Vector loads (vsx vector + vmx vector)",
+},
+{
+	.name = "PM_L2_LOC_GUESS_CORRECT",
+	.code = 0x16480,
+	.short_desc = "L2 guess loc and guess was correct (ie data local)",
+	.long_desc = "L2 guess loc and guess was correct (ie data local)",
+},
+{
+	.name = "PM_THRESH_TIMEO",
+	.code = 0x10038,
+	.short_desc = "Threshold  timeout  event",
+	.long_desc = "The threshold timer expired",
+},
+{
+	.name = "PM_L3_PREF_ST",
+	.code = 0xd0ae,
+	.short_desc = "L3 cache ST prefetches",
+	.long_desc = "L3 cache ST prefetches",
+},
+{
+	.name = "PM_DISP_CLB_HELD_SYNC",
+	.code = 0x2098,
+	.short_desc = "Dispatch/CLB Hold: Sync type instruction",
+	.long_desc = "Dispatch/CLB Hold: Sync type instruction",
+},
+{
+	.name = "PM_VSU_SIMPLE_ISSUED",
+	.code = 0xb894,
+	.short_desc = "Simple VMX instruction issued",
+	.long_desc = "Simple VMX instruction issued",
+},
+{
+	.name = "PM_VSU1_SINGLE",
+	.code = 0xa0aa,
+	.short_desc = "FPU single precision",
+	.long_desc = "VSU1 executed single precision instruction",
+},
+{
+	.name = "PM_DATA_TABLEWALK_CYC",
+	.code = 0x3001a,
+	.short_desc = "Data Tablewalk Active",
+	.long_desc = "Cycles a translation tablewalk is active.  While a tablewalk is active any request attempting to access the TLB will be rejected and retried.",
+},
+{
+	.name = "PM_L2_RC_ST_DONE",
+	.code = 0x36380,
+	.short_desc = "RC did st to line that was Tx or Sx",
+	.long_desc = "RC did st to line that was Tx or Sx",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_L21_MOD",
+	.code = 0x3d056,
+	.short_desc = "Marked PTEG loaded from another L2 on same chip modified",
+	.long_desc = "Marked PTEG loaded from another L2 on same chip modified",
+},
+{
+	.name = "PM_LARX_LSU1",
+	.code = 0xc096,
+	.short_desc = "ls1 Larx Finished",
+	.long_desc = "A larx (lwarx or ldarx) was executed on side 1 ",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RMEM",
+	.code = 0x3d042,
+	.short_desc = "Marked data loaded from remote memory",
+	.long_desc = "The processor's Data Cache was reloaded due to a marked load from memory attached to a different module than this processor is located on.",
+},
+{
+	.name = "PM_DISP_CLB_HELD",
+	.code = 0x2090,
+	.short_desc = "CLB Hold: Any Reason",
+	.long_desc = "CLB Hold: Any Reason",
+},
+{
+	.name = "PM_DERAT_MISS_4K",
+	.code = 0x1c05c,
+	.short_desc = "DERAT misses for 4K page",
+	.long_desc = "A data request (load or store) missed the ERAT for 4K page and resulted in an ERAT reload.",
+},
+{
+	.name = "PM_L2_RCLD_DISP_FAIL_ADDR",
+	.code = 0x16282,
+	.short_desc = " L2  RC load dispatch attempt failed due to address collision with RC/CO/SN/SQ",
+	.long_desc = " L2  RC load dispatch attempt failed due to address collision with RC/CO/SN/SQ",
+},
+{
+	.name = "PM_SEG_EXCEPTION",
+	.code = 0x28a4,
+	.short_desc = "ISEG + DSEG Exception",
+	.long_desc = "ISEG + DSEG Exception",
+},
+{
+	.name = "PM_FLUSH_DISP_SB",
+	.code = 0x208c,
+	.short_desc = "Dispatch Flush: Scoreboard",
+	.long_desc = "Dispatch Flush: Scoreboard",
+},
+{
+	.name = "PM_L2_DC_INV",
+	.code = 0x26182,
+	.short_desc = "Dcache invalidates from L2 ",
+	.long_desc = "The L2 invalidated a line in processor's data cache.  This is caused by the L2 line being cast out or invalidated. Total for all slices",
+},
+{
+	.name = "PM_PTEG_FROM_DL2L3_MOD",
+	.code = 0x4c054,
+	.short_desc = "PTEG loaded from distant L2 or L3 modified",
+	.long_desc = "A Page Table Entry was loaded into the ERAT with modified (M) data from an L2  or L3 on a distant module due to a demand load or store.",
+},
+{
+	.name = "PM_DSEG",
+	.code = 0x20a6,
+	.short_desc = "DSEG Exception",
+	.long_desc = "DSEG Exception",
+},
+{
+	.name = "PM_BR_PRED_LSTACK",
+	.code = 0x40a2,
+	.short_desc = "Link Stack Predictions",
+	.long_desc = "The target address of a Branch to Link instruction was predicted by the link stack.",
+},
+{
+	.name = "PM_VSU0_STF",
+	.code = 0xb08c,
+	.short_desc = "FPU store (SP or DP) issued on Pipe0",
+	.long_desc = "FPU store (SP or DP) issued on Pipe0",
+},
+{
+	.name = "PM_LSU_FX_FIN",
+	.code = 0x10066,
+	.short_desc = "LSU Finished a FX operation  (up to 2 per cycle)",
+	.long_desc = "LSU Finished a FX operation  (up to 2 per cycle)",
+},
+{
+	.name = "PM_DERAT_MISS_16M",
+	.code = 0x3c05c,
+	.short_desc = "DERAT misses for 16M page",
+	.long_desc = "A data request (load or store) missed the ERAT for 16M page and resulted in an ERAT reload.",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_DL2L3_MOD",
+	.code = 0x4d054,
+	.short_desc = "Marked PTEG loaded from distant L2 or L3 modified",
+	.long_desc = "A Page Table Entry was loaded into the ERAT with modified (M) data from an L2  or L3 on a distant module due to a marked load or store.",
+},
+{
+	.name = "PM_GCT_UTIL_11_PLUS_SLOTS",
+	.code = 0x20a2,
+	.short_desc = "GCT Utilization 11+ entries",
+	.long_desc = "GCT Utilization 11+ entries",
+},
+{
+	.name = "PM_INST_FROM_L3",
+	.code = 0x14048,
+	.short_desc = "Instruction fetched from L3",
+	.long_desc = "An instruction fetch group was fetched from L3. Fetch Groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_MRK_IFU_FIN",
+	.code = 0x3003a,
+	.short_desc = "IFU non-branch marked instruction finished",
+	.long_desc = "The Instruction Fetch Unit finished a marked instruction.",
+},
+{
+	.name = "PM_ITLB_MISS",
+	.code = 0x400fc,
+	.short_desc = "ITLB Reloaded (always zero on POWER6)",
+	.long_desc = "A TLB miss for an Instruction Fetch has occurred",
+},
+{
+	.name = "PM_VSU_STF",
+	.code = 0xb88c,
+	.short_desc = "FPU store (SP or DP) issued on Pipe0",
+	.long_desc = "FPU store (SP or DP) issued on Pipe0",
+},
+{
+	.name = "PM_LSU_FLUSH_UST",
+	.code = 0xc8b4,
+	.short_desc = "Flush: Unaligned Store",
+	.long_desc = "A store was flushed because it was unaligned (crossed a 4K boundary).  Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_L2_LDST_MISS",
+	.code = 0x26880,
+	.short_desc = "Data Load+Store Miss",
+	.long_desc = "Data Load+Store Miss",
+},
+{
+	.name = "PM_FXU1_FIN",
+	.code = 0x40004,
+	.short_desc = "FXU1 Finished",
+	.long_desc = "The Fixed Point unit 1 finished an instruction and produced a result. Instructions that finish may not necessarily complete.",
+},
+{
+	.name = "PM_SHL_DEALLOCATED",
+	.code = 0x5080,
+	.short_desc = "SHL Table entry deallocated",
+	.long_desc = "SHL Table entry deallocated",
+},
+{
+	.name = "PM_L2_SN_M_WR_DONE",
+	.code = 0x46382,
+	.short_desc = "SNP dispatched for a write and was M",
+	.long_desc = "SNP dispatched for a write and was M",
+},
+{
+	.name = "PM_LSU_REJECT_SET_MPRED",
+	.code = 0xc8a8,
+	.short_desc = "Reject: Set Predict Wrong",
+	.long_desc = "The Load Store Unit rejected an instruction because the cache set was improperly predicted. This is a fast reject and will be immediately redispatched. Combined Unit 0 + 1",
+},
+{
+	.name = "PM_L3_PREF_LD",
+	.code = 0xd0ac,
+	.short_desc = "L3 cache LD prefetches",
+	.long_desc = "L3 cache LD prefetches",
+},
+{
+	.name = "PM_L2_SN_M_RD_DONE",
+	.code = 0x46380,
+	.short_desc = "SNP dispatched for a read and was M",
+	.long_desc = "SNP dispatched for a read and was M",
+},
+{
+	.name = "PM_MRK_DERAT_MISS_16G",
+	.code = 0x4d05c,
+	.short_desc = "Marked DERAT misses for 16G page",
+	.long_desc = "A marked data request (load or store) missed the ERAT for 16G page and resulted in an ERAT reload.",
+},
+{
+	.name = "PM_VSU_FCONV",
+	.code = 0xa8b0,
+	.short_desc = "Convert instruction executed",
+	.long_desc = "Convert instruction executed",
+},
+{
+	.name = "PM_ANY_THRD_RUN_CYC",
+	.code = 0x100fa,
+	.short_desc = "One of threads in run_cycles ",
+	.long_desc = "One of threads in run_cycles ",
+},
+{
+	.name = "PM_LSU_LMQ_FULL_CYC",
+	.code = 0xd0a4,
+	.short_desc = "LMQ full",
+	.long_desc = "The Load Miss Queue was full.",
+},
+{
+	.name = "PM_MRK_LSU_REJECT_LHS",
+	.code = 0xd082,
+	.short_desc = " Reject(marked): Load Hit Store",
+	.long_desc = "The Load Store Unit rejected a marked load instruction that had an address overlap with an older store in the store queue. The store must be committed and de-allocated from the Store Queue before the load can execute successfully",
+},
+{
+	.name = "PM_MRK_LD_MISS_L1_CYC",
+	.code = 0x4003e,
+	.short_desc = "L1 data load miss cycles",
+	.long_desc = "L1 data load miss cycles",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_CYC",
+	.code = 0x20020,
+	.short_desc = "Marked ld latency Data source 0000 (L2 hit)",
+	.long_desc = "Cycles a marked load waited for data from this level of the storage system.  Counting begins when a marked load misses the data cache and ends when the data is reloaded into the data cache.  To calculate average latency divide this count by the number of marked misses to the same level.",
+},
+{
+	.name = "PM_INST_IMC_MATCH_DISP",
+	.code = 0x30016,
+	.short_desc = "IMC Matches dispatched",
+	.long_desc = "IMC Matches dispatched",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RMEM_CYC",
+	.code = 0x4002c,
+	.short_desc = "Marked ld latency Data source 1101  (Memory same 4 chip node)",
+	.long_desc = "Cycles a marked load waited for data from this level of the storage system.  Counting begins when a marked load misses the data cache and ends when the data is reloaded into the data cache.  To calculate average latency divide this count by the number of marked misses to the same level.",
+},
+{
+	.name = "PM_VSU0_SIMPLE_ISSUED",
+	.code = 0xb094,
+	.short_desc = "Simple VMX instruction issued",
+	.long_desc = "Simple VMX instruction issued",
+},
+{
+	.name = "PM_CMPLU_STALL_DIV",
+	.code = 0x40014,
+	.short_desc = "Completion stall caused by DIV instruction",
+	.long_desc = "Following a completion stall (any period when no groups completed) the last instruction to finish before completion resumes was a fixed point divide instruction. This is a subset of PM_CMPLU_STALL_FXU.",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_RL2L3_SHR",
+	.code = 0x2d054,
+	.short_desc = "Marked PTEG loaded from remote L2 or L3 shared",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from memory attached to a different module than this processor is located on due to a marked load or store.",
+},
+{
+	.name = "PM_VSU_FMA_DOUBLE",
+	.code = 0xa890,
+	.short_desc = "DP vector version of fmadd,fnmadd,fmsub,fnmsub",
+	.long_desc = "DP vector version of fmadd,fnmadd,fmsub,fnmsub",
+},
+{
+	.name = "PM_VSU_4FLOP",
+	.code = 0xa89c,
+	.short_desc = "four flops operation (scalar fdiv, fsqrt; DP vector version of fmadd, fnmadd, fmsub, fnmsub; SP vector versions of single flop instructions)",
+	.long_desc = "four flops operation (scalar fdiv, fsqrt; DP vector version of fmadd, fnmadd, fmsub, fnmsub; SP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_VSU1_FIN",
+	.code = 0xa0be,
+	.short_desc = "VSU1 Finished an instruction",
+	.long_desc = "VSU1 Finished an instruction",
+},
+{
+	.name = "PM_NEST_PAIR1_AND",
+	.code = 0x20883,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair1 AND",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair1 AND",
+},
+{
+	.name = "PM_INST_PTEG_FROM_RL2L3_MOD",
+	.code = 0x1e052,
+	.short_desc = "Instruction PTEG loaded from remote L2 or L3 modified",
+	.long_desc = "Instruction PTEG loaded from remote L2 or L3 modified",
+},
+{
+	.name = "PM_RUN_CYC",
+	.code = 0x200f4,
+	.short_desc = "Run_cycles",
+	.long_desc = "Processor Cycles gated by the run latch.  Operating systems use the run latch to indicate when they are doing useful work.  The run latch is typically cleared in the OS idle loop.  Gating by the run latch filters out the idle loop.",
+},
+{
+	.name = "PM_PTEG_FROM_RMEM",
+	.code = 0x3c052,
+	.short_desc = "PTEG loaded from remote memory",
+	.long_desc = "A Page Table Entry was loaded into the TLB from memory attached to a different module than this processor is located on.",
+},
+{
+	.name = "PM_LSU_LRQ_S0_VALID",
+	.code = 0xd09e,
+	.short_desc = "Slot 0 of LRQ valid",
+	.long_desc = "This signal is asserted every cycle that the Load Request Queue slot zero is valid. The SRQ is 32 entries long and is allocated round-robin.  In SMT mode the LRQ is split between the two threads (16 entries each).",
+},
+{
+	.name = "PM_LSU0_LDF",
+	.code = 0xc084,
+	.short_desc = "LS0 Scalar  Loads",
+	.long_desc = "A floating point load was executed by LSU0",
+},
+{
+	.name = "PM_FLUSH_COMPLETION",
+	.code = 0x30012,
+	.short_desc = "Completion Flush",
+	.long_desc = "Completion Flush",
+},
+{
+	.name = "PM_ST_MISS_L1",
+	.code = 0x300f0,
+	.short_desc = "L1 D cache store misses",
+	.long_desc = "A store missed the dcache.  Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_L2_NODE_PUMP",
+	.code = 0x36480,
+	.short_desc = "RC req that was a local (aka node) pump attempt",
+	.long_desc = "RC req that was a local (aka node) pump attempt",
+},
+{
+	.name = "PM_INST_FROM_DL2L3_SHR",
+	.code = 0x34044,
+	.short_desc = "Instruction fetched from distant L2 or L3 shared",
+	.long_desc = "An instruction fetch group was fetched with shared  (S) data from the L2 or L3 on a distant module. Fetch groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_MRK_STALL_CMPLU_CYC",
+	.code = 0x3003e,
+	.short_desc = "Marked Group Completion Stall cycles ",
+	.long_desc = "Marked Group Completion Stall cycles ",
+},
+{
+	.name = "PM_VSU1_DENORM",
+	.code = 0xa0ae,
+	.short_desc = "FPU denorm operand",
+	.long_desc = "VSU1 received denormalized data",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_SHR_CYC",
+	.code = 0x20026,
+	.short_desc = "Marked ld latency Data source 0110 (L3.1 S) ",
+	.long_desc = "Marked load latency Data source 0110 (L3.1 S) ",
+},
+{
+	.name = "PM_NEST_PAIR0_ADD",
+	.code = 0x10881,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair0 ADD",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair0 ADD",
+},
+{
+	.name = "PM_INST_FROM_L3MISS",
+	.code = 0x24048,
+	.short_desc = "Instruction fetched missed L3",
+	.long_desc = "An instruction fetch group was fetched from beyond L3. Fetch groups can contain up to 8 instructions.",
+},
+{
+	.name = "PM_EE_OFF_EXT_INT",
+	.code = 0x2080,
+	.short_desc = "ee off and external interrupt",
+	.long_desc = "Cycles when an interrupt due to an external exception is pending but external exceptions were masked.",
+},
+{
+	.name = "PM_INST_PTEG_FROM_DMEM",
+	.code = 0x2e052,
+	.short_desc = "Instruction PTEG loaded from distant memory",
+	.long_desc = "Instruction PTEG loaded from distant memory",
+},
+{
+	.name = "PM_INST_FROM_DL2L3_MOD",
+	.code = 0x3404c,
+	.short_desc = "Instruction fetched from distant L2 or L3 modified",
+	.long_desc = "An instruction fetch group was fetched with modified  (M) data from an L2 or L3 on a distant module. Fetch groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_PMC6_OVERFLOW",
+	.code = 0x30024,
+	.short_desc = "Overflow from counter 6",
+	.long_desc = "Overflows from PMC6 are counted.  This effectively widens the PMC. The Overflow from the original PMC will not trigger an exception even if the PMU is configured to generate exceptions on overflow.",
+},
+{
+	.name = "PM_VSU_2FLOP_DOUBLE",
+	.code = 0xa88c,
+	.short_desc = "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg",
+	.long_desc = "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg",
+},
+{
+	.name = "PM_TLB_MISS",
+	.code = 0x20066,
+	.short_desc = "TLB Miss (I + D)",
+	.long_desc = "Total of Data TLB misses + Instruction TLB misses",
+},
+{
+	.name = "PM_FXU_BUSY",
+	.code = 0x2000e,
+	.short_desc = "fxu0 busy and fxu1 busy.",
+	.long_desc = "Cycles when both FXU0 and FXU1 are busy.",
+},
+{
+	.name = "PM_L2_RCLD_DISP_FAIL_OTHER",
+	.code = 0x26280,
+	.short_desc = " L2  RC load dispatch attempt failed due to other reasons",
+	.long_desc = " L2  RC load dispatch attempt failed due to other reasons",
+},
+{
+	.name = "PM_LSU_REJECT_LMQ_FULL",
+	.code = 0xc8a4,
+	.short_desc = "Reject: LMQ Full (LHR)",
+	.long_desc = "Total cycles the Load Store Unit is busy rejecting instructions because the Load Miss Queue was full. The LMQ has eight entries.  If all the eight entries are full, subsequent load instructions are rejected. Combined unit 0 + 1.",
+},
+{
+	.name = "PM_IC_RELOAD_SHR",
+	.code = 0x4096,
+	.short_desc = "Reloading line to be shared between the threads",
+	.long_desc = "An Instruction Cache request was made by this thread and the cache line was already in the cache for the other thread. The line is marked valid for all threads.",
+},
+{
+	.name = "PM_GRP_MRK",
+	.code = 0x10031,
+	.short_desc = "IDU Marked Instruction",
+	.long_desc = "A group was sampled (marked).  The group is called a marked group.  One instruction within the group is tagged for detailed monitoring.  The sampled instruction is called a marked instruction.  Events associated with the marked instruction are annotated with the marked term.",
+},
+{
+	.name = "PM_MRK_ST_NEST",
+	.code = 0x20034,
+	.short_desc = "marked store sent to Nest",
+	.long_desc = "A sampled store has been sent to the memory subsystem",
+},
+{
+	.name = "PM_VSU1_FSQRT_FDIV",
+	.code = 0xa08a,
+	.short_desc = "four flops operation (fdiv,fsqrt,xsdiv,xssqrt) Scalar Instructions only!",
+	.long_desc = "four flops operation (fdiv,fsqrt,xsdiv,xssqrt) Scalar Instructions only!",
+},
+{
+	.name = "PM_LSU0_FLUSH_LRQ",
+	.code = 0xc0b8,
+	.short_desc = "LS0 Flush: LRQ",
+	.long_desc = "Load Hit Load or Store Hit Load flush.  A younger load was flushed from unit 0 because it executed before an older store and they had overlapping data OR two loads executed out of order and they have byte overlap and there was a snoop in between to an overlapped byte.",
+},
+{
+	.name = "PM_LARX_LSU0",
+	.code = 0xc094,
+	.short_desc = "ls0 Larx Finished",
+	.long_desc = "A larx (lwarx or ldarx) was executed on side 0 ",
+},
+{
+	.name = "PM_IBUF_FULL_CYC",
+	.code = 0x4084,
+	.short_desc = "Cycles No room in ibuff",
+	.long_desc = "Cycles when the Instruction Buffer was full.  The Instruction Buffer is a circular queue of 64 instructions per thread, organized as 16 groups of 4 instructions.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL2L3_SHR_CYC",
+	.code = 0x2002a,
+	.short_desc = "Marked ld latency Data Source 1010 (Distant L2.75/L3.75 S)",
+	.long_desc = "Marked ld latency Data Source 1010 (Distant L2.75/L3.75 S)",
+},
+{
+	.name = "PM_LSU_DC_PREF_STREAM_ALLOC",
+	.code = 0xd8a8,
+	.short_desc = "D cache new prefetch stream allocated",
+	.long_desc = "D cache new prefetch stream allocated",
+},
+{
+	.name = "PM_GRP_MRK_CYC",
+	.code = 0x10030,
+	.short_desc = "cycles IDU marked instruction before dispatch",
+	.long_desc = "cycles IDU marked instruction before dispatch",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL2L3_SHR_CYC",
+	.code = 0x20028,
+	.short_desc = "Marked ld latency Data Source 1000 (Remote L2.5/L3.5 S)",
+	.long_desc = "Marked load latency Data Source 1000 (Remote L2.5/L3.5 S)",
+},
+{
+	.name = "PM_L2_GLOB_GUESS_CORRECT",
+	.code = 0x16482,
+	.short_desc = "L2 guess glb and guess was correct (ie data remote)",
+	.long_desc = "L2 guess glb and guess was correct (ie data remote)",
+},
+{
+	.name = "PM_LSU_REJECT_LHS",
+	.code = 0xc8ac,
+	.short_desc = "Reject: Load Hit Store",
+	.long_desc = "The Load Store Unit rejected a load instruction that had an address overlap with an older store in the store queue. The store must be committed and de-allocated from the Store Queue before the load can execute successfully. Combined Unit 0 + 1",
+},
+{
+	.name = "PM_MRK_DATA_FROM_LMEM",
+	.code = 0x3d04a,
+	.short_desc = "Marked data loaded from local memory",
+	.long_desc = "The processor's Data Cache was reloaded due to a marked load from memory attached to the same module this processor is located on.",
+},
+{
+	.name = "PM_INST_PTEG_FROM_L3",
+	.code = 0x2e050,
+	.short_desc = "Instruction PTEG loaded from L3",
+	.long_desc = "Instruction PTEG loaded from L3",
+},
+{
+	.name = "PM_FREQ_DOWN",
+	.code = 0x3000c,
+	.short_desc = "Frequency is being slewed down due to Power Management",
+	.long_desc = "Processor frequency was slowed down due to power management",
+},
+{
+	.name = "PM_PB_RETRY_NODE_PUMP",
+	.code = 0x30081,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair2 Bit0",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair2 Bit0",
+},
+{
+	.name = "PM_INST_FROM_RL2L3_SHR",
+	.code = 0x1404c,
+	.short_desc = "Instruction fetched from remote L2 or L3 shared",
+	.long_desc = "An instruction fetch group was fetched with shared  (S) data from the L2 or L3 on a remote module. Fetch groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_MRK_INST_ISSUED",
+	.code = 0x10032,
+	.short_desc = "Marked instruction issued",
+	.long_desc = "A marked instruction was issued to an execution unit.",
+},
+{
+	.name = "PM_PTEG_FROM_L3MISS",
+	.code = 0x2c058,
+	.short_desc = "PTEG loaded from L3 miss",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from beyond the L3 due to a demand load or store.",
+},
+{
+	.name = "PM_RUN_PURR",
+	.code = 0x400f4,
+	.short_desc = "Run_PURR",
+	.long_desc = "The Processor Utilization of Resources Register was incremented while the run latch was set. The PURR registers will be incremented roughly in the ratio in which the instructions are dispatched from the two threads. ",
+},
+{
+	.name = "PM_MRK_GRP_IC_MISS",
+	.code = 0x40038,
+	.short_desc = "Marked group experienced  I cache miss",
+	.long_desc = "A group containing a marked (sampled) instruction experienced an instruction cache miss.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3",
+	.code = 0x1d048,
+	.short_desc = "Marked data loaded from L3",
+	.long_desc = "The processor's Data Cache was reloaded from the local L3 due to a marked load.",
+},
+{
+	.name = "PM_CMPLU_STALL_DCACHE_MISS",
+	.code = 0x20016,
+	.short_desc = "Completion stall caused by D cache miss",
+	.long_desc = "Following a completion stall (any period when no groups completed) the last instruction to finish before completion resumes suffered a Data Cache Miss. Data Cache Miss has higher priority than any other Load/Store delay, so if an instruction encounters multiple delays only the Data Cache Miss will be reported and the entire delay period will be charged to Data Cache Miss. This is a subset of PM_CMPLU_STALL_LSU.",
+},
+{
+	.name = "PM_PTEG_FROM_RL2L3_SHR",
+	.code = 0x2c054,
+	.short_desc = "PTEG loaded from remote L2 or L3 shared",
+	.long_desc = "A Page Table Entry was loaded into the ERAT with shared (T or SL) data from an L2 or L3 on a remote module due to a demand load or store.",
+},
+{
+	.name = "PM_LSU_FLUSH_LRQ",
+	.code = 0xc8b8,
+	.short_desc = "Flush: LRQ",
+	.long_desc = "Load Hit Load or Store Hit Load flush.  A younger load was flushed because it executed before an older store and they had overlapping data OR two loads executed out of order and they have byte overlap and there was a snoop in between to an overlapped byte.  Combined Unit 0 + 1.",
+},
+{
+	.name = "PM_MRK_DERAT_MISS_64K",
+	.code = 0x2d05c,
+	.short_desc = "Marked DERAT misses for 64K page",
+	.long_desc = "A marked data request (load or store) missed the ERAT for 64K page and resulted in an ERAT reload.",
+},
+{
+	.name = "PM_INST_PTEG_FROM_DL2L3_MOD",
+	.code = 0x4e054,
+	.short_desc = "Instruction PTEG loaded from distant L2 or L3 modified",
+	.long_desc = "Instruction PTEG loaded from distant L2 or L3 modified",
+},
+{
+	.name = "PM_L2_ST_MISS",
+	.code = 0x26082,
+	.short_desc = "Data Store Miss",
+	.long_desc = "Data Store Miss",
+},
+{
+	.name = "PM_LWSYNC",
+	.code = 0xd094,
+	.short_desc = "lwsync count (easier to use than IMC)",
+	.long_desc = "lwsync count (easier to use than IMC)",
+},
+{
+	.name = "PM_LSU0_DC_PREF_STREAM_CONFIRM_STRIDE",
+	.code = 0xd0bc,
+	.short_desc = "LS0 Dcache Strided prefetch stream confirmed",
+	.long_desc = "LS0 Dcache Strided prefetch stream confirmed",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_L21_SHR",
+	.code = 0x4d056,
+	.short_desc = "Marked PTEG loaded from another L2 on same chip shared",
+	.long_desc = "Marked PTEG loaded from another L2 on same chip shared",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH_LRQ",
+	.code = 0xd088,
+	.short_desc = "Flush: (marked) LRQ",
+	.long_desc = "Load Hit Load or Store Hit Load flush.  A marked load was flushed because it executed before an older store and they had overlapping data OR two loads executed out of order and they have byte overlap and there was a snoop in between to an overlapped byte.",
+},
+{
+	.name = "PM_INST_IMC_MATCH_CMPL",
+	.code = 0x100f0,
+	.short_desc = "IMC Match Count",
+	.long_desc = "Number of instructions resulting from the marked instructions expansion that completed.",
+},
+{
+	.name = "PM_NEST_PAIR3_AND",
+	.code = 0x40883,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair3 AND",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair3 AND",
+},
+{
+	.name = "PM_PB_RETRY_SYS_PUMP",
+	.code = 0x40081,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair3 Bit0",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair3 Bit0",
+},
+{
+	.name = "PM_MRK_INST_FIN",
+	.code = 0x30030,
+	.short_desc = "marked instr finish any unit ",
+	.long_desc = "One of the execution units finished a marked instruction.  Instructions that finish may not necessarily complete",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_DL2L3_SHR",
+	.code = 0x3d054,
+	.short_desc = "Marked PTEG loaded from remote L2 or L3 shared",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from memory attached to a different module than this processor is located on due to a marked load or store.",
+},
+{
+	.name = "PM_INST_FROM_L31_MOD",
+	.code = 0x14044,
+	.short_desc = "Instruction fetched from another L3 on same chip modified",
+	.long_desc = "Instruction fetched from another L3 on same chip modified",
+},
+{
+	.name = "PM_MRK_DTLB_MISS_64K",
+	.code = 0x3d05e,
+	.short_desc = "Marked Data TLB misses for 64K page",
+	.long_desc = "Data TLB references to 64KB pages by a marked instruction that missed the TLB. Page size is determined at TLB reload time.",
+},
+{
+	.name = "PM_LSU_FIN",
+	.code = 0x30066,
+	.short_desc = "LSU Finished an instruction (up to 2 per cycle)",
+	.long_desc = "LSU Finished an instruction (up to 2 per cycle)",
+},
+{
+	.name = "PM_MRK_LSU_REJECT",
+	.code = 0x40064,
+	.short_desc = "LSU marked reject (up to 2 per cycle)",
+	.long_desc = "LSU marked reject (up to 2 per cycle)",
+},
+{
+	.name = "PM_L2_CO_FAIL_BUSY",
+	.code = 0x16382,
+	.short_desc = " L2  RC Cast Out dispatch attempt failed due to all CO machines busy",
+	.long_desc = " L2  RC Cast Out dispatch attempt failed due to all CO machines busy",
+},
+{
+	.name = "PM_MEM0_WQ_DISP",
+	.code = 0x40083,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair3 Bit1",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair3 Bit1",
+},
+{
+	.name = "PM_DATA_FROM_L31_MOD",
+	.code = 0x1c044,
+	.short_desc = "Data loaded from another L3 on same chip modified",
+	.long_desc = "Data loaded from another L3 on same chip modified",
+},
+{
+	.name = "PM_THERMAL_WARN",
+	.code = 0x10016,
+	.short_desc = "Processor in Thermal Warning",
+	.long_desc = "Processor in Thermal Warning",
+},
+{
+	.name = "PM_VSU0_4FLOP",
+	.code = 0xa09c,
+	.short_desc = "four flops operation (scalar fdiv, fsqrt; DP vector version of fmadd, fnmadd, fmsub, fnmsub; SP vector versions of single flop instructions)",
+	.long_desc = "four flops operation (scalar fdiv, fsqrt; DP vector version of fmadd, fnmadd, fmsub, fnmsub; SP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_BR_MPRED_CCACHE",
+	.code = 0x40a4,
+	.short_desc = "Branch Mispredict due to Count Cache prediction",
+	.long_desc = "A branch instruction target was incorrectly predicted by the count cache. This will result in a branch redirect flush if not overridden by a flush of an older instruction.",
+},
+{
+	.name = "PM_CMPLU_STALL_IFU",
+	.code = 0x4004c,
+	.short_desc = "Completion stall due to IFU ",
+	.long_desc = "Completion stall due to IFU ",
+},
+{
+	.name = "PM_L1_DEMAND_WRITE",
+	.code = 0x408c,
+	.short_desc = "Instruction Demand sectors written into IL1",
+	.long_desc = "Instruction Demand sectors written into IL1",
+},
+{
+	.name = "PM_FLUSH_BR_MPRED",
+	.code = 0x2084,
+	.short_desc = "Flush caused by branch mispredict",
+	.long_desc = "A flush was caused by a branch mispredict.",
+},
+{
+	.name = "PM_MRK_DTLB_MISS_16G",
+	.code = 0x1d05e,
+	.short_desc = "Marked Data TLB misses for 16G page",
+	.long_desc = "Data TLB references to 16GB pages by a marked instruction that missed the TLB. Page size is determined at TLB reload time.",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_DMEM",
+	.code = 0x2d052,
+	.short_desc = "Marked PTEG loaded from distant memory",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from memory attached to a different module than this processor is located on due to a marked load or store.",
+},
+{
+	.name = "PM_L2_RCST_DISP",
+	.code = 0x36280,
+	.short_desc = " L2  RC store dispatch attempt",
+	.long_desc = " L2  RC store dispatch attempt",
+},
+{
+	.name = "PM_CMPLU_STALL",
+	.code = 0x4000a,
+	.short_desc = "No groups completed, GCT not empty",
+	.long_desc = "No groups completed, GCT not empty",
+},
+{
+	.name = "PM_LSU_PARTIAL_CDF",
+	.code = 0xc0aa,
+	.short_desc = "A partial cacheline was returned from the L3",
+	.long_desc = "A partial cacheline was returned from the L3",
+},
+{
+	.name = "PM_DISP_CLB_HELD_SB",
+	.code = 0x20a8,
+	.short_desc = "Dispatch/CLB Hold: Scoreboard",
+	.long_desc = "Dispatch/CLB Hold: Scoreboard",
+},
+{
+	.name = "PM_VSU0_FMA_DOUBLE",
+	.code = 0xa090,
+	.short_desc = "four flop DP vector operations (xvmadddp, xvnmadddp, xvmsubdp, xvnmsubdp)",
+	.long_desc = "four flop DP vector operations (xvmadddp, xvnmadddp, xvmsubdp, xvnmsubdp)",
+},
+{
+	.name = "PM_FXU0_BUSY_FXU1_IDLE",
+	.code = 0x3000e,
+	.short_desc = "fxu0 busy and fxu1 idle",
+	.long_desc = "FXU0 is busy while FXU1 was idle",
+},
+{
+	.name = "PM_IC_DEMAND_CYC",
+	.code = 0x10018,
+	.short_desc = "Cycles when a demand ifetch was pending",
+	.long_desc = "Cycles when a demand ifetch was pending",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L21_SHR",
+	.code = 0x3d04e,
+	.short_desc = "Marked data loaded from another L2 on same chip shared",
+	.long_desc = "Marked data loaded from another L2 on same chip shared",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH_UST",
+	.code = 0xd086,
+	.short_desc = "Flush: (marked) Unaligned Store",
+	.long_desc = "A marked store was flushed because it was unaligned",
+},
+{
+	.name = "PM_INST_PTEG_FROM_L3MISS",
+	.code = 0x2e058,
+	.short_desc = "Instruction PTEG loaded from L3 miss",
+	.long_desc = "Instruction PTEG loaded from L3 miss",
+},
+{
+	.name = "PM_VSU_DENORM",
+	.code = 0xa8ac,
+	.short_desc = "Vector or Scalar denorm operand",
+	.long_desc = "Vector or Scalar denorm operand",
+},
+{
+	.name = "PM_MRK_LSU_PARTIAL_CDF",
+	.code = 0xd080,
+	.short_desc = "A partial cacheline was returned from the L3 for a marked load",
+	.long_desc = "A partial cacheline was returned from the L3 for a marked load",
+},
+{
+	.name = "PM_INST_FROM_L21_SHR",
+	.code = 0x3404e,
+	.short_desc = "Instruction fetched from another L2 on same chip shared",
+	.long_desc = "Instruction fetched from another L2 on same chip shared",
+},
+{
+	.name = "PM_IC_PREF_WRITE",
+	.code = 0x408e,
+	.short_desc = "Instruction prefetch written into IL1",
+	.long_desc = "Number of Instruction Cache entries written because of prefetch. Prefetch entries are marked least recently used and are candidates for eviction if they are not needed to satisfy a demand fetch.",
+},
+{
+	.name = "PM_BR_PRED",
+	.code = 0x409c,
+	.short_desc = "Branch Predictions made",
+	.long_desc = "A branch prediction was made. This could have been a target prediction, a condition prediction, or both",
+},
+{
+	.name = "PM_INST_FROM_DMEM",
+	.code = 0x1404a,
+	.short_desc = "Instruction fetched from distant memory",
+	.long_desc = "An instruction fetch group was fetched from memory attached to a distant module. Fetch groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_IC_PREF_CANCEL_ALL",
+	.code = 0x4890,
+	.short_desc = "Prefetch Canceled due to page boundary or icache hit",
+	.long_desc = "Prefetch Canceled due to page boundary or icache hit",
+},
+{
+	.name = "PM_LSU_DC_PREF_STREAM_CONFIRM",
+	.code = 0xd8b4,
+	.short_desc = "Dcache new prefetch stream confirmed",
+	.long_desc = "Dcache new prefetch stream confirmed",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH_SRQ",
+	.code = 0xd08a,
+	.short_desc = "Flush: (marked) SRQ",
+	.long_desc = "Load Hit Store flush.  A marked load was flushed because it hits (overlaps) an older store that is already in the SRQ or in the same group.  If the real addresses match but the effective addresses do not, an alias condition exists that prevents store forwarding.  If the load and store are in the same group the load must be flushed to separate the two instructions. ",
+},
+{
+	.name = "PM_MRK_FIN_STALL_CYC",
+	.code = 0x1003c,
+	.short_desc = "Marked instruction Finish Stall cycles (marked finish after NTC) ",
+	.long_desc = "Marked instruction Finish Stall cycles (marked finish after NTC) ",
+},
+{
+	.name = "PM_L2_RCST_DISP_FAIL_OTHER",
+	.code = 0x46280,
+	.short_desc = " L2  RC store dispatch attempt failed due to other reasons",
+	.long_desc = " L2  RC store dispatch attempt failed due to other reasons",
+},
+{
+	.name = "PM_VSU1_DD_ISSUED",
+	.code = 0xb098,
+	.short_desc = "64BIT Decimal Issued on Pipe1",
+	.long_desc = "64BIT Decimal Issued on Pipe1",
+},
+{
+	.name = "PM_PTEG_FROM_L31_SHR",
+	.code = 0x2c056,
+	.short_desc = "PTEG loaded from another L3 on same chip shared",
+	.long_desc = "PTEG loaded from another L3 on same chip shared",
+},
+{
+	.name = "PM_DATA_FROM_L21_SHR",
+	.code = 0x3c04e,
+	.short_desc = "Data loaded from another L2 on same chip shared",
+	.long_desc = "Data loaded from another L2 on same chip shared",
+},
+{
+	.name = "PM_LSU0_NCLD",
+	.code = 0xc08c,
+	.short_desc = "LS0 Non-cachable Loads counted at finish",
+	.long_desc = "A non-cacheable load was executed by unit 0.",
+},
+{
+	.name = "PM_VSU1_4FLOP",
+	.code = 0xa09e,
+	.short_desc = "four flops operation (scalar fdiv, fsqrt; DP vector version of fmadd, fnmadd, fmsub, fnmsub; SP vector versions of single flop instructions)",
+	.long_desc = "four flops operation (scalar fdiv, fsqrt; DP vector version of fmadd, fnmadd, fmsub, fnmsub; SP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_VSU1_8FLOP",
+	.code = 0xa0a2,
+	.short_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub) ",
+	.long_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub) ",
+},
+{
+	.name = "PM_VSU_8FLOP",
+	.code = 0xa8a0,
+	.short_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub) ",
+	.long_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub) ",
+},
+{
+	.name = "PM_LSU_LMQ_SRQ_EMPTY_CYC",
+	.code = 0x2003e,
+	.short_desc = "LSU empty (lmq and srq empty)",
+	.long_desc = "Cycles when both the LMQ and SRQ are empty (LSU is idle)",
+},
+{
+	.name = "PM_DTLB_MISS_64K",
+	.code = 0x3c05e,
+	.short_desc = "Data TLB miss for 64K page",
+	.long_desc = "Data TLB references to 64KB pages that missed the TLB. Page size is determined at TLB reload time.",
+},
+{
+	.name = "PM_THRD_CONC_RUN_INST",
+	.code = 0x300f4,
+	.short_desc = "Concurrent Run Instructions",
+	.long_desc = "Instructions completed by this thread when both threads had their run latches set.",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_L2",
+	.code = 0x1d050,
+	.short_desc = "Marked PTEG loaded from L2",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from the local L2 due to a marked load or store.",
+},
+{
+	.name = "PM_PB_SYS_PUMP",
+	.code = 0x20081,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair1 Bit0",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair1 Bit0",
+},
+{
+	.name = "PM_VSU_FIN",
+	.code = 0xa8bc,
+	.short_desc = "VSU0 Finished an instruction",
+	.long_desc = "VSU0 Finished an instruction",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_MOD",
+	.code = 0x1d044,
+	.short_desc = "Marked data loaded from another L3 on same chip modified",
+	.long_desc = "Marked data loaded from another L3 on same chip modified",
+},
+{
+	.name = "PM_THRD_PRIO_0_1_CYC",
+	.code = 0x40b0,
+	.short_desc = " Cycles thread running at priority level 0 or 1",
+	.long_desc = " Cycles thread running at priority level 0 or 1",
+},
+{
+	.name = "PM_DERAT_MISS_64K",
+	.code = 0x2c05c,
+	.short_desc = "DERAT misses for 64K page",
+	.long_desc = "A data request (load or store) missed the ERAT for 64K page and resulted in an ERAT reload.",
+},
+{
+	.name = "PM_PMC2_REWIND",
+	.code = 0x30020,
+	.short_desc = "PMC2 Rewind Event (did not match condition)",
+	.long_desc = "PMC2 was counting speculatively. The speculative condition was not met and the counter was restored to its previous value.",
+},
+{
+	.name = "PM_INST_FROM_L2",
+	.code = 0x14040,
+	.short_desc = "Instruction fetched from L2",
+	.long_desc = "An instruction fetch group was fetched from L2. Fetch Groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_GRP_BR_MPRED_NONSPEC",
+	.code = 0x1000a,
+	.short_desc = "Group experienced non-speculative branch redirect",
+	.long_desc = "Group experienced non-speculative branch redirect",
+},
+{
+	.name = "PM_INST_DISP",
+	.code = 0x200f2,
+	.short_desc = "# PPC Dispatched",
+	.long_desc = "Number of PowerPC instructions successfully dispatched.",
+},
+{
+	.name = "PM_MEM0_RD_CANCEL_TOTAL",
+	.code = 0x30083,
+	.short_desc = " Nest events (MC0/MC1/PB/GX), Pair2 Bit1",
+	.long_desc = " Nest events (MC0/MC1/PB/GX), Pair2 Bit1",
+},
+{
+	.name = "PM_LSU0_DC_PREF_STREAM_CONFIRM",
+	.code = 0xd0b4,
+	.short_desc = "LS0 Dcache prefetch stream confirmed",
+	.long_desc = "LS0 Dcache prefetch stream confirmed",
+},
+{
+	.name = "PM_L1_DCACHE_RELOAD_VALID",
+	.code = 0x300f6,
+	.short_desc = "L1 reload data source valid",
+	.long_desc = "The data source information is valid, the data cache has been reloaded.  Prior to POWER5+ this included data cache reloads due to prefetch activity.  With POWER5+ this now only includes reloads due to demand loads.",
+},
+{
+	.name = "PM_VSU_SCALAR_DOUBLE_ISSUED",
+	.code = 0xb888,
+	.short_desc = "Double Precision scalar instruction issued on Pipe0",
+	.long_desc = "Double Precision scalar instruction issued on Pipe0",
+},
+{
+	.name = "PM_L3_PREF_HIT",
+	.code = 0x3f080,
+	.short_desc = "L3 Prefetch Directory Hit",
+	.long_desc = "L3 Prefetch Directory Hit",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_L31_MOD",
+	.code = 0x1d054,
+	.short_desc = "Marked PTEG loaded from another L3 on same chip modified",
+	.long_desc = "Marked PTEG loaded from another L3 on same chip modified",
+},
+{
+	.name = "PM_CMPLU_STALL_STORE",
+	.code = 0x2004a,
+	.short_desc = "Completion stall due to store instruction",
+	.long_desc = "Completion stall due to store instruction",
+},
+{
+	.name = "PM_MRK_FXU_FIN",
+	.code = 0x20038,
+	.short_desc = "fxu marked  instr finish",
+	.long_desc = "One of the Fixed Point Units finished a marked instruction.  Instructions that finish may not necessarily complete.",
+},
+{
+	.name = "PM_PMC4_OVERFLOW",
+	.code = 0x10010,
+	.short_desc = "Overflow from counter 4",
+	.long_desc = "Overflows from PMC4 are counted.  This effectively widens the PMC. The Overflow from the original PMC will not trigger an exception even if the PMU is configured to generate exceptions on overflow.",
+},
+{
+	.name = "PM_MRK_PTEG_FROM_L3",
+	.code = 0x2d050,
+	.short_desc = "Marked PTEG loaded from L3",
+	.long_desc = "A Page Table Entry was loaded into the ERAT from the local L3 due to a marked load or store.",
+},
+{
+	.name = "PM_LSU0_LMQ_LHR_MERGE",
+	.code = 0xd098,
+	.short_desc = "LS0  Load Merged with another cacheline request",
+	.long_desc = "LS0  Load Merged with another cacheline request",
+},
+{
+	.name = "PM_BTAC_HIT",
+	.code = 0x508a,
+	.short_desc = "BTAC Correct Prediction",
+	.long_desc = "BTAC Correct Prediction",
+},
+{
+	.name = "PM_L3_RD_BUSY",
+	.code = 0x4f082,
+	.short_desc = "Rd machines busy >= threshold (2,4,6,8)",
+	.long_desc = "Rd machines busy >= threshold (2,4,6,8)",
+},
+{
+	.name = "PM_LSU0_L1_SW_PREF",
+	.code = 0xc09c,
+	.short_desc = "LSU0 Software L1 Prefetches, including SW Transient Prefetches",
+	.long_desc = "LSU0 Software L1 Prefetches, including SW Transient Prefetches",
+},
+{
+	.name = "PM_INST_FROM_L2MISS",
+	.code = 0x44048,
+	.short_desc = "Instruction fetched missed L2",
+	.long_desc = "An instruction fetch group was fetched from beyond the local L2.",
+},
+{
+	.name = "PM_LSU0_DC_PREF_STREAM_ALLOC",
+	.code = 0xd0a8,
+	.short_desc = "LS0 D cache new prefetch stream allocated",
+	.long_desc = "LS0 D cache new prefetch stream allocated",
+},
+{
+	.name = "PM_L2_ST",
+	.code = 0x16082,
+	.short_desc = "Data Store Count",
+	.long_desc = "Data Store Count",
+},
+{
+	.name = "PM_VSU0_DENORM",
+	.code = 0xa0ac,
+	.short_desc = "FPU denorm operand",
+	.long_desc = "VSU0 received denormalized data",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL2L3_SHR",
+	.code = 0x3d044,
+	.short_desc = "Marked data loaded from distant L2 or L3 shared",
+	.long_desc = "The processor's Data Cache was reloaded with shared (T or SL) data from an L2 or L3 on a distant module due to a marked load.",
+},
+{
+	.name = "PM_BR_PRED_CR_TA",
+	.code = 0x48aa,
+	.short_desc = "Branch predict - taken/not taken and target",
+	.long_desc = "Both the condition (taken or not taken) and the target address of a branch instruction was predicted.",
+},
+{
+	.name = "PM_VSU0_FCONV",
+	.code = 0xa0b0,
+	.short_desc = "Convert instruction executed",
+	.long_desc = "Convert instruction executed",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH_ULD",
+	.code = 0xd084,
+	.short_desc = "Flush: (marked) Unaligned Load",
+	.long_desc = "A marked load was flushed because it was unaligned (crossed a 64-byte boundary, or 32-byte if it missed the L1)",
+},
+{
+	.name = "PM_BTAC_MISS",
+	.code = 0x5088,
+	.short_desc = "BTAC Mispredicted",
+	.long_desc = "BTAC Mispredicted",
+},
+{
+	.name = "PM_MRK_LD_MISS_EXPOSED_CYC_COUNT",
+	.code = 0x1003f,
+	.short_desc = "Marked Load exposed Miss (use edge detect to count #)",
+	.long_desc = "Marked Load exposed Miss (use edge detect to count #)",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2",
+	.code = 0x1d040,
+	.short_desc = "Marked data loaded from L2",
+	.long_desc = "The processor's Data Cache was reloaded from the local L2 due to a marked load.",
+},
+{
+	.name = "PM_LSU_DCACHE_RELOAD_VALID",
+	.code = 0xd0a2,
+	.short_desc = "count per sector of lines reloaded in L1 (demand + prefetch) ",
+	.long_desc = "count per sector of lines reloaded in L1 (demand + prefetch) ",
+},
+{
+	.name = "PM_VSU_FMA",
+	.code = 0xa884,
+	.short_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!",
+	.long_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!",
+},
+{
+	.name = "PM_LSU0_FLUSH_SRQ",
+	.code = 0xc0bc,
+	.short_desc = "LS0 Flush: SRQ",
+	.long_desc = "Load Hit Store flush.  A younger load was flushed from unit 0 because it hits (overlaps) an older store that is already in the SRQ or in the same group.  If the real addresses match but the effective addresses do not, an alias condition exists that prevents store forwarding.  If the load and store are in the same group the load must be flushed to separate the two instructions. ",
+},
+{
+	.name = "PM_LSU1_L1_PREF",
+	.code = 0xd0ba,
+	.short_desc = " LS1 L1 cache data prefetches",
+	.long_desc = " LS1 L1 cache data prefetches",
+},
+{
+	.name = "PM_IOPS_CMPL",
+	.code = 0x10014,
+	.short_desc = "Internal Operations completed",
+	.long_desc = "Number of internal operations that completed.",
+},
+{
+	.name = "PM_L2_SYS_PUMP",
+	.code = 0x36482,
+	.short_desc = "RC req that was a global (aka system) pump attempt",
+	.long_desc = "RC req that was a global (aka system) pump attempt",
+},
+{
+	.name = "PM_L2_RCLD_BUSY_RC_FULL",
+	.code = 0x46282,
+	.short_desc = " L2  activated Busy to the core for loads due to all RC full",
+	.long_desc = " L2  activated Busy to the core for loads due to all RC full",
+},
+{
+	.name = "PM_LSU_LMQ_S0_ALLOC",
+	.code = 0xd0a1,
+	.short_desc = "Slot 0 of LMQ valid",
+	.long_desc = "Slot 0 of LMQ valid",
+},
+{
+	.name = "PM_FLUSH_DISP_SYNC",
+	.code = 0x2088,
+	.short_desc = "Dispatch Flush: Sync",
+	.long_desc = "Dispatch Flush: Sync",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL2L3_MOD_CYC",
+	.code = 0x4002a,
+	.short_desc = "Marked ld latency Data source 1011  (L2.75/L3.75 M different 4 chip node)",
+	.long_desc = "Marked ld latency Data source 1011  (L2.75/L3.75 M different 4 chip node)",
+},
+{
+	.name = "PM_L2_IC_INV",
+	.code = 0x26180,
+	.short_desc = "Icache Invalidates from L2 ",
+	.long_desc = "Icache Invalidates from L2 ",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L21_MOD_CYC",
+	.code = 0x40024,
+	.short_desc = "Marked ld latency Data source 0101 (L2.1 M same chip)",
+	.long_desc = "Marked ld latency Data source 0101 (L2.1 M same chip)",
+},
+{
+	.name = "PM_L3_PREF_LDST",
+	.code = 0xd8ac,
+	.short_desc = "L3 cache prefetches LD + ST",
+	.long_desc = "L3 cache prefetches LD + ST",
+},
+{
+	.name = "PM_LSU_SRQ_EMPTY_CYC",
+	.code = 0x40008,
+	.short_desc = "ALL threads srq empty",
+	.long_desc = "The Store Request Queue is empty",
+},
+{
+	.name = "PM_LSU_LMQ_S0_VALID",
+	.code = 0xd0a0,
+	.short_desc = "Slot 0 of LMQ valid",
+	.long_desc = "This signal is asserted every cycle that the Load Request Queue slot zero is valid. The SRQ is 32 entries long and is allocated round-robin.  In SMT mode the LRQ is split between the two threads (16 entries each).",
+},
+{
+	.name = "PM_FLUSH_PARTIAL",
+	.code = 0x2086,
+	.short_desc = "Partial flush",
+	.long_desc = "Partial flush",
+},
+{
+	.name = "PM_VSU1_FMA_DOUBLE",
+	.code = 0xa092,
+	.short_desc = "four flop DP vector operations (xvmadddp, xvnmadddp, xvmsubdp, xvnmsubdp)",
+	.long_desc = "four flop DP vector operations (xvmadddp, xvnmadddp, xvmsubdp, xvnmsubdp)",
+},
+{
+	.name = "PM_1PLUS_PPC_DISP",
+	.code = 0x400f2,
+	.short_desc = "Cycles at least one Instr Dispatched",
+	.long_desc = "A group containing at least one PPC instruction was dispatched. For microcoded instructions that span multiple groups, this will only occur once.",
+},
+{
+	.name = "PM_DATA_FROM_L2MISS",
+	.code = 0x200fe,
+	.short_desc = "Demand LD - L2 Miss (not L2 hit)",
+	.long_desc = "The processor's Data Cache was reloaded but not from the local L2.",
+},
+{
+	.name = "PM_SUSPENDED",
+	.code = 0x0,
+	.short_desc = "Counter OFF",
+	.long_desc = "The counter is suspended (does not count)",
+},
+{
+	.name = "PM_VSU0_FMA",
+	.code = 0xa084,
+	.short_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub, xsmadd, xsnmadd, xsmsub, xsnmsub) Scalar instructions only!",
+	.long_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub, xsmadd, xsnmadd, xsmsub, xsnmsub) Scalar instructions only!",
+},
+{
+	.name = "PM_CMPLU_STALL_SCALAR",
+	.code = 0x40012,
+	.short_desc = "Completion stall caused by FPU instruction",
+	.long_desc = "Completion stall caused by FPU instruction",
+},
+{
+	.name = "PM_STCX_FAIL",
+	.code = 0xc09a,
+	.short_desc = "STCX failed",
+	.long_desc = "A stcx (stwcx or stdcx) failed",
+},
+{
+	.name = "PM_VSU0_FSQRT_FDIV_DOUBLE",
+	.code = 0xa094,
+	.short_desc = "eight flop DP vector operations (xvfdivdp, xvsqrtdp)",
+	.long_desc = "eight flop DP vector operations (xvfdivdp, xvsqrtdp)",
+},
+{
+	.name = "PM_DC_PREF_DST",
+	.code = 0xd0b0,
+	.short_desc = "Data Stream Touch",
+	.long_desc = "A prefetch stream was started using the DST instruction.",
+},
+{
+	.name = "PM_VSU1_SCAL_SINGLE_ISSUED",
+	.code = 0xb086,
+	.short_desc = "Single Precision scalar instruction issued on Pipe1",
+	.long_desc = "Single Precision scalar instruction issued on Pipe1",
+},
+{
+	.name = "PM_L3_HIT",
+	.code = 0x1f080,
+	.short_desc = "L3 Hits",
+	.long_desc = "L3 Hits",
+},
+{
+	.name = "PM_L2_GLOB_GUESS_WRONG",
+	.code = 0x26482,
+	.short_desc = "L2 guess glb and guess was not correct (ie data local)",
+	.long_desc = "L2 guess glb and guess was not correct (ie data local)",
+},
+{
+	.name = "PM_MRK_DFU_FIN",
+	.code = 0x20032,
+	.short_desc = "Decimal Unit marked Instruction Finish",
+	.long_desc = "The Decimal Floating Point Unit finished a marked instruction.",
+},
+{
+	.name = "PM_INST_FROM_L1",
+	.code = 0x4080,
+	.short_desc = "Instruction fetches from L1",
+	.long_desc = "An instruction fetch group was fetched from L1. Fetch Groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_BRU_FIN",
+	.code = 0x10068,
+	.short_desc = "Branch Instruction Finished ",
+	.long_desc = "The Branch execution unit finished an instruction",
+},
+{
+	.name = "PM_IC_DEMAND_REQ",
+	.code = 0x4088,
+	.short_desc = "Demand Instruction fetch request",
+	.long_desc = "Demand Instruction fetch request",
+},
+{
+	.name = "PM_VSU1_FSQRT_FDIV_DOUBLE",
+	.code = 0xa096,
+	.short_desc = "eight flop DP vector operations (xvfdivdp, xvsqrtdp)",
+	.long_desc = "eight flop DP vector operations (xvfdivdp, xvsqrtdp)",
+},
+{
+	.name = "PM_VSU1_FMA",
+	.code = 0xa086,
+	.short_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub, xsmadd, xsnmadd, xsmsub, xsnmsub) Scalar instructions only!",
+	.long_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub, xsmadd, xsnmadd, xsmsub, xsnmsub) Scalar instructions only!",
+},
+{
+	.name = "PM_MRK_LD_MISS_L1",
+	.code = 0x20036,
+	.short_desc = "Marked DL1 Demand Miss",
+	.long_desc = "Marked L1 D cache load misses",
+},
+{
+	.name = "PM_VSU0_2FLOP_DOUBLE",
+	.code = 0xa08c,
+	.short_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp)  ",
+	.long_desc = "two flop DP vector operation (xvadddp, xvmuldp, xvsubdp, xvcmpdp, xvseldp, xvabsdp, xvnabsdp, xvredp ,xvsqrtedp, vxnegdp)  ",
+},
+{
+	.name = "PM_LSU_DC_PREF_STRIDED_STREAM_CONFIRM",
+	.code = 0xd8bc,
+	.short_desc = "Dcache Strided prefetch stream confirmed (software + hardware)",
+	.long_desc = "Dcache Strided prefetch stream confirmed (software + hardware)",
+},
+{
+	.name = "PM_INST_PTEG_FROM_L31_SHR",
+	.code = 0x2e056,
+	.short_desc = "Instruction PTEG loaded from another L3 on same chip shared",
+	.long_desc = "Instruction PTEG loaded from another L3 on same chip shared",
+},
+{
+	.name = "PM_MRK_LSU_REJECT_ERAT_MISS",
+	.code = 0x30064,
+	.short_desc = "LSU marked reject due to ERAT (up to 2 per cycle)",
+	.long_desc = "LSU marked reject due to ERAT (up to 2 per cycle)",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2MISS",
+	.code = 0x4d048,
+	.short_desc = "Marked data loaded missed L2",
+	.long_desc = "DL1 was reloaded from beyond L2 due to a marked demand load.",
+},
+{
+	.name = "PM_DATA_FROM_RL2L3_SHR",
+	.code = 0x1c04c,
+	.short_desc = "Data loaded from remote L2 or L3 shared",
+	.long_desc = "The processor's Data Cache was reloaded with shared (T or SL) data from an L2 or L3 on a remote module due to a demand load",
+},
+{
+	.name = "PM_INST_FROM_PREF",
+	.code = 0x14046,
+	.short_desc = "Instruction fetched from prefetch",
+	.long_desc = "An instruction fetch group was fetched from the prefetch buffer. Fetch groups can contain up to 8 instructions",
+},
+{
+	.name = "PM_VSU1_SQ",
+	.code = 0xb09e,
+	.short_desc = "Store Vector Issued on Pipe1",
+	.long_desc = "Store Vector Issued on Pipe1",
+},
+{
+	.name = "PM_L2_LD_DISP",
+	.code = 0x36180,
+	.short_desc = "All successful load dispatches",
+	.long_desc = "All successful load dispatches",
+},
+{
+	.name = "PM_L2_DISP_ALL",
+	.code = 0x46080,
+	.short_desc = "All successful LD/ST dispatches for this thread(i+d)",
+	.long_desc = "All successful LD/ST dispatches for this thread(i+d)",
+},
+{
+	.name = "PM_THRD_GRP_CMPL_BOTH_CYC",
+	.code = 0x10012,
+	.short_desc = "Cycles group completed by both threads",
+	.long_desc = "Cycles that both threads completed.",
+},
+{
+	.name = "PM_VSU_FSQRT_FDIV_DOUBLE",
+	.code = 0xa894,
+	.short_desc = "DP vector versions of fdiv,fsqrt ",
+	.long_desc = "DP vector versions of fdiv,fsqrt ",
+},
+{
+	.name = "PM_BR_MPRED",
+	.code = 0x400f6,
+	.short_desc = "Number of Branch Mispredicts",
+	.long_desc = "A branch instruction was incorrectly predicted. This could have been a target prediction, a condition prediction, or both",
+},
+{
+	.name = "PM_INST_PTEG_FROM_DL2L3_SHR",
+	.code = 0x3e054,
+	.short_desc = "Instruction PTEG loaded from remote L2 or L3 shared",
+	.long_desc = "Instruction PTEG loaded from remote L2 or L3 shared",
+},
+{
+	.name = "PM_VSU_1FLOP",
+	.code = 0xa880,
+	.short_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished",
+	.long_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished",
+},
+{
+	.name = "PM_HV_CYC",
+	.code = 0x2000a,
+	.short_desc = "cycles in hypervisor mode ",
+	.long_desc = "Cycles when the processor is executing in Hypervisor (MSR[HV] = 1 and MSR[PR]=0)",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL2L3_SHR",
+	.code = 0x1d04c,
+	.short_desc = "Marked data loaded from remote L2 or L3 shared",
+	.long_desc = "The processor's Data Cache was reloaded with shared (T or SL) data from an L2 or L3 on a remote module due to a marked load",
+},
+{
+	.name = "PM_DTLB_MISS_16M",
+	.code = 0x4c05e,
+	.short_desc = "Data TLB miss for 16M page",
+	.long_desc = "Data TLB references to 16MB pages that missed the TLB. Page size is determined at TLB reload time.",
+},
+{
+	.name = "PM_MRK_LSU_FIN",
+	.code = 0x40032,
+	.short_desc = "Marked LSU instruction finished",
+	.long_desc = "One of the Load/Store Units finished a marked instruction. Instructions that finish may not necessary complete",
+},
+{
+	.name = "PM_LSU1_LMQ_LHR_MERGE",
+	.code = 0xd09a,
+	.short_desc = "LS1 Load Merge with another cacheline request",
+	.long_desc = "LS1 Load Merge with another cacheline request",
+},
+{
+	.name = "PM_IFU_FIN",
+	.code = 0x40066,
+	.short_desc = "IFU Finished a (non-branch) instruction",
+	.long_desc = "The Instruction Fetch Unit finished an instruction",
+},
+{
+	.name = "PM_1THRD_CON_RUN_INSTR",
+	.code = 0x30062,
+	.short_desc = "1 thread Concurrent Run Instructions",
+	.long_desc = "1 thread Concurrent Run Instructions",
+},
+{
+	.name = "PM_CMPLU_STALL_COUNT",
+	.code = 0x4000b,
+	.short_desc = "Marked LSU instruction finished",
+	.long_desc = "One of the Load/Store Units finished a marked instruction. Instructions that finish may not necessary complete",
+},
+{
+	.name = "PM_MEM0_PB_RD_CL",
+	.code = 0x30083,
+	.short_desc = "Nest events (MC0/MC1/PB/GX), Pair2 Bit1",
+	.long_desc = "Nest events (MC0/MC1/PB/GX), Pair2 Bit1",
+},
+{
+	.name = "PM_THRD_1_RUN_CYC",
+	.code = 0x10060,
+	.short_desc = "1 thread in Run Cycles",
+	.long_desc = "At least one thread has set its run latch. Operating systems use the run latch to indicate when they are doing useful work.  The run latch is typically cleared in the OS idle loop. This event does not respect FCWAIT.",
+},
+{
+	.name = "PM_THRD_2_CONC_RUN_INSTR",
+	.code = 0x40062,
+	.short_desc = "2 thread Concurrent Run Instructions",
+	.long_desc = "2 thread Concurrent Run Instructions",
+},
+{
+	.name = "PM_THRD_2_RUN_CYC",
+	.code = 0x20060,
+	.short_desc = "2 thread in Run Cycles",
+	.long_desc = "2 thread in Run Cycles",
+},
+{
+	.name = "PM_THRD_3_CONC_RUN_INST",
+	.code = 0x10062,
+	.short_desc = "3 thread in Run Cycles",
+	.long_desc = "3 thread in Run Cycles",
+},
+{
+	.name = "PM_THRD_3_RUN_CYC",
+	.code = 0x30060,
+	.short_desc = "3 thread in Run Cycles",
+	.long_desc = "3 thread in Run Cycles",
+},
+{
+	.name = "PM_THRD_4_CONC_RUN_INST",
+	.code = 0x20062,
+	.short_desc = "4 thread in Run Cycles",
+	.long_desc = "4 thread in Run Cycles",
+},
+{
+	.name = "PM_THRD_4_RUN_CYC",
+	.code = 0x40060,
+	.short_desc = "4 thread in Run Cycles",
+	.long_desc = "4 thread in Run Cycles",
+},
+/* Terminating entry required */
+{
+	.name = NULL,
+	.code = 0,
+	.short_desc = NULL,
+	.long_desc = NULL,
+}
+};
+#endif
+
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [RFC][PATCH 2/4] perf: Create a table of Power8 PMU events
  2015-05-01  7:05 [RFC][PATCH 0/4] perf: Enable symbolic event names Sukadev Bhattiprolu
  2015-05-01  7:05 ` [RFC][PATCH 1/4] perf: Create a table of Power7 PMU events Sukadev Bhattiprolu
@ 2015-05-01  7:05 ` Sukadev Bhattiprolu
  2015-05-01  7:05 ` [RFC][PATCH 3/4] perf/powerpc: Move mfspr and friends to header file Sukadev Bhattiprolu
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-01  7:05 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: linuxppc-dev, linux-kernel

This table will be used in a follow-on patch to allow specifying
Power8 events by name rather than by their raw codes.
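
For illustration only (this is not the code added later in this series),
a minimal sketch of how a table like this can be searched for a symbolic
event name. The field types below and the find_event_code() helper are
assumptions made up for this example; only the struct name and field
names match the generated tables:

	#include <string.h>

	/* Assumed field types; only the names are taken from the tables. */
	struct perf_pmu_event {
		const char *name;
		unsigned long long code;
		const char *short_desc;
		const char *long_desc;
	};

	/*
	 * Hypothetical helper: walk the NULL-terminated table (see the
	 * "Terminating entry required" sentinel in the tables) and return
	 * the raw code for a symbolic event name, or -1 if the name is
	 * not present.
	 */
	static long long find_event_code(const struct perf_pmu_event *table,
					 const char *name)
	{
		for (; table->name; table++)
			if (!strcmp(table->name, name))
				return table->code;
		return -1;
	}

	/* e.g. find_event_code(power8_pmu_events, "PM_1PLUS_PPC_CMPL")
	 * would return 0x100f2 given the table below. */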

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---
 tools/perf/arch/powerpc/util/power8-events.h | 6408 ++++++++++++++++++++++++++
 1 file changed, 6408 insertions(+)
 create mode 100644 tools/perf/arch/powerpc/util/power8-events.h

diff --git a/tools/perf/arch/powerpc/util/power8-events.h b/tools/perf/arch/powerpc/util/power8-events.h
new file mode 100644
index 0000000..34432bd
--- /dev/null
+++ b/tools/perf/arch/powerpc/util/power8-events.h
@@ -0,0 +1,6408 @@
+#ifndef __POWER8_EVENTS_H__
+#define __POWER8_EVENTS_H__
+
+/*
+* File:    power8_events.h
+* CVS:
+* Author:  Carl Love
+*          carll.ibm.com
+* Mods:    Sukadev Bhattiprolu
+*          sukadev@linux.vnet.ibm.com
+*
+* (C) Copyright IBM Corporation, 2013.  All Rights Reserved.
+*
+* Note: This code was generated based on power8-events.h in libpfm4.
+*
+* Documentation on the PMU events will be published at:
+*
+* 	http://www.power.org/documentation
+*/
+
+static const struct perf_pmu_event power8_pmu_events[] = {
+{
+	.name = "PM_1LPAR_CYC",
+	.code = 0x1f05e,
+	.short_desc = "Number of cycles in single lpar mode. All threads in the core are assigned to the same lpar",
+	.long_desc = "Number of cycles in single lpar mode.",
+},
+{
+	.name = "PM_1PLUS_PPC_CMPL",
+	.code = 0x100f2,
+	.short_desc = "1 or more ppc insts finished",
+	.long_desc = "1 or more ppc insts finished (completed).",
+},
+{
+	.name = "PM_1PLUS_PPC_DISP",
+	.code = 0x400f2,
+	.short_desc = "Cycles at least one Instr Dispatched",
+	.long_desc = "Cycles at least one Instr Dispatched. Could be a group with only microcode. Issue HW016521",
+},
+{
+	.name = "PM_2LPAR_CYC",
+	.code = 0x2006e,
+	.short_desc = "Cycles in 2-lpar mode. Threads 0-3 belong to Lpar0 and threads 4-7 belong to Lpar1",
+	.long_desc = "Number of cycles in 2 lpar mode.",
+},
+{
+	.name = "PM_4LPAR_CYC",
+	.code = 0x4e05e,
+	.short_desc = "Number of cycles in 4 LPAR mode. Threads 0-1 belong to lpar0, threads 2-3 belong to lpar1, threads 4-5 belong to lpar2, and threads 6-7 belong to lpar3",
+	.long_desc = "Number of cycles in 4 LPAR mode.",
+},
+{
+	.name = "PM_ALL_CHIP_PUMP_CPRED",
+	.code = 0x610050,
+	.short_desc = "Initial and Final Pump Scope was chip pump (prediction=correct) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types ( demand load,data,inst prefetch,inst fetch,xlate (I or d)",
+},
+{
+	.name = "PM_ALL_GRP_PUMP_CPRED",
+	.code = 0x520050,
+	.short_desc = "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+},
+{
+	.name = "PM_ALL_GRP_PUMP_MPRED",
+	.code = 0x620052,
+	.short_desc = "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro",
+},
+{
+	.name = "PM_ALL_GRP_PUMP_MPRED_RTY",
+	.code = 0x610052,
+	.short_desc = "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+},
+{
+	.name = "PM_ALL_PUMP_CPRED",
+	.code = 0x610054,
+	.short_desc = "Pump prediction correct. Counts across all types of pumps for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Pump prediction correct. Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+},
+{
+	.name = "PM_ALL_PUMP_MPRED",
+	.code = 0x640052,
+	.short_desc = "Pump misprediction. Counts across all types of pumps for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Pump Mis prediction Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+},
+{
+	.name = "PM_ALL_SYS_PUMP_CPRED",
+	.code = 0x630050,
+	.short_desc = "Initial and Final Pump Scope was system pump for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+},
+{
+	.name = "PM_ALL_SYS_PUMP_MPRED",
+	.code = 0x630052,
+	.short_desc = "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or",
+},
+{
+	.name = "PM_ALL_SYS_PUMP_MPRED_RTY",
+	.code = 0x640050,
+	.short_desc = "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+},
+{
+	.name = "PM_ANY_THRD_RUN_CYC",
+	.code = 0x100fa,
+	.short_desc = "One of threads in run_cycles",
+	.long_desc = "Any thread in run_cycles (was one thread in run_cycles).",
+},
+{
+	.name = "PM_BACK_BR_CMPL",
+	.code = 0x2505e,
+	.short_desc = "Branch instruction completed with a target address less than current instruction address",
+	.long_desc = "Branch instruction completed with a target address less than current instruction address.",
+},
+{
+	.name = "PM_BANK_CONFLICT",
+	.code = 0x4082,
+	.short_desc = "Read blocked due to interleave conflict. The ifar logic will detect an interleave conflict and kill the data that was read that cycle.",
+	.long_desc = "Read blocked due to interleave conflict. The ifar logic will detect an interleave conflict and kill the data that was read that cycle.",
+},
+{
+	.name = "PM_BRU_FIN",
+	.code = 0x10068,
+	.short_desc = "Branch Instruction Finished",
+	.long_desc = "Branch Instruction Finished .",
+},
+{
+	.name = "PM_BR_2PATH",
+	.code = 0x20036,
+	.short_desc = "two path branch",
+	.long_desc = "two path branch.",
+},
+{
+	.name = "PM_BR_BC_8",
+	.code = 0x5086,
+	.short_desc = "Pairable BC+8 branch that has not been converted to a Resolve Finished in the BRU pipeline",
+	.long_desc = "Pairable BC+8 branch that has not been converted to a Resolve Finished in the BRU pipeline",
+},
+{
+	.name = "PM_BR_BC_8_CONV",
+	.code = 0x5084,
+	.short_desc = "Pairable BC+8 branch that was converted to a Resolve Finished in the BRU pipeline.",
+	.long_desc = "Pairable BC+8 branch that was converted to a Resolve Finished in the BRU pipeline.",
+},
+{
+	.name = "PM_BR_CMPL",
+	.code = 0x40060,
+	.short_desc = "Branch Instruction completed",
+	.long_desc = "Branch Instruction completed.",
+},
+{
+	.name = "PM_BR_MPRED_CCACHE",
+	.code = 0x40ac,
+	.short_desc = "Conditional Branch Completed that was Mispredicted due to the Count Cache Target Prediction",
+	.long_desc = "Conditional Branch Completed that was Mispredicted due to the Count Cache Target Prediction",
+},
+{
+	.name = "PM_BR_MPRED_CMPL",
+	.code = 0x400f6,
+	.short_desc = "Number of Branch Mispredicts",
+	.long_desc = "Number of Branch Mispredicts.",
+},
+{
+	.name = "PM_BR_MPRED_CR",
+	.code = 0x40b8,
+	.short_desc = "Conditional Branch Completed that was Mispredicted due to the BHT Direction Prediction (taken/not taken).",
+	.long_desc = "Conditional Branch Completed that was Mispredicted due to the BHT Direction Prediction (taken/not taken).",
+},
+{
+	.name = "PM_BR_MPRED_LSTACK",
+	.code = 0x40ae,
+	.short_desc = "Conditional Branch Completed that was Mispredicted due to the Link Stack Target Prediction",
+	.long_desc = "Conditional Branch Completed that was Mispredicted due to the Link Stack Target Prediction",
+},
+{
+	.name = "PM_BR_MPRED_TA",
+	.code = 0x40ba,
+	.short_desc = "Conditional Branch Completed that was Mispredicted due to the Target Address Prediction from the Count Cache or Link Stack. Only XL-form branches that resolved Taken set this event.",
+	.long_desc = "Conditional Branch Completed that was Mispredicted due to the Target Address Prediction from the Count Cache or Link Stack. Only XL-form branches that resolved Taken set this event.",
+},
+{
+	.name = "PM_BR_MRK_2PATH",
+	.code = 0x10138,
+	.short_desc = "marked two path branch",
+	.long_desc = "marked two path branch.",
+},
+{
+	.name = "PM_BR_PRED_BR0",
+	.code = 0x409c,
+	.short_desc = "Conditional Branch Completed on BR0 (1st branch in group) in which the HW predicted the Direction or Target",
+	.long_desc = "Conditional Branch Completed on BR0 (1st branch in group) in which the HW predicted the Direction or Target",
+},
+{
+	.name = "PM_BR_PRED_BR1",
+	.code = 0x409e,
+	.short_desc = "Conditional Branch Completed on BR1 (2nd branch in group) in which the HW predicted the Direction or Target. Note: BR1 can only be used in Single Thread Mode. In all of the SMT modes, only one branch can complete, thus BR1 is unused.",
+	.long_desc = "Conditional Branch Completed on BR1 (2nd branch in group) in which the HW predicted the Direction or Target. Note: BR1 can only be used in Single Thread Mode. In all of the SMT modes, only one branch can complete, thus BR1 is unused.",
+},
+{
+	.name = "PM_BR_PRED_BR_CMPL",
+	.code = 0x489c,
+	.short_desc = "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) OR if_pc_br0_br_pred(1).",
+	.long_desc = "IFU",
+},
+{
+	.name = "PM_BR_PRED_CCACHE_BR0",
+	.code = 0x40a4,
+	.short_desc = "Conditional Branch Completed on BR0 that used the Count Cache for Target Prediction",
+	.long_desc = "Conditional Branch Completed on BR0 that used the Count Cache for Target Prediction",
+},
+{
+	.name = "PM_BR_PRED_CCACHE_BR1",
+	.code = 0x40a6,
+	.short_desc = "Conditional Branch Completed on BR1 that used the Count Cache for Target Prediction",
+	.long_desc = "Conditional Branch Completed on BR1 that used the Count Cache for Target Prediction",
+},
+{
+	.name = "PM_BR_PRED_CCACHE_CMPL",
+	.code = 0x48a4,
+	.short_desc = "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) AND if_pc_br0_pred_type.",
+	.long_desc = "IFU",
+},
+{
+	.name = "PM_BR_PRED_CR_BR0",
+	.code = 0x40b0,
+	.short_desc = "Conditional Branch Completed on BR0 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and branches",
+	.long_desc = "Conditional Branch Completed on BR0 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and bra",
+},
+{
+	.name = "PM_BR_PRED_CR_BR1",
+	.code = 0x40b2,
+	.short_desc = "Conditional Branch Completed on BR1 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and branches",
+	.long_desc = "Conditional Branch Completed on BR1 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and bra",
+},
+{
+	.name = "PM_BR_PRED_CR_CMPL",
+	.code = 0x48b0,
+	.short_desc = "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(1)='1'.",
+	.long_desc = "IFU",
+},
+{
+	.name = "PM_BR_PRED_LSTACK_BR0",
+	.code = 0x40a8,
+	.short_desc = "Conditional Branch Completed on BR0 that used the Link Stack for Target Prediction",
+	.long_desc = "Conditional Branch Completed on BR0 that used the Link Stack for Target Prediction",
+},
+{
+	.name = "PM_BR_PRED_LSTACK_BR1",
+	.code = 0x40aa,
+	.short_desc = "Conditional Branch Completed on BR1 that used the Link Stack for Target Prediction",
+	.long_desc = "Conditional Branch Completed on BR1 that used the Link Stack for Target Prediction",
+},
+{
+	.name = "PM_BR_PRED_LSTACK_CMPL",
+	.code = 0x48a8,
+	.short_desc = "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) AND (not if_pc_br0_pred_type).",
+	.long_desc = "IFU",
+},
+{
+	.name = "PM_BR_PRED_TA_BR0",
+	.code = 0x40b4,
+	.short_desc = "Conditional Branch Completed on BR0 that had its target address predicted. Only XL-form branches set this event.",
+	.long_desc = "Conditional Branch Completed on BR0 that had its target address predicted. Only XL-form branches set this event.",
+},
+{
+	.name = "PM_BR_PRED_TA_BR1",
+	.code = 0x40b6,
+	.short_desc = "Conditional Branch Completed on BR1 that had its target address predicted. Only XL-form branches set this event.",
+	.long_desc = "Conditional Branch Completed on BR1 that had its target address predicted. Only XL-form branches set this event.",
+},
+{
+	.name = "PM_BR_PRED_TA_CMPL",
+	.code = 0x48b4,
+	.short_desc = "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0)='1'.",
+	.long_desc = "IFU",
+},
+{
+	.name = "PM_BR_TAKEN_CMPL",
+	.code = 0x200fa,
+	.short_desc = "New event for Branch Taken",
+	.long_desc = "Branch Taken.",
+},
+{
+	.name = "PM_BR_UNCOND_BR0",
+	.code = 0x40a0,
+	.short_desc = "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.",
+	.long_desc = "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.",
+},
+{
+	.name = "PM_BR_UNCOND_BR1",
+	.code = 0x40a2,
+	.short_desc = "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.",
+	.long_desc = "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.",
+},
+{
+	.name = "PM_BR_UNCOND_CMPL",
+	.code = 0x48a0,
+	.short_desc = "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred=00 AND if_pc_br0_completed.",
+	.long_desc = "IFU",
+},
+{
+	.name = "PM_CASTOUT_ISSUED",
+	.code = 0x3094,
+	.short_desc = "Castouts issued",
+	.long_desc = "Castouts issued",
+},
+{
+	.name = "PM_CASTOUT_ISSUED_GPR",
+	.code = 0x3096,
+	.short_desc = "Castouts issued GPR",
+	.long_desc = "Castouts issued GPR",
+},
+{
+	.name = "PM_CHIP_PUMP_CPRED",
+	.code = 0x10050,
+	.short_desc = "Initial and Final Pump Scope was chip pump (prediction=correct) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types ( demand load,data,inst prefetch,inst fetch,xlate (I or d).",
+},
+{
+	.name = "PM_CLB_HELD",
+	.code = 0x2090,
+	.short_desc = "CLB Hold: Any Reason",
+	.long_desc = "CLB Hold: Any Reason",
+},
+{
+	.name = "PM_CMPLU_STALL",
+	.code = 0x4000a,
+	.short_desc = "Completion stall",
+	.long_desc = "Completion stall.",
+},
+{
+	.name = "PM_CMPLU_STALL_BRU",
+	.code = 0x4d018,
+	.short_desc = "Completion stall due to a Branch Unit",
+	.long_desc = "Completion stall due to a Branch Unit.",
+},
+{
+	.name = "PM_CMPLU_STALL_BRU_CRU",
+	.code = 0x2d018,
+	.short_desc = "Completion stall due to IFU",
+	.long_desc = "Completion stall due to IFU.",
+},
+{
+	.name = "PM_CMPLU_STALL_COQ_FULL",
+	.code = 0x30026,
+	.short_desc = "Completion stall due to CO q full",
+	.long_desc = "Completion stall due to CO q full.",
+},
+{
+	.name = "PM_CMPLU_STALL_DCACHE_MISS",
+	.code = 0x2c012,
+	.short_desc = "Completion stall by Dcache miss",
+	.long_desc = "Completion stall by Dcache miss.",
+},
+{
+	.name = "PM_CMPLU_STALL_DMISS_L21_L31",
+	.code = 0x2c018,
+	.short_desc = "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3)",
+	.long_desc = "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3).",
+},
+{
+	.name = "PM_CMPLU_STALL_DMISS_L2L3",
+	.code = 0x2c016,
+	.short_desc = "Completion stall by Dcache miss which resolved in L2/L3",
+	.long_desc = "Completion stall by Dcache miss which resolved in L2/L3.",
+},
+{
+	.name = "PM_CMPLU_STALL_DMISS_L2L3_CONFLICT",
+	.code = 0x4c016,
+	.short_desc = "Completion stall due to cache miss that resolves in the L2 or L3 with a conflict",
+	.long_desc = "Completion stall due to cache miss resolving in core's L2/L3 with a conflict.",
+},
+{
+	.name = "PM_CMPLU_STALL_DMISS_L3MISS",
+	.code = 0x4c01a,
+	.short_desc = "Completion stall due to cache miss resolving missed the L3",
+	.long_desc = "Completion stall due to cache miss resolving missed the L3.",
+},
+{
+	.name = "PM_CMPLU_STALL_DMISS_LMEM",
+	.code = 0x4c018,
+	.short_desc = "Completion stall due to cache miss that resolves in local memory",
+	.long_desc = "Completion stall due to cache miss resolving in core's Local Memory.",
+},
+{
+	.name = "PM_CMPLU_STALL_DMISS_REMOTE",
+	.code = 0x2c01c,
+	.short_desc = "Completion stall by Dcache miss which resolved from remote chip (cache or memory)",
+	.long_desc = "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3).",
+},
+{
+	.name = "PM_CMPLU_STALL_ERAT_MISS",
+	.code = 0x4c012,
+	.short_desc = "Completion stall due to LSU reject ERAT miss",
+	.long_desc = "Completion stall due to LSU reject ERAT miss.",
+},
+{
+	.name = "PM_CMPLU_STALL_FLUSH",
+	.code = 0x30038,
+	.short_desc = "completion stall due to flush by own thread",
+	.long_desc = "completion stall due to flush by own thread.",
+},
+{
+	.name = "PM_CMPLU_STALL_FXLONG",
+	.code = 0x4d016,
+	.short_desc = "Completion stall due to a long latency fixed point instruction",
+	.long_desc = "Completion stall due to a long latency fixed point instruction.",
+},
+{
+	.name = "PM_CMPLU_STALL_FXU",
+	.code = 0x2d016,
+	.short_desc = "Completion stall due to FXU",
+	.long_desc = "Completion stall due to FXU.",
+},
+{
+	.name = "PM_CMPLU_STALL_HWSYNC",
+	.code = 0x30036,
+	.short_desc = "completion stall due to hwsync",
+	.long_desc = "completion stall due to hwsync.",
+},
+{
+	.name = "PM_CMPLU_STALL_LOAD_FINISH",
+	.code = 0x4d014,
+	.short_desc = "Completion stall due to a Load finish",
+	.long_desc = "Completion stall due to a Load finish.",
+},
+{
+	.name = "PM_CMPLU_STALL_LSU",
+	.code = 0x2c010,
+	.short_desc = "Completion stall by LSU instruction",
+	.long_desc = "Completion stall by LSU instruction.",
+},
+{
+	.name = "PM_CMPLU_STALL_LWSYNC",
+	.code = 0x10036,
+	.short_desc = "completion stall due to isync/lwsync",
+	.long_desc = "completion stall due to isync/lwsync.",
+},
+{
+	.name = "PM_CMPLU_STALL_MEM_ECC_DELAY",
+	.code = 0x30028,
+	.short_desc = "Completion stall due to mem ECC delay",
+	.long_desc = "Completion stall due to mem ECC delay.",
+},
+{
+	.name = "PM_CMPLU_STALL_NO_NTF",
+	.code = 0x2e01c,
+	.short_desc = "Completion stall due to nop",
+	.long_desc = "Completion stall due to nop.",
+},
+{
+	.name = "PM_CMPLU_STALL_NTCG_FLUSH",
+	.code = 0x2e01e,
+	.short_desc = "Completion stall due to ntcg flush",
+	.long_desc = "Completion stall due to reject (load hit store).",
+},
+{
+	.name = "PM_CMPLU_STALL_OTHER_CMPL",
+	.code = 0x30006,
+	.short_desc = "Instructions core completed while this tread was stalled",
+	.long_desc = "Instructions core completed while this thread was stalled.",
+},
+{
+	.name = "PM_CMPLU_STALL_REJECT",
+	.code = 0x4c010,
+	.short_desc = "Completion stall due to LSU reject",
+	.long_desc = "Completion stall due to LSU reject.",
+},
+{
+	.name = "PM_CMPLU_STALL_REJECT_LHS",
+	.code = 0x2c01a,
+	.short_desc = "Completion stall due to reject (load hit store)",
+	.long_desc = "Completion stall due to reject (load hit store).",
+},
+{
+	.name = "PM_CMPLU_STALL_REJ_LMQ_FULL",
+	.code = 0x4c014,
+	.short_desc = "Completion stall due to LSU reject LMQ full",
+	.long_desc = "Completion stall due to LSU reject LMQ full.",
+},
+{
+	.name = "PM_CMPLU_STALL_SCALAR",
+	.code = 0x4d010,
+	.short_desc = "Completion stall due to VSU scalar instruction",
+	.long_desc = "Completion stall due to VSU scalar instruction.",
+},
+{
+	.name = "PM_CMPLU_STALL_SCALAR_LONG",
+	.code = 0x2d010,
+	.short_desc = "Completion stall due to VSU scalar long latency instruction",
+	.long_desc = "Completion stall due to VSU scalar long latency instruction.",
+},
+{
+	.name = "PM_CMPLU_STALL_STORE",
+	.code = 0x2c014,
+	.short_desc = "Completion stall by stores this includes store agen finishes in pipe LS0/LS1 and store data finishes in LS2/LS3",
+	.long_desc = "Completion stall by stores.",
+},
+{
+	.name = "PM_CMPLU_STALL_ST_FWD",
+	.code = 0x4c01c,
+	.short_desc = "Completion stall due to store forward",
+	.long_desc = "Completion stall due to store forward.",
+},
+{
+	.name = "PM_CMPLU_STALL_THRD",
+	.code = 0x1001c,
+	.short_desc = "Completion Stalled due to thread conflict. Group ready to complete but it was another thread's turn",
+	.long_desc = "Completion stall due to thread conflict.",
+},
+{
+	.name = "PM_CMPLU_STALL_VECTOR",
+	.code = 0x2d014,
+	.short_desc = "Completion stall due to VSU vector instruction",
+	.long_desc = "Completion stall due to VSU vector instruction.",
+},
+{
+	.name = "PM_CMPLU_STALL_VECTOR_LONG",
+	.code = 0x4d012,
+	.short_desc = "Completion stall due to VSU vector long instruction",
+	.long_desc = "Completion stall due to VSU vector long instruction.",
+},
+{
+	.name = "PM_CMPLU_STALL_VSU",
+	.code = 0x2d012,
+	.short_desc = "Completion stall due to VSU instruction",
+	.long_desc = "Completion stall due to VSU instruction.",
+},
+{
+	.name = "PM_CO0_ALLOC",
+	.code = 0x16083,
+	.short_desc = "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_CO0_BUSY",
+	.code = 0x16082,
+	.short_desc = "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+	.long_desc = "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+},
+{
+	.name = "PM_CO_DISP_FAIL",
+	.code = 0x517082,
+	.short_desc = "CO dispatch failed due to all CO machines being busy",
+	.long_desc = "CO dispatch failed due to all CO machines being busy",
+},
+{
+	.name = "PM_CO_TM_SC_FOOTPRINT",
+	.code = 0x527084,
+	.short_desc = "L2 did a cleanifdirty CO to the L3 (ie created an SC line in the L3)",
+	.long_desc = "L2 did a cleanifdirty CO to the L3 (ie created an SC line in the L3)",
+},
+{
+	.name = "PM_CO_USAGE",
+	.code = 0x3608a,
+	.short_desc = "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 CO machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running",
+	.long_desc = "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 CO machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running",
+},
+{
+	.name = "PM_CRU_FIN",
+	.code = 0x40066,
+	.short_desc = "IFU Finished a (non-branch) instruction",
+	.long_desc = "IFU Finished a (non-branch) instruction.",
+},
+{
+	.name = "PM_CYC",
+	.code = 0x1e,
+	.short_desc = "Cycles",
+	.long_desc = "Cycles .",
+},
+{
+	.name = "PM_DATA_ALL_CHIP_PUMP_CPRED",
+	.code = 0x61c050,
+	.short_desc = "Initial and Final Pump Scope was chip pump (prediction=correct) for either demand loads or data prefetch",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for a demand load",
+},
+{
+	.name = "PM_DATA_ALL_FROM_DL2L3_MOD",
+	.code = 0x64c048,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_DL2L3_SHR",
+	.code = 0x63c048,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_DL4",
+	.code = 0x63c04c,
+	.short_desc = "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_DMEM",
+	.code = 0x64c04c,
+	.short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L2",
+	.code = 0x61c042,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L21_MOD",
+	.code = 0x64c046,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L21_SHR",
+	.code = 0x63c046,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L2MISS_MOD",
+	.code = 0x61c04e,
+	.short_desc = "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L2_DISP_CONFLICT_LDHITST",
+	.code = 0x63c040,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L2_DISP_CONFLICT_OTHER",
+	.code = 0x64c040,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L2_MEPF",
+	.code = 0x62c040,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L2_NO_CONFLICT",
+	.code = 0x61c040,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 without conflict due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L3",
+	.code = 0x64c042,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L31_ECO_MOD",
+	.code = 0x64c044,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L31_ECO_SHR",
+	.code = 0x63c044,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L31_MOD",
+	.code = 0x62c044,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L31_SHR",
+	.code = 0x61c046,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L3MISS_MOD",
+	.code = 0x64c04e,
+	.short_desc = "The processor's data cache was reloaded from a localtion other than the local core's L3 due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from a localtion other than the local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L3_DISP_CONFLICT",
+	.code = 0x63c042,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L3_MEPF",
+	.code = 0x62c042,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_L3_NO_CONFLICT",
+	.code = 0x61c044,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 without conflict due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_LL4",
+	.code = 0x61c04c,
+	.short_desc = "The processor's data cache was reloaded from the local chip's L4 cache due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from the local chip's L4 cache due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_LMEM",
+	.code = 0x62c048,
+	.short_desc = "The processor's data cache was reloaded from the local chip's Memory due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from the local chip's Memory due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_MEMORY",
+	.code = 0x62c04c,
+	.short_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_OFF_CHIP_CACHE",
+	.code = 0x64c04a,
+	.short_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_ON_CHIP_CACHE",
+	.code = 0x61c048,
+	.short_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_RL2L3_MOD",
+	.code = 0x62c046,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_RL2L3_SHR",
+	.code = 0x61c04a,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_RL4",
+	.code = 0x62c04a,
+	.short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_FROM_RMEM",
+	.code = 0x63c04a,
+	.short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either demand loads or data prefetch",
+	.long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1",
+},
+{
+	.name = "PM_DATA_ALL_GRP_PUMP_CPRED",
+	.code = 0x62c050,
+	.short_desc = "Initial and Final Pump Scope was group pump (prediction=correct) for either demand loads or data prefetch",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was group pump for a demand load",
+},
+{
+	.name = "PM_DATA_ALL_GRP_PUMP_MPRED",
+	.code = 0x62c052,
+	.short_desc = "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for either demand loads or data prefetch",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro",
+},
+{
+	.name = "PM_DATA_ALL_GRP_PUMP_MPRED_RTY",
+	.code = 0x61c052,
+	.short_desc = "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for either demand loads or data prefetch",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor a demand load",
+},
+{
+	.name = "PM_DATA_ALL_PUMP_CPRED",
+	.code = 0x61c054,
+	.short_desc = "Pump prediction correct. Counts across all types of pumps for either demand loads or data prefetch",
+	.long_desc = "Pump prediction correct. Counts across all types of pumps for a demand load",
+},
+{
+	.name = "PM_DATA_ALL_PUMP_MPRED",
+	.code = 0x64c052,
+	.short_desc = "Pump misprediction. Counts across all types of pumps for either demand loads or data prefetch",
+	.long_desc = "Pump Mis prediction Counts across all types of pumpsfor a demand load",
+},
+{
+	.name = "PM_DATA_ALL_SYS_PUMP_CPRED",
+	.code = 0x63c050,
+	.short_desc = "Initial and Final Pump Scope was system pump (prediction=correct) for either demand loads or data prefetch",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was system pump for a demand load",
+},
+{
+	.name = "PM_DATA_ALL_SYS_PUMP_MPRED",
+	.code = 0x63c052,
+	.short_desc = "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for either demand loads or data prefetch",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or",
+},
+{
+	.name = "PM_DATA_ALL_SYS_PUMP_MPRED_RTY",
+	.code = 0x64c050,
+	.short_desc = "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for either demand loads or data prefetch",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for a demand load",
+},
+{
+	.name = "PM_DATA_CHIP_PUMP_CPRED",
+	.code = 0x1c050,
+	.short_desc = "Initial and Final Pump Scope was chip pump (prediction=correct) for a demand load",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for a demand load.",
+},
+{
+	.name = "PM_DATA_FROM_DL2L3_MOD",
+	.code = 0x4c048,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_DL2L3_SHR",
+	.code = 0x3c048,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_DL4",
+	.code = 0x3c04c,
+	.short_desc = "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_DMEM",
+	.code = 0x4c04c,
+	.short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L2",
+	.code = 0x1c042,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L21_MOD",
+	.code = 0x4c046,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L21_SHR",
+	.code = 0x3c046,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L2MISS",
+	.code = 0x200fe,
+	.short_desc = "Demand LD - L2 Miss (not L2 hit)",
+	.long_desc = "Demand LD - L2 Miss (not L2 hit).",
+},
+{
+	.name = "PM_DATA_FROM_L2MISS_MOD",
+	.code = 0x1c04e,
+	.short_desc = "The processor's data cache was reloaded from a localtion other than the local core's L2 due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L2_DISP_CONFLICT_LDHITST",
+	.code = 0x3c040,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L2_DISP_CONFLICT_OTHER",
+	.code = 0x4c040,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L2_MEPF",
+	.code = 0x2c040,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L2_NO_CONFLICT",
+	.code = 0x1c040,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 without conflict due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1 .",
+},
+{
+	.name = "PM_DATA_FROM_L3",
+	.code = 0x4c042,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L31_ECO_MOD",
+	.code = 0x4c044,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L31_ECO_SHR",
+	.code = 0x3c044,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L31_MOD",
+	.code = 0x2c044,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L31_SHR",
+	.code = 0x1c046,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L3MISS",
+	.code = 0x300fe,
+	.short_desc = "Demand LD - L3 Miss (not L2 hit and not L3 hit)",
+	.long_desc = "Demand LD - L3 Miss (not L2 hit and not L3 hit).",
+},
+{
+	.name = "PM_DATA_FROM_L3MISS_MOD",
+	.code = 0x4c04e,
+	.short_desc = "The processor's data cache was reloaded from a localtion other than the local core's L3 due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from a localtion other than the local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L3_DISP_CONFLICT",
+	.code = 0x3c042,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L3_MEPF",
+	.code = 0x2c042,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_L3_NO_CONFLICT",
+	.code = 0x1c044,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 without conflict due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_LL4",
+	.code = 0x1c04c,
+	.short_desc = "The processor's data cache was reloaded from the local chip's L4 cache due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from the local chip's L4 cache due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_LMEM",
+	.code = 0x2c048,
+	.short_desc = "The processor's data cache was reloaded from the local chip's Memory due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from the local chip's Memory due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_MEM",
+	.code = 0x400fe,
+	.short_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a demand load",
+	.long_desc = "Data cache reload from memory (including L4).",
+},
+{
+	.name = "PM_DATA_FROM_MEMORY",
+	.code = 0x2c04c,
+	.short_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_OFF_CHIP_CACHE",
+	.code = 0x4c04a,
+	.short_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a demand load",
+	.long_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_ON_CHIP_CACHE",
+	.code = 0x1c048,
+	.short_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_RL2L3_MOD",
+	.code = 0x2c046,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_RL2L3_SHR",
+	.code = 0x1c04a,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a demand load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_RL4",
+	.code = 0x2c04a,
+	.short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_FROM_RMEM",
+	.code = 0x3c04a,
+	.short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a demand load",
+	.long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.",
+},
+{
+	.name = "PM_DATA_GRP_PUMP_CPRED",
+	.code = 0x2c050,
+	.short_desc = "Initial and Final Pump Scope was group pump (prediction=correct) for a demand load",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was group pump for a demand load.",
+},
+{
+	.name = "PM_DATA_GRP_PUMP_MPRED",
+	.code = 0x2c052,
+	.short_desc = "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for a demand load",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro",
+},
+{
+	.name = "PM_DATA_GRP_PUMP_MPRED_RTY",
+	.code = 0x1c052,
+	.short_desc = "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for a demand load",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor a demand load.",
+},
+{
+	.name = "PM_DATA_PUMP_CPRED",
+	.code = 0x1c054,
+	.short_desc = "Pump prediction correct. Counts across all types of pumps for a demand load",
+	.long_desc = "Pump prediction correct. Counts across all types of pumps for a demand load.",
+},
+{
+	.name = "PM_DATA_PUMP_MPRED",
+	.code = 0x4c052,
+	.short_desc = "Pump misprediction. Counts across all types of pumps for a demand load",
+	.long_desc = "Pump Mis prediction Counts across all types of pumpsfor a demand load.",
+},
+{
+	.name = "PM_DATA_SYS_PUMP_CPRED",
+	.code = 0x3c050,
+	.short_desc = "Initial and Final Pump Scope was system pump (prediction=correct) for a demand load",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was system pump for a demand load.",
+},
+{
+	.name = "PM_DATA_SYS_PUMP_MPRED",
+	.code = 0x3c052,
+	.short_desc = "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for a demand load",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or",
+},
+{
+	.name = "PM_DATA_SYS_PUMP_MPRED_RTY",
+	.code = 0x4c050,
+	.short_desc = "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for a demand load",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for a demand load.",
+},
+{
+	.name = "PM_DATA_TABLEWALK_CYC",
+	.code = 0x3001a,
+	.short_desc = "Tablwalk Cycles (could be 1 or 2 active)",
+	.long_desc = "Data Tablewalk Active.",
+},
+{
+	.name = "PM_DC_COLLISIONS",
+	.code = 0xe0bc,
+	.short_desc = "DATA Cache collisions",
+	.long_desc = "DATA Cache collisions42",
+},
+{
+	.name = "PM_DC_PREF_STREAM_ALLOC",
+	.code = 0x1e050,
+	.short_desc = "Stream marked valid. The stream could have been allocated through the hardware prefetch mechanism or through software. This is combined ls0 and ls1",
+	.long_desc = "Stream marked valid. The stream could have been allocated through the hardware prefetch mechanism or through software. This is combined ls0 and ls1.",
+},
+{
+	.name = "PM_DC_PREF_STREAM_CONF",
+	.code = 0x2e050,
+	.short_desc = "A demand load referenced a line in an active prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software. Combine up + down",
+	.long_desc = "A demand load referenced a line in an active prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software. Combine up + down.",
+},
+{
+	.name = "PM_DC_PREF_STREAM_FUZZY_CONF",
+	.code = 0x4e050,
+	.short_desc = "A demand load referenced a line in an active fuzzy prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software.Fuzzy stream confirm (out of order effects, or pf cant keep up)",
+	.long_desc = "A demand load referenced a line in an active fuzzy prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software.Fuzzy stream confirm (out of order effects, or pf cant keep up).",
+},
+{
+	.name = "PM_DC_PREF_STREAM_STRIDED_CONF",
+	.code = 0x3e050,
+	.short_desc = "A demand load referenced a line in an active strided prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software.",
+	.long_desc = "A demand load referenced a line in an active strided prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software..",
+},
+{
+	.name = "PM_DERAT_MISS_16G",
+	.code = 0x4c054,
+	.short_desc = "Data ERAT Miss (Data TLB Access) page size 16G",
+	.long_desc = "Data ERAT Miss (Data TLB Access) page size 16G.",
+},
+{
+	.name = "PM_DERAT_MISS_16M",
+	.code = 0x3c054,
+	.short_desc = "Data ERAT Miss (Data TLB Access) page size 16M",
+	.long_desc = "Data ERAT Miss (Data TLB Access) page size 16M.",
+},
+{
+	.name = "PM_DERAT_MISS_4K",
+	.code = 0x1c056,
+	.short_desc = "Data ERAT Miss (Data TLB Access) page size 4K",
+	.long_desc = "Data ERAT Miss (Data TLB Access) page size 4K.",
+},
+{
+	.name = "PM_DERAT_MISS_64K",
+	.code = 0x2c054,
+	.short_desc = "Data ERAT Miss (Data TLB Access) page size 64K",
+	.long_desc = "Data ERAT Miss (Data TLB Access) page size 64K.",
+},
+{
+	.name = "PM_DFU",
+	.code = 0xb0ba,
+	.short_desc = "Finish DFU (all finish)",
+	.long_desc = "Finish DFU (all finish)",
+},
+{
+	.name = "PM_DFU_DCFFIX",
+	.code = 0xb0be,
+	.short_desc = "Convert from fixed opcode finish (dcffix,dcffixq)",
+	.long_desc = "Convert from fixed opcode finish (dcffix,dcffixq)",
+},
+{
+	.name = "PM_DFU_DENBCD",
+	.code = 0xb0bc,
+	.short_desc = "BCD->DPD opcode finish (denbcd, denbcdq)",
+	.long_desc = "BCD->DPD opcode finish (denbcd, denbcdq)",
+},
+{
+	.name = "PM_DFU_MC",
+	.code = 0xb0b8,
+	.short_desc = "Finish DFU multicycle",
+	.long_desc = "Finish DFU multicycle",
+},
+{
+	.name = "PM_DISP_CLB_HELD_BAL",
+	.code = 0x2092,
+	.short_desc = "Dispatch/CLB Hold: Balance",
+	.long_desc = "Dispatch/CLB Hold: Balance",
+},
+{
+	.name = "PM_DISP_CLB_HELD_RES",
+	.code = 0x2094,
+	.short_desc = "Dispatch/CLB Hold: Resource",
+	.long_desc = "Dispatch/CLB Hold: Resource",
+},
+{
+	.name = "PM_DISP_CLB_HELD_SB",
+	.code = 0x20a8,
+	.short_desc = "Dispatch/CLB Hold: Scoreboard",
+	.long_desc = "Dispatch/CLB Hold: Scoreboard",
+},
+{
+	.name = "PM_DISP_CLB_HELD_SYNC",
+	.code = 0x2098,
+	.short_desc = "Dispatch/CLB Hold: Sync type instruction",
+	.long_desc = "Dispatch/CLB Hold: Sync type instruction",
+},
+{
+	.name = "PM_DISP_CLB_HELD_TLBIE",
+	.code = 0x2096,
+	.short_desc = "Dispatch Hold: Due to TLBIE",
+	.long_desc = "Dispatch Hold: Due to TLBIE",
+},
+{
+	.name = "PM_DISP_HELD",
+	.code = 0x10006,
+	.short_desc = "Dispatch Held",
+	.long_desc = "Dispatch Held.",
+},
+{
+	.name = "PM_DISP_HELD_IQ_FULL",
+	.code = 0x20006,
+	.short_desc = "Dispatch held due to Issue q full",
+	.long_desc = "Dispatch held due to Issue q full.",
+},
+{
+	.name = "PM_DISP_HELD_MAP_FULL",
+	.code = 0x1002a,
+	.short_desc = "Dispatch for this thread was held because the Mappers were full",
+	.long_desc = "Dispatch held due to Mapper full.",
+},
+{
+	.name = "PM_DISP_HELD_SRQ_FULL",
+	.code = 0x30018,
+	.short_desc = "Dispatch held due SRQ no room",
+	.long_desc = "Dispatch held due SRQ no room.",
+},
+{
+	.name = "PM_DISP_HELD_SYNC_HOLD",
+	.code = 0x4003c,
+	.short_desc = "Dispatch held due to SYNC hold",
+	.long_desc = "Dispatch held due to SYNC hold.",
+},
+{
+	.name = "PM_DISP_HOLD_GCT_FULL",
+	.code = 0x30a6,
+	.short_desc = "Dispatch Hold Due to no space in the GCT",
+	.long_desc = "Dispatch Hold Due to no space in the GCT",
+},
+{
+	.name = "PM_DISP_WT",
+	.code = 0x30008,
+	.short_desc = "Dispatched Starved",
+	.long_desc = "Dispatched Starved (not held, nothing to dispatch).",
+},
+{
+	.name = "PM_DPTEG_FROM_DL2L3_MOD",
+	.code = 0x4e048,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_DL2L3_SHR",
+	.code = 0x3e048,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_DL4",
+	.code = 0x3e04c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_DMEM",
+	.code = 0x4e04c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L2",
+	.code = 0x1e042,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L21_MOD",
+	.code = 0x4e046,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L21_SHR",
+	.code = 0x3e046,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L2MISS",
+	.code = 0x1e04e,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+	.code = 0x3e040,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+	.code = 0x4e040,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L2_MEPF",
+	.code = 0x2e040,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L2_NO_CONFLICT",
+	.code = 0x1e040,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L3",
+	.code = 0x4e042,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L31_ECO_MOD",
+	.code = 0x4e044,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L31_ECO_SHR",
+	.code = 0x3e044,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L31_MOD",
+	.code = 0x2e044,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L31_SHR",
+	.code = 0x1e046,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L3MISS",
+	.code = 0x4e04e,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L3_DISP_CONFLICT",
+	.code = 0x3e042,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L3_MEPF",
+	.code = 0x2e042,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_L3_NO_CONFLICT",
+	.code = 0x1e044,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_LL4",
+	.code = 0x1e04c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_LMEM",
+	.code = 0x2e048,
+	.short_desc = "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_MEMORY",
+	.code = 0x2e04c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_OFF_CHIP_CACHE",
+	.code = 0x4e04a,
+	.short_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_ON_CHIP_CACHE",
+	.code = 0x1e048,
+	.short_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_RL2L3_MOD",
+	.code = 0x2e046,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_RL2L3_SHR",
+	.code = 0x1e04a,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_RL4",
+	.code = 0x2e04a,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request.",
+},
+{
+	.name = "PM_DPTEG_FROM_RMEM",
+	.code = 0x3e04a,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request.",
+},
+{
+	.name = "PM_DSLB_MISS",
+	.code = 0xd094,
+	.short_desc = "Data SLB Miss - Total of all segment sizes",
+	.long_desc = "Data SLB Miss - Total of all segment sizesData SLB misses",
+},
+{
+	.name = "PM_DTLB_MISS",
+	.code = 0x300fc,
+	.short_desc = "Data PTEG reload",
+	.long_desc = "Data PTEG Reloaded (DTLB Miss).",
+},
+{
+	.name = "PM_DTLB_MISS_16G",
+	.code = 0x1c058,
+	.short_desc = "Data TLB Miss page size 16G",
+	.long_desc = "Data TLB Miss page size 16G.",
+},
+{
+	.name = "PM_DTLB_MISS_16M",
+	.code = 0x4c056,
+	.short_desc = "Data TLB Miss page size 16M",
+	.long_desc = "Data TLB Miss page size 16M.",
+},
+{
+	.name = "PM_DTLB_MISS_4K",
+	.code = 0x2c056,
+	.short_desc = "Data TLB Miss page size 4k",
+	.long_desc = "Data TLB Miss page size 4k.",
+},
+{
+	.name = "PM_DTLB_MISS_64K",
+	.code = 0x3c056,
+	.short_desc = "Data TLB Miss page size 64K",
+	.long_desc = "Data TLB Miss page size 64K.",
+},
+{
+	.name = "PM_EAT_FORCE_MISPRED",
+	.code = 0x50a8,
+	.short_desc = "XL-form branch was mispredicted due to the predicted target address missing from EAT. The EAT forces a mispredict in this case since there is no predicated target to validate. This is a rare case that may occur when the EAT is full and a branch is issue",
+	.long_desc = "XL-form branch was mispredicted due to the predicted target address missing from EAT. The EAT forces a mispredict in this case since there is no predicated target to validate. This is a rare case that may occur when the EAT is full and a branch is",
+},
+{
+	.name = "PM_EAT_FULL_CYC",
+	.code = 0x4084,
+	.short_desc = "Cycles No room in EAT",
+	.long_desc = "Cycles No room in EATSet on bank conflict and case where no ibuffers available.",
+},
+{
+	.name = "PM_EE_OFF_EXT_INT",
+	.code = 0x2080,
+	.short_desc = "Ee off and external interrupt",
+	.long_desc = "Ee off and external interrupt",
+},
+{
+	.name = "PM_EXT_INT",
+	.code = 0x200f8,
+	.short_desc = "external interrupt",
+	.long_desc = "external interrupt.",
+},
+{
+	.name = "PM_FAV_TBEGIN",
+	.code = 0x20b4,
+	.short_desc = "Dispatch time Favored tbegin",
+	.long_desc = "Dispatch time Favored tbegin",
+},
+{
+	.name = "PM_FLOP",
+	.code = 0x100f4,
+	.short_desc = "Floating Point Operation Finished",
+	.long_desc = "Floating Point Operations Finished.",
+},
+{
+	.name = "PM_FLOP_SUM_SCALAR",
+	.code = 0xa0ae,
+	.short_desc = "flops summary scalar instructions",
+	.long_desc = "flops summary scalar instructions",
+},
+{
+	.name = "PM_FLOP_SUM_VEC",
+	.code = 0xa0ac,
+	.short_desc = "flops summary vector instructions",
+	.long_desc = "flops summary vector instructions",
+},
+{
+	.name = "PM_FLUSH",
+	.code = 0x400f8,
+	.short_desc = "Flush (any type)",
+	.long_desc = "Flush (any type).",
+},
+{
+	.name = "PM_FLUSH_BR_MPRED",
+	.code = 0x2084,
+	.short_desc = "Flush caused by branch mispredict",
+	.long_desc = "Flush caused by branch mispredict",
+},
+{
+	.name = "PM_FLUSH_COMPLETION",
+	.code = 0x30012,
+	.short_desc = "Completion Flush",
+	.long_desc = "Completion Flush.",
+},
+{
+	.name = "PM_FLUSH_DISP",
+	.code = 0x2082,
+	.short_desc = "Dispatch flush",
+	.long_desc = "Dispatch flush",
+},
+{
+	.name = "PM_FLUSH_DISP_SB",
+	.code = 0x208c,
+	.short_desc = "Dispatch Flush: Scoreboard",
+	.long_desc = "Dispatch Flush: Scoreboard",
+},
+{
+	.name = "PM_FLUSH_DISP_SYNC",
+	.code = 0x2088,
+	.short_desc = "Dispatch Flush: Sync",
+	.long_desc = "Dispatch Flush: Sync",
+},
+{
+	.name = "PM_FLUSH_DISP_TLBIE",
+	.code = 0x208a,
+	.short_desc = "Dispatch Flush: TLBIE",
+	.long_desc = "Dispatch Flush: TLBIE",
+},
+{
+	.name = "PM_FLUSH_LSU",
+	.code = 0x208e,
+	.short_desc = "Flush initiated by LSU",
+	.long_desc = "Flush initiated by LSU",
+},
+{
+	.name = "PM_FLUSH_PARTIAL",
+	.code = 0x2086,
+	.short_desc = "Partial flush",
+	.long_desc = "Partial flush",
+},
+{
+	.name = "PM_FPU0_FCONV",
+	.code = 0xa0b0,
+	.short_desc = "Convert instruction executed",
+	.long_desc = "Convert instruction executed",
+},
+{
+	.name = "PM_FPU0_FEST",
+	.code = 0xa0b8,
+	.short_desc = "Estimate instruction executed",
+	.long_desc = "Estimate instruction executed",
+},
+{
+	.name = "PM_FPU0_FRSP",
+	.code = 0xa0b4,
+	.short_desc = "Round to single precision instruction executed",
+	.long_desc = "Round to single precision instruction executed",
+},
+{
+	.name = "PM_FPU1_FCONV",
+	.code = 0xa0b2,
+	.short_desc = "Convert instruction executed",
+	.long_desc = "Convert instruction executed",
+},
+{
+	.name = "PM_FPU1_FEST",
+	.code = 0xa0ba,
+	.short_desc = "Estimate instruction executed",
+	.long_desc = "Estimate instruction executed",
+},
+{
+	.name = "PM_FPU1_FRSP",
+	.code = 0xa0b6,
+	.short_desc = "Round to single precision instruction executed",
+	.long_desc = "Round to single precision instruction executed",
+},
+{
+	.name = "PM_FREQ_DOWN",
+	.code = 0x3000c,
+	.short_desc = "Power Management: Below Threshold B",
+	.long_desc = "Frequency is being slewed down due to Power Management.",
+},
+{
+	.name = "PM_FREQ_UP",
+	.code = 0x4000c,
+	.short_desc = "Power Management: Above Threshold A",
+	.long_desc = "Frequency is being slewed up due to Power Management.",
+},
+{
+	.name = "PM_FUSION_TOC_GRP0_1",
+	.code = 0x50b0,
+	.short_desc = "One pair of instructions fused with TOC in Group0",
+	.long_desc = "One pair of instructions fused with TOC in Group0",
+},
+{
+	.name = "PM_FUSION_TOC_GRP0_2",
+	.code = 0x50ae,
+	.short_desc = "Two pairs of instructions fused with TOCin Group0",
+	.long_desc = "Two pairs of instructions fused with TOCin Group0",
+},
+{
+	.name = "PM_FUSION_TOC_GRP0_3",
+	.code = 0x50ac,
+	.short_desc = "Three pairs of instructions fused with TOC in Group0",
+	.long_desc = "Three pairs of instructions fused with TOC in Group0",
+},
+{
+	.name = "PM_FUSION_TOC_GRP1_1",
+	.code = 0x50b2,
+	.short_desc = "One pair of instructions fused with TOX in Group1",
+	.long_desc = "One pair of instructions fused with TOX in Group1",
+},
+{
+	.name = "PM_FUSION_VSX_GRP0_1",
+	.code = 0x50b8,
+	.short_desc = "One pair of instructions fused with VSX in Group0",
+	.long_desc = "One pair of instructions fused with VSX in Group0",
+},
+{
+	.name = "PM_FUSION_VSX_GRP0_2",
+	.code = 0x50b6,
+	.short_desc = "Two pairs of instructions fused with VSX in Group0",
+	.long_desc = "Two pairs of instructions fused with VSX in Group0",
+},
+{
+	.name = "PM_FUSION_VSX_GRP0_3",
+	.code = 0x50b4,
+	.short_desc = "Three pairs of instructions fused with VSX in Group0",
+	.long_desc = "Three pairs of instructions fused with VSX in Group0",
+},
+{
+	.name = "PM_FUSION_VSX_GRP1_1",
+	.code = 0x50ba,
+	.short_desc = "One pair of instructions fused with VSX in Group1",
+	.long_desc = "One pair of instructions fused with VSX in Group1",
+},
+{
+	.name = "PM_FXU0_BUSY_FXU1_IDLE",
+	.code = 0x3000e,
+	.short_desc = "fxu0 busy and fxu1 idle",
+	.long_desc = "fxu0 busy and fxu1 idle.",
+},
+{
+	.name = "PM_FXU0_FIN",
+	.code = 0x10004,
+	.short_desc = "The fixed point unit Unit 0 finished an instruction. Instructions that finish may not necessary complete.",
+	.long_desc = "FXU0 Finished.",
+},
+{
+	.name = "PM_FXU1_BUSY_FXU0_IDLE",
+	.code = 0x4000e,
+	.short_desc = "fxu0 idle and fxu1 busy.",
+	.long_desc = "fxu0 idle and fxu1 busy. .",
+},
+{
+	.name = "PM_FXU1_FIN",
+	.code = 0x40004,
+	.short_desc = "FXU1 Finished",
+	.long_desc = "FXU1 Finished.",
+},
+{
+	.name = "PM_FXU_BUSY",
+	.code = 0x2000e,
+	.short_desc = "fxu0 busy and fxu1 busy.",
+	.long_desc = "fxu0 busy and fxu1 busy..",
+},
+{
+	.name = "PM_FXU_IDLE",
+	.code = 0x1000e,
+	.short_desc = "fxu0 idle and fxu1 idle",
+	.long_desc = "fxu0 idle and fxu1 idle.",
+},
+{
+	.name = "PM_GCT_EMPTY_CYC",
+	.code = 0x20008,
+	.short_desc = "No itags assigned either thread (GCT Empty)",
+	.long_desc = "No itags assigned either thread (GCT Empty).",
+},
+{
+	.name = "PM_GCT_MERGE",
+	.code = 0x30a4,
+	.short_desc = "Group dispatched on a merged GCT empty. GCT entries can be merged only within the same thread",
+	.long_desc = "Group dispatched on a merged GCT empty. GCT entries can be merged only within the same thread",
+},
+{
+	.name = "PM_GCT_NOSLOT_BR_MPRED",
+	.code = 0x4d01e,
+	.short_desc = "Gct empty for this thread due to branch mispred",
+	.long_desc = "Gct empty for this thread due to branch mispred.",
+},
+{
+	.name = "PM_GCT_NOSLOT_BR_MPRED_ICMISS",
+	.code = 0x4d01a,
+	.short_desc = "Gct empty for this thread due to Icache Miss and branch mispred",
+	.long_desc = "Gct empty for this thread due to Icache Miss and branch mispred.",
+},
+{
+	.name = "PM_GCT_NOSLOT_CYC",
+	.code = 0x100f8,
+	.short_desc = "No itags assigned",
+	.long_desc = "Pipeline empty (No itags assigned , no GCT slots used).",
+},
+{
+	.name = "PM_GCT_NOSLOT_DISP_HELD_ISSQ",
+	.code = 0x2d01e,
+	.short_desc = "Gct empty for this thread due to dispatch hold on this thread due to Issue q full",
+	.long_desc = "Gct empty for this thread due to dispatch hold on this thread due to Issue q full.",
+},
+{
+	.name = "PM_GCT_NOSLOT_DISP_HELD_MAP",
+	.code = 0x4d01c,
+	.short_desc = "Gct empty for this thread due to dispatch hold on this thread due to Mapper full",
+	.long_desc = "Gct empty for this thread due to dispatch hold on this thread due to Mapper full.",
+},
+{
+	.name = "PM_GCT_NOSLOT_DISP_HELD_OTHER",
+	.code = 0x2e010,
+	.short_desc = "Gct empty for this thread due to dispatch hold on this thread due to sync",
+	.long_desc = "Gct empty for this thread due to dispatch hold on this thread due to sync.",
+},
+{
+	.name = "PM_GCT_NOSLOT_DISP_HELD_SRQ",
+	.code = 0x2d01c,
+	.short_desc = "Gct empty for this thread due to dispatch hold on this thread due to SRQ full",
+	.long_desc = "Gct empty for this thread due to dispatch hold on this thread due to SRQ full.",
+},
+{
+	.name = "PM_GCT_NOSLOT_IC_L3MISS",
+	.code = 0x4e010,
+	.short_desc = "Gct empty for this thread due to icach l3 miss",
+	.long_desc = "Gct empty for this thread due to icach l3 miss.",
+},
+{
+	.name = "PM_GCT_NOSLOT_IC_MISS",
+	.code = 0x2d01a,
+	.short_desc = "Gct empty for this thread due to Icache Miss",
+	.long_desc = "Gct empty for this thread due to Icache Miss.",
+},
+{
+	.name = "PM_GCT_UTIL_11_14_ENTRIES",
+	.code = 0x20a2,
+	.short_desc = "GCT Utilization 11-14 entries",
+	.long_desc = "GCT Utilization 11-14 entries",
+},
+{
+	.name = "PM_GCT_UTIL_15_17_ENTRIES",
+	.code = 0x20a4,
+	.short_desc = "GCT Utilization 15-17 entries",
+	.long_desc = "GCT Utilization 15-17 entries",
+},
+{
+	.name = "PM_GCT_UTIL_18_ENTRIES",
+	.code = 0x20a6,
+	.short_desc = "GCT Utilization 18+ entries",
+	.long_desc = "GCT Utilization 18+ entries",
+},
+{
+	.name = "PM_GCT_UTIL_1_2_ENTRIES",
+	.code = 0x209c,
+	.short_desc = "GCT Utilization 1-2 entries",
+	.long_desc = "GCT Utilization 1-2 entries",
+},
+{
+	.name = "PM_GCT_UTIL_3_6_ENTRIES",
+	.code = 0x209e,
+	.short_desc = "GCT Utilization 3-6 entries",
+	.long_desc = "GCT Utilization 3-6 entries",
+},
+{
+	.name = "PM_GCT_UTIL_7_10_ENTRIES",
+	.code = 0x20a0,
+	.short_desc = "GCT Utilization 7-10 entries",
+	.long_desc = "GCT Utilization 7-10 entries",
+},
+{
+	.name = "PM_GRP_BR_MPRED_NONSPEC",
+	.code = 0x1000a,
+	.short_desc = "Group experienced non-speculative branch redirect",
+	.long_desc = "Group experienced Non-speculative br mispredicct.",
+},
+{
+	.name = "PM_GRP_CMPL",
+	.code = 0x30004,
+	.short_desc = "group completed",
+	.long_desc = "group completed.",
+},
+{
+	.name = "PM_GRP_DISP",
+	.code = 0x3000a,
+	.short_desc = "group dispatch",
+	.long_desc = "dispatch_success (Group Dispatched).",
+},
+{
+	.name = "PM_GRP_IC_MISS_NONSPEC",
+	.code = 0x1000c,
+	.short_desc = "Group experienced non-speculative I cache miss",
+	.long_desc = "Group experi enced Non-specu lative I cache miss.",
+},
+{
+	.name = "PM_GRP_MRK",
+	.code = 0x10130,
+	.short_desc = "Instruction Marked",
+	.long_desc = "Instruction marked in idu.",
+},
+{
+	.name = "PM_GRP_NON_FULL_GROUP",
+	.code = 0x509c,
+	.short_desc = "GROUPs where we did not have 6 non branch instructions in the group(ST mode), in SMT mode 3 non branches",
+	.long_desc = "GROUPs where we did not have 6 non branch instructions in the group(ST mode), in SMT mode 3 non branches",
+},
+{
+	.name = "PM_GRP_PUMP_CPRED",
+	.code = 0x20050,
+	.short_desc = "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).",
+},
+{
+	.name = "PM_GRP_PUMP_MPRED",
+	.code = 0x20052,
+	.short_desc = "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro",
+},
+{
+	.name = "PM_GRP_PUMP_MPRED_RTY",
+	.code = 0x10052,
+	.short_desc = "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).",
+},
+{
+	.name = "PM_GRP_TERM_2ND_BRANCH",
+	.code = 0x50a4,
+	.short_desc = "There were enough instructions in the Ibuffer, but 2nd branch ends group",
+	.long_desc = "There were enough instructions in the Ibuffer, but 2nd branch ends group",
+},
+{
+	.name = "PM_GRP_TERM_FPU_AFTER_BR",
+	.code = 0x50a6,
+	.short_desc = "There were enough instructions in the Ibuffer, but FPU OP IN same group after a branch terminates a group, cant do partial flushes",
+	.long_desc = "There were enough instructions in the Ibuffer, but FPU OP IN same group after a branch terminates a group, cant do partial flushes",
+},
+{
+	.name = "PM_GRP_TERM_NOINST",
+	.code = 0x509e,
+	.short_desc = "Do not fill every slot in the group, Not enough instructions in the Ibuffer. This includes cases where the group started with enough instructions, but some got knocked out by a cache miss or branch redirect (which would also empty the Ibuffer).",
+	.long_desc = "Do not fill every slot in the group, Not enough instructions in the Ibuffer. This includes cases where the group started with enough instructions, but some got knocked out by a cache miss or branch redirect (which would also empty the Ibuffer).",
+},
+{
+	.name = "PM_GRP_TERM_OTHER",
+	.code = 0x50a0,
+	.short_desc = "There were enough instructions in the Ibuffer, but the group terminated early for some other reason, most likely due to a First or Last.",
+	.long_desc = "There were enough instructions in the Ibuffer, but the group terminated early for some other reason, most likely due to a First or Last.",
+},
+{
+	.name = "PM_GRP_TERM_SLOT_LIMIT",
+	.code = 0x50a2,
+	.short_desc = "There were enough instructions in the Ibuffer, but 3 src RA/RB/RC , 2 way crack caused a group termination",
+	.long_desc = "There were enough instructions in the Ibuffer, but 3 src RA/RB/RC , 2 way crack caused a group termination",
+},
+{
+	.name = "PM_HV_CYC",
+	.code = 0x2000a,
+	.short_desc = "Cycles in which msr_hv is high. Note that this event does not take msr_pr into consideration",
+	.long_desc = "cycles in hypervisor mode .",
+},
+{
+	.name = "PM_IBUF_FULL_CYC",
+	.code = 0x4086,
+	.short_desc = "Cycles No room in ibuff",
+	.long_desc = "Cycles No room in ibufffully qualified tranfer (if5 valid).",
+},
+{
+	.name = "PM_IC_DEMAND_CYC",
+	.code = 0x10018,
+	.short_desc = "Cycles when a demand ifetch was pending",
+	.long_desc = "Demand ifetch pending.",
+},
+{
+	.name = "PM_IC_DEMAND_L2_BHT_REDIRECT",
+	.code = 0x4098,
+	.short_desc = "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles)",
+	.long_desc = "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles)",
+},
+{
+	.name = "PM_IC_DEMAND_L2_BR_REDIRECT",
+	.code = 0x409a,
+	.short_desc = "L2 I cache demand request due to branch Mispredict ( 15 cycle path)",
+	.long_desc = "L2 I cache demand request due to branch Mispredict ( 15 cycle path)",
+},
+{
+	.name = "PM_IC_DEMAND_REQ",
+	.code = 0x4088,
+	.short_desc = "Demand Instruction fetch request",
+	.long_desc = "Demand Instruction fetch request",
+},
+{
+	.name = "PM_IC_INVALIDATE",
+	.code = 0x508a,
+	.short_desc = "Ic line invalidated",
+	.long_desc = "Ic line invalidated",
+},
+{
+	.name = "PM_IC_PREF_CANCEL_HIT",
+	.code = 0x4092,
+	.short_desc = "Prefetch Canceled due to icache hit",
+	.long_desc = "Prefetch Canceled due to icache hit",
+},
+{
+	.name = "PM_IC_PREF_CANCEL_L2",
+	.code = 0x4094,
+	.short_desc = "L2 Squashed request",
+	.long_desc = "L2 Squashed request",
+},
+{
+	.name = "PM_IC_PREF_CANCEL_PAGE",
+	.code = 0x4090,
+	.short_desc = "Prefetch Canceled due to page boundary",
+	.long_desc = "Prefetch Canceled due to page boundary",
+},
+{
+	.name = "PM_IC_PREF_REQ",
+	.code = 0x408a,
+	.short_desc = "Instruction prefetch requests",
+	.long_desc = "Instruction prefetch requests",
+},
+{
+	.name = "PM_IC_PREF_WRITE",
+	.code = 0x408e,
+	.short_desc = "Instruction prefetch written into IL1",
+	.long_desc = "Instruction prefetch written into IL1",
+},
+{
+	.name = "PM_IC_RELOAD_PRIVATE",
+	.code = 0x4096,
+	.short_desc = "Reloading line was brought in private for a specific thread. Most lines are brought in shared for all eight thrreads. If RA does not match then invalidates and then brings it shared to other thread. In P7 line brought in private , then line was invalidat",
+	.long_desc = "Reloading line was brought in private for a specific thread. Most lines are brought in shared for all eight thrreads. If RA does not match then invalidates and then brings it shared to other thread. In P7 line brought in private , then line was inv",
+},
+{
+	.name = "PM_IERAT_RELOAD",
+	.code = 0x100f6,
+	.short_desc = "Number of I-ERAT reloads",
+	.long_desc = "IERAT Reloaded (Miss).",
+},
+{
+	.name = "PM_IERAT_RELOAD_16M",
+	.code = 0x4006a,
+	.short_desc = "IERAT Reloaded (Miss) for a 16M page",
+	.long_desc = "IERAT Reloaded (Miss) for a 16M page.",
+},
+{
+	.name = "PM_IERAT_RELOAD_4K",
+	.code = 0x20064,
+	.short_desc = "IERAT Miss (Not implemented as DI on POWER6)",
+	.long_desc = "IERAT Reloaded (Miss) for a 4k page.",
+},
+{
+	.name = "PM_IERAT_RELOAD_64K",
+	.code = 0x3006a,
+	.short_desc = "IERAT Reloaded (Miss) for a 64k page",
+	.long_desc = "IERAT Reloaded (Miss) for a 64k page.",
+},
+{
+	.name = "PM_IFETCH_THROTTLE",
+	.code = 0x3405e,
+	.short_desc = "Cycles in which Instruction fetch throttle was active",
+	.long_desc = "Cycles instruction fecth was throttled in IFU.",
+},
+{
+	.name = "PM_IFU_L2_TOUCH",
+	.code = 0x5088,
+	.short_desc = "L2 touch to update MRU on a line",
+	.long_desc = "L2 touch to update MRU on a line",
+},
+{
+	.name = "PM_INST_ALL_CHIP_PUMP_CPRED",
+	.code = 0x514050,
+	.short_desc = "Initial and Final Pump Scope was chip pump (prediction=correct) for instruction fetches and prefetches",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for an instruction fetch",
+},
+{
+	.name = "PM_INST_ALL_FROM_DL2L3_MOD",
+	.code = 0x544048,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_DL2L3_SHR",
+	.code = 0x534048,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_DL4",
+	.code = 0x53404c,
+	.short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_DMEM",
+	.code = 0x54404c,
+	.short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L2",
+	.code = 0x514042,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L21_MOD",
+	.code = 0x544046,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L21_SHR",
+	.code = 0x534046,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L2MISS",
+	.code = 0x51404e,
+	.short_desc = "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L2_DISP_CONFLICT_LDHITST",
+	.code = 0x534040,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L2_DISP_CONFLICT_OTHER",
+	.code = 0x544040,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L2_MEPF",
+	.code = 0x524040,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L2_NO_CONFLICT",
+	.code = 0x514040,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 without conflict due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L3",
+	.code = 0x544042,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L3 due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L31_ECO_MOD",
+	.code = 0x544044,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L31_ECO_SHR",
+	.code = 0x534044,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L31_MOD",
+	.code = 0x524044,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L31_SHR",
+	.code = 0x514046,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L3MISS_MOD",
+	.code = 0x54404e,
+	.short_desc = "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to a instruction fetch",
+	.long_desc = "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L3_DISP_CONFLICT",
+	.code = 0x534042,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L3_MEPF",
+	.code = 0x524042,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_L3_NO_CONFLICT",
+	.code = 0x514044,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L3 without conflict due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L3 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_LL4",
+	.code = 0x51404c,
+	.short_desc = "The processor's Instruction cache was reloaded from the local chip's L4 cache due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from the local chip's L4 cache due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_LMEM",
+	.code = 0x524048,
+	.short_desc = "The processor's Instruction cache was reloaded from the local chip's Memory due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from the local chip's Memory due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_MEMORY",
+	.code = 0x52404c,
+	.short_desc = "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_OFF_CHIP_CACHE",
+	.code = 0x54404a,
+	.short_desc = "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_ON_CHIP_CACHE",
+	.code = 0x514048,
+	.short_desc = "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_RL2L3_MOD",
+	.code = 0x524046,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_RL2L3_SHR",
+	.code = 0x51404a,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_RL4",
+	.code = 0x52404a,
+	.short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_FROM_RMEM",
+	.code = 0x53404a,
+	.short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to instruction fetches and prefetches",
+	.long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1",
+},
+{
+	.name = "PM_INST_ALL_GRP_PUMP_CPRED",
+	.code = 0x524050,
+	.short_desc = "Initial and Final Pump Scope was group pump (prediction=correct) for instruction fetches and prefetches",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was group pump for an instruction fetch",
+},
+{
+	.name = "PM_INST_ALL_GRP_PUMP_MPRED",
+	.code = 0x524052,
+	.short_desc = "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for instruction fetches and prefetches",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro",
+},
+{
+	.name = "PM_INST_ALL_GRP_PUMP_MPRED_RTY",
+	.code = 0x514052,
+	.short_desc = "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for instruction fetches and prefetches",
+	.long_desc = "Final Pump Scope (Group) to get data sourced, ended up larger than Initial Pump Scope (Chip). Final pump was group pump and initial pump was chip pump for an instruction fetch",
+},
+{
+	.name = "PM_INST_ALL_PUMP_CPRED",
+	.code = 0x514054,
+	.short_desc = "Pump prediction correct. Counts across all types of pumps for instruction fetches and prefetches",
+	.long_desc = "Pump prediction correct. Counts across all types of pumps for an instruction fetch",
+},
+{
+	.name = "PM_INST_ALL_PUMP_MPRED",
+	.code = 0x544052,
+	.short_desc = "Pump misprediction. Counts across all types of pumps for instruction fetches and prefetches",
+	.long_desc = "Pump misprediction. Counts across all types of pumps for an instruction fetch",
+},
+{
+	.name = "PM_INST_ALL_SYS_PUMP_CPRED",
+	.code = 0x534050,
+	.short_desc = "Initial and Final Pump Scope was system pump (prediction=correct) for instruction fetches and prefetches",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was system pump for an instruction fetch",
+},
+{
+	.name = "PM_INST_ALL_SYS_PUMP_MPRED",
+	.code = 0x534052,
+	.short_desc = "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for instruction fetches and prefetches",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or",
+},
+{
+	.name = "PM_INST_ALL_SYS_PUMP_MPRED_RTY",
+	.code = 0x544050,
+	.short_desc = "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for instruction fetches and prefetches",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for an instruction fetch",
+},
+{
+	.name = "PM_INST_CHIP_PUMP_CPRED",
+	.code = 0x14050,
+	.short_desc = "Initial and Final Pump Scope was chip pump (prediction=correct) for an instruction fetch",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for an instruction fetch.",
+},
+{
+	.name = "PM_INST_CMPL",
+	.code = 0x2,
+	.short_desc = "Number of PowerPC Instructions that completed.",
+	.long_desc = "PPC Instructions Finished (completed).",
+},
+{
+	.name = "PM_INST_DISP",
+	.code = 0x200f2,
+	.short_desc = "PPC Dispatched",
+	.long_desc = "PPC Dispatched.",
+},
+{
+	.name = "PM_INST_FROM_DL2L3_MOD",
+	.code = 0x44048,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_DL2L3_SHR",
+	.code = 0x34048,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_DL4",
+	.code = 0x3404c,
+	.short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_DMEM",
+	.code = 0x4404c,
+	.short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L1",
+	.code = 0x4080,
+	.short_desc = "Instruction fetches from L1",
+	.long_desc = "Instruction fetches from L1",
+},
+{
+	.name = "PM_INST_FROM_L2",
+	.code = 0x14042,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L21_MOD",
+	.code = 0x44046,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L21_SHR",
+	.code = 0x34046,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L2MISS",
+	.code = 0x1404e,
+	.short_desc = "The processor's Instruction cache was reloaded from a location other than the local core's L2 due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from a location other than the local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1.",
+},
+{
+	.name = "PM_INST_FROM_L2_DISP_CONFLICT_LDHITST",
+	.code = 0x34040,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L2_DISP_CONFLICT_OTHER",
+	.code = 0x44040,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L2_MEPF",
+	.code = 0x24040,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1.",
+},
+{
+	.name = "PM_INST_FROM_L2_NO_CONFLICT",
+	.code = 0x14040,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L2 without conflict due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L2 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L3",
+	.code = 0x44042,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L3 due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L31_ECO_MOD",
+	.code = 0x44044,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L31_ECO_SHR",
+	.code = 0x34044,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L31_MOD",
+	.code = 0x24044,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L31_SHR",
+	.code = 0x14046,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L3MISS",
+	.code = 0x300fa,
+	.short_desc = "Marked instruction was reloaded from a location beyond the local chiplet",
+	.long_desc = "Inst from L3 miss.",
+},
+{
+	.name = "PM_INST_FROM_L3MISS_MOD",
+	.code = 0x4404e,
+	.short_desc = "The processor's Instruction cache was reloaded from a location other than the local core's L3 due to an instruction fetch",
+	.long_desc = "The processor's Instruction cache was reloaded from a location other than the local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1.",
+},
+{
+	.name = "PM_INST_FROM_L3_DISP_CONFLICT",
+	.code = 0x34042,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_L3_MEPF",
+	.code = 0x24042,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1.",
+},
+{
+	.name = "PM_INST_FROM_L3_NO_CONFLICT",
+	.code = 0x14044,
+	.short_desc = "The processor's Instruction cache was reloaded from local core's L3 without conflict due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from local core's L3 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_LL4",
+	.code = 0x1404c,
+	.short_desc = "The processor's Instruction cache was reloaded from the local chip's L4 cache due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from the local chip's L4 cache due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_LMEM",
+	.code = 0x24048,
+	.short_desc = "The processor's Instruction cache was reloaded from the local chip's Memory due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from the local chip's Memory due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_MEMORY",
+	.code = 0x2404c,
+	.short_desc = "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_OFF_CHIP_CACHE",
+	.code = 0x4404a,
+	.short_desc = "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_ON_CHIP_CACHE",
+	.code = 0x14048,
+	.short_desc = "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_RL2L3_MOD",
+	.code = 0x24046,
+	.short_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_RL2L3_SHR",
+	.code = 0x1404a,
+	.short_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_RL4",
+	.code = 0x2404a,
+	.short_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_FROM_RMEM",
+	.code = 0x3404a,
+	.short_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to an instruction fetch (not prefetch)",
+	.long_desc = "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .",
+},
+{
+	.name = "PM_INST_GRP_PUMP_CPRED",
+	.code = 0x24050,
+	.short_desc = "Initial and Final Pump Scope was group pump (prediction=correct) for an instruction fetch",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was group pump for an instruction fetch.",
+},
+{
+	.name = "PM_INST_GRP_PUMP_MPRED",
+	.code = 0x24052,
+	.short_desc = "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for an instruction fetch",
+	.long_desc = "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro",
+},
+{
+	.name = "PM_INST_GRP_PUMP_MPRED_RTY",
+	.code = 0x14052,
+	.short_desc = "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for an instruction fetch",
+	.long_desc = "Final Pump Scope (Group) to get data sourced, ended up larger than Initial Pump Scope (Chip). Final pump was group pump and initial pump was chip pump for an instruction fetch.",
+},
+{
+	.name = "PM_INST_IMC_MATCH_CMPL",
+	.code = 0x1003a,
+	.short_desc = "IMC Match Count ( Not architected in P8)",
+	.long_desc = "IMC Match Count.",
+},
+{
+	.name = "PM_INST_IMC_MATCH_DISP",
+	.code = 0x30016,
+	.short_desc = "Matched Instructions Dispatched",
+	.long_desc = "IMC Matches dispatched.",
+},
+{
+	.name = "PM_INST_PUMP_CPRED",
+	.code = 0x14054,
+	.short_desc = "Pump prediction correct. Counts across all types of pumps for an instruction fetch",
+	.long_desc = "Pump prediction correct. Counts across all types of pumps for an instruction fetch.",
+},
+{
+	.name = "PM_INST_PUMP_MPRED",
+	.code = 0x44052,
+	.short_desc = "Pump misprediction. Counts across all types of pumps for an instruction fetch",
+	.long_desc = "Pump misprediction. Counts across all types of pumps for an instruction fetch.",
+},
+{
+	.name = "PM_INST_SYS_PUMP_CPRED",
+	.code = 0x34050,
+	.short_desc = "Initial and Final Pump Scope was system pump (prediction=correct) for an instruction fetch",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was system pump for an instruction fetch.",
+},
+{
+	.name = "PM_INST_SYS_PUMP_MPRED",
+	.code = 0x34052,
+	.short_desc = "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for an instruction fetch",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or",
+},
+{
+	.name = "PM_INST_SYS_PUMP_MPRED_RTY",
+	.code = 0x44050,
+	.short_desc = "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for an instruction fetch",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for an instruction fetch.",
+},
+{
+	.name = "PM_IOPS_CMPL",
+	.code = 0x10014,
+	.short_desc = "Internal Operations completed",
+	.long_desc = "IOPS Completed.",
+},
+{
+	.name = "PM_IOPS_DISP",
+	.code = 0x30014,
+	.short_desc = "Internal Operations dispatched",
+	.long_desc = "IOPS dispatched.",
+},
+{
+	.name = "PM_IPTEG_FROM_DL2L3_MOD",
+	.code = 0x45048,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_DL2L3_SHR",
+	.code = 0x35048,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_DL4",
+	.code = 0x3504c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_DMEM",
+	.code = 0x4504c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L2",
+	.code = 0x15042,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L21_MOD",
+	.code = 0x45046,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L21_SHR",
+	.code = 0x35046,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L2MISS",
+	.code = 0x1504e,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a location other than the local core's L2 due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a location other than the local core's L2 due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+	.code = 0x35040,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+	.code = 0x45040,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L2_MEPF",
+	.code = 0x25040,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L2_NO_CONFLICT",
+	.code = 0x15040,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L3",
+	.code = 0x45042,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L31_ECO_MOD",
+	.code = 0x45044,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L31_ECO_SHR",
+	.code = 0x35044,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L31_MOD",
+	.code = 0x25044,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L31_SHR",
+	.code = 0x15046,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L3MISS",
+	.code = 0x4504e,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a location other than the local core's L3 due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a location other than the local core's L3 due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L3_DISP_CONFLICT",
+	.code = 0x35042,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L3_MEPF",
+	.code = 0x25042,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_L3_NO_CONFLICT",
+	.code = 0x15044,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_LL4",
+	.code = 0x1504c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_LMEM",
+	.code = 0x25048,
+	.short_desc = "A Page Table Entry was loaded into the TLB from the local chip's Memory due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from the local chip's Memory due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_MEMORY",
+	.code = 0x2504c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_OFF_CHIP_CACHE",
+	.code = 0x4504a,
+	.short_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_ON_CHIP_CACHE",
+	.code = 0x15048,
+	.short_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_RL2L3_MOD",
+	.code = 0x25046,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_RL2L3_SHR",
+	.code = 0x1504a,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_RL4",
+	.code = 0x2504a,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to an instruction side request.",
+},
+{
+	.name = "PM_IPTEG_FROM_RMEM",
+	.code = 0x3504a,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to an instruction side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to an instruction side request.",
+},
+{
+	.name = "PM_ISIDE_DISP",
+	.code = 0x617082,
+	.short_desc = "All i-side dispatch attempts",
+	.long_desc = "All i-side dispatch attempts",
+},
+{
+	.name = "PM_ISIDE_DISP_FAIL",
+	.code = 0x627084,
+	.short_desc = "All i-side dispatch attempts that failed due to an addr collision with another machine",
+	.long_desc = "All i-side dispatch attempts that failed due to an addr collision with another machine",
+},
+{
+	.name = "PM_ISIDE_DISP_FAIL_OTHER",
+	.code = 0x627086,
+	.short_desc = "All i-side dispatch attempts that failed due to a reason other than an addr collision",
+	.long_desc = "All i-side dispatch attempts that failed due to a reason other than an addr collision",
+},
+{
+	.name = "PM_ISIDE_L2MEMACC",
+	.code = 0x4608e,
+	.short_desc = "valid when first beat of data comes in for an i-side fetch where data came from mem(or L4)",
+	.long_desc = "valid when first beat of data comes in for an i-side fetch where data came from mem(or L4)",
+},
+{
+	.name = "PM_ISIDE_MRU_TOUCH",
+	.code = 0x44608e,
+	.short_desc = "Iside L2 MRU touch",
+	.long_desc = "Iside L2 MRU touch",
+},
+{
+	.name = "PM_ISLB_MISS",
+	.code = 0xd096,
+	.short_desc = "I SLB Miss.",
+	.long_desc = "I SLB Miss.",
+},
+{
+	.name = "PM_ISU_REF_FX0",
+	.code = 0x30ac,
+	.short_desc = "FX0 ISU reject",
+	.long_desc = "FX0 ISU reject",
+},
+{
+	.name = "PM_ISU_REF_FX1",
+	.code = 0x30ae,
+	.short_desc = "FX1 ISU reject",
+	.long_desc = "FX1 ISU reject",
+},
+{
+	.name = "PM_ISU_REF_FXU",
+	.code = 0x38ac,
+	.short_desc = "FXU ISU reject from either pipe",
+	.long_desc = "ISU",
+},
+{
+	.name = "PM_ISU_REF_LS0",
+	.code = 0x30b0,
+	.short_desc = "LS0 ISU reject",
+	.long_desc = "LS0 ISU reject",
+},
+{
+	.name = "PM_ISU_REF_LS1",
+	.code = 0x30b2,
+	.short_desc = "LS1 ISU reject",
+	.long_desc = "LS1 ISU reject",
+},
+{
+	.name = "PM_ISU_REF_LS2",
+	.code = 0x30b4,
+	.short_desc = "LS2 ISU reject",
+	.long_desc = "LS2 ISU reject",
+},
+{
+	.name = "PM_ISU_REF_LS3",
+	.code = 0x30b6,
+	.short_desc = "LS3 ISU reject",
+	.long_desc = "LS3 ISU reject",
+},
+{
+	.name = "PM_ISU_REJECTS_ALL",
+	.code = 0x309c,
+	.short_desc = "All ISU rejects (could be more than 1 per cycle)",
+	.long_desc = "All ISU rejects (could be more than 1 per cycle)",
+},
+{
+	.name = "PM_ISU_REJECT_RES_NA",
+	.code = 0x30a2,
+	.short_desc = "ISU reject due to resource not available",
+	.long_desc = "ISU reject due to resource not available",
+},
+{
+	.name = "PM_ISU_REJECT_SAR_BYPASS",
+	.code = 0x309e,
+	.short_desc = "Reject because of SAR bypass",
+	.long_desc = "Reject because of SAR bypass",
+},
+{
+	.name = "PM_ISU_REJECT_SRC_NA",
+	.code = 0x30a0,
+	.short_desc = "ISU reject due to source not available",
+	.long_desc = "ISU reject due to source not available",
+},
+{
+	.name = "PM_ISU_REJ_VS0",
+	.code = 0x30a8,
+	.short_desc = "VS0 ISU reject",
+	.long_desc = "VS0 ISU reject",
+},
+{
+	.name = "PM_ISU_REJ_VS1",
+	.code = 0x30aa,
+	.short_desc = "VS1 ISU reject",
+	.long_desc = "VS1 ISU reject",
+},
+{
+	.name = "PM_ISU_REJ_VSU",
+	.code = 0x38a8,
+	.short_desc = "VSU ISU reject from either pipe",
+	.long_desc = "ISU",
+},
+{
+	.name = "PM_ISYNC",
+	.code = 0x30b8,
+	.short_desc = "Isync count per thread",
+	.long_desc = "Isync count per thread",
+},
+{
+	.name = "PM_ITLB_MISS",
+	.code = 0x400fc,
+	.short_desc = "ITLB Reloaded (always zero on POWER6)",
+	.long_desc = "ITLB Reloaded.",
+},
+{
+	.name = "PM_L1MISS_LAT_EXC_1024",
+	.code = 0x200301ea,
+	.short_desc = "L1 misses that took longer than 1024 cycles to resolve (miss to reload)",
+	.long_desc = "Reload latency exceeded 1024 cyc",
+},
+{
+	.name = "PM_L1MISS_LAT_EXC_2048",
+	.code = 0x200401ec,
+	.short_desc = "L1 misses that took longer than 2048 cycles to resolve (miss to reload)",
+	.long_desc = "Reload latency exceeded 2048 cyc",
+},
+{
+	.name = "PM_L1MISS_LAT_EXC_256",
+	.code = 0x200101e8,
+	.short_desc = "L1 misses that took longer than 256 cycles to resolve (miss to reload)",
+	.long_desc = "Reload latency exceeded 256 cyc",
+},
+{
+	.name = "PM_L1MISS_LAT_EXC_32",
+	.code = 0x200201e6,
+	.short_desc = "L1 misses that took longer than 32 cycles to resolve (miss to reload)",
+	.long_desc = "Reload latency exceeded 32 cyc",
+},
+{
+	.name = "PM_L1PF_L2MEMACC",
+	.code = 0x26086,
+	.short_desc = "valid when first beat of data comes in for an L1pref where data came from mem(or L4)",
+	.long_desc = "valid when first beat of data comes in for an L1pref where data came from mem(or L4)",
+},
+{
+	.name = "PM_L1_DCACHE_RELOADED_ALL",
+	.code = 0x1002c,
+	.short_desc = "L1 data cache reloaded for demand or prefetch",
+	.long_desc = "L1 data cache reloaded for demand or prefetch .",
+},
+{
+	.name = "PM_L1_DCACHE_RELOAD_VALID",
+	.code = 0x300f6,
+	.short_desc = "DL1 reloaded due to Demand Load",
+	.long_desc = "DL1 reloaded due to Demand Load .",
+},
+{
+	.name = "PM_L1_DEMAND_WRITE",
+	.code = 0x408c,
+	.short_desc = "Instruction Demand sectors written into IL1",
+	.long_desc = "Instruction Demand sectors written into IL1",
+},
+{
+	.name = "PM_L1_ICACHE_MISS",
+	.code = 0x200fd,
+	.short_desc = "Demand iCache Miss",
+	.long_desc = "Demand iCache Miss.",
+},
+{
+	.name = "PM_L1_ICACHE_RELOADED_ALL",
+	.code = 0x40012,
+	.short_desc = "Counts all Icache reloads; includes demand, prefetch, prefetch turned into demand and demand turned into prefetch",
+	.long_desc = "Counts all Icache reloads; includes demand, prefetch, prefetch turned into demand and demand turned into prefetch.",
+},
+{
+	.name = "PM_L1_ICACHE_RELOADED_PREF",
+	.code = 0x30068,
+	.short_desc = "Counts all Icache prefetch reloads ( includes demand turned into prefetch)",
+	.long_desc = "Counts all Icache prefetch reloads ( includes demand turned into prefetch).",
+},
+{
+	.name = "PM_L2_CASTOUT_MOD",
+	.code = 0x417080,
+	.short_desc = "L2 Castouts - Modified (M, Mu, Me)",
+	.long_desc = "L2 Castouts - Modified (M, Mu, Me)",
+},
+{
+	.name = "PM_L2_CASTOUT_SHR",
+	.code = 0x417082,
+	.short_desc = "L2 Castouts - Shared (T, Te, Si, S)",
+	.long_desc = "L2 Castouts - Shared (T, Te, Si, S)",
+},
+{
+	.name = "PM_L2_CHIP_PUMP",
+	.code = 0x27084,
+	.short_desc = "RC requests that were local on chip pump attempts",
+	.long_desc = "RC requests that were local on chip pump attempts",
+},
+{
+	.name = "PM_L2_DC_INV",
+	.code = 0x427086,
+	.short_desc = "Dcache invalidates from L2",
+	.long_desc = "Dcache invalidates from L2",
+},
+{
+	.name = "PM_L2_DISP_ALL_L2MISS",
+	.code = 0x44608c,
+	.short_desc = "All successful Ld/St dispatches for this thread that were an L2miss.",
+	.long_desc = "All successful Ld/St dispatches for this thread that were an L2miss.",
+},
+{
+	.name = "PM_L2_GROUP_PUMP",
+	.code = 0x27086,
+	.short_desc = "RC requests that were on Node Pump attempts",
+	.long_desc = "RC requests that were on Node Pump attempts",
+},
+{
+	.name = "PM_L2_GRP_GUESS_CORRECT",
+	.code = 0x626084,
+	.short_desc = "L2 guess grp and guess was correct (data intra-6chip AND ^on-chip)",
+	.long_desc = "L2 guess grp and guess was correct (data intra-6chip AND ^on-chip)",
+},
+{
+	.name = "PM_L2_GRP_GUESS_WRONG",
+	.code = 0x626086,
+	.short_desc = "L2 guess grp and guess was not correct (ie data on-chip OR beyond-6chip)",
+	.long_desc = "L2 guess grp and guess was not correct (ie data on-chip OR beyond-6chip)",
+},
+{
+	.name = "PM_L2_IC_INV",
+	.code = 0x427084,
+	.short_desc = "Icache Invalidates from L2",
+	.long_desc = "Icache Invalidates from L2",
+},
+{
+	.name = "PM_L2_INST",
+	.code = 0x436088,
+	.short_desc = "All successful I-side dispatches for this thread (excludes i_l2mru_tch reqs)",
+	.long_desc = "All successful I-side dispatches for this thread (excludes i_l2mru_tch reqs)",
+},
+{
+	.name = "PM_L2_INST_MISS",
+	.code = 0x43608a,
+	.short_desc = "All successful i-side dispatches that were an L2miss for this thread (excludes i_l2mru_tch reqs)",
+	.long_desc = "All successful i-side dispatches that were an L2miss for this thread (excludes i_l2mru_tch reqs)",
+},
+{
+	.name = "PM_L2_LD",
+	.code = 0x416080,
+	.short_desc = "All successful D-side Load dispatches for this thread",
+	.long_desc = "All successful D-side Load dispatches for this thread",
+},
+{
+	.name = "PM_L2_LD_DISP",
+	.code = 0x437088,
+	.short_desc = "All successful load dispatches",
+	.long_desc = "All successful load dispatches",
+},
+{
+	.name = "PM_L2_LD_HIT",
+	.code = 0x43708a,
+	.short_desc = "All successful load dispatches that were L2 hits",
+	.long_desc = "All successful load dispatches that were L2 hits",
+},
+{
+	.name = "PM_L2_LD_MISS",
+	.code = 0x426084,
+	.short_desc = "All successful D-Side Load dispatches that were an L2miss for this thread",
+	.long_desc = "All successful D-Side Load dispatches that were an L2miss for this thread",
+},
+{
+	.name = "PM_L2_LOC_GUESS_CORRECT",
+	.code = 0x616080,
+	.short_desc = "L2 guess loc and guess was correct (ie data local)",
+	.long_desc = "L2 guess loc and guess was correct (ie data local)",
+},
+{
+	.name = "PM_L2_LOC_GUESS_WRONG",
+	.code = 0x616082,
+	.short_desc = "L2 guess loc and guess was not correct (ie data not on chip)",
+	.long_desc = "L2 guess loc and guess was not correct (ie data not on chip)",
+},
+{
+	.name = "PM_L2_RCLD_DISP",
+	.code = 0x516080,
+	.short_desc = "L2 RC load dispatch attempt",
+	.long_desc = "L2 RC load dispatch attempt",
+},
+{
+	.name = "PM_L2_RCLD_DISP_FAIL_ADDR",
+	.code = 0x516082,
+	.short_desc = "L2 RC load dispatch attempt failed due to address collision with RC/CO/SN/SQ",
+	.long_desc = "L2 RC load dispatch attempt failed due to address collision with RC/CO/SN/SQ",
+},
+{
+	.name = "PM_L2_RCLD_DISP_FAIL_OTHER",
+	.code = 0x526084,
+	.short_desc = "L2 RC load dispatch attempt failed due to other reasons",
+	.long_desc = "L2 RC load dispatch attempt failed due to other reasons",
+},
+{
+	.name = "PM_L2_RCST_DISP",
+	.code = 0x536088,
+	.short_desc = "L2 RC store dispatch attempt",
+	.long_desc = "L2 RC store dispatch attempt",
+},
+{
+	.name = "PM_L2_RCST_DISP_FAIL_ADDR",
+	.code = 0x53608a,
+	.short_desc = "L2 RC store dispatch attempt failed due to address collision with RC/CO/SN/SQ",
+	.long_desc = "L2 RC store dispatch attempt failed due to address collision with RC/CO/SN/SQ",
+},
+{
+	.name = "PM_L2_RCST_DISP_FAIL_OTHER",
+	.code = 0x54608c,
+	.short_desc = "L2 RC store dispatch attempt failed due to other reasons",
+	.long_desc = "L2 RC store dispatch attempt failed due to other reasons",
+},
+{
+	.name = "PM_L2_RC_ST_DONE",
+	.code = 0x537088,
+	.short_desc = "RC did st to line that was Tx or Sx",
+	.long_desc = "RC did st to line that was Tx or Sx",
+},
+{
+	.name = "PM_L2_RTY_LD",
+	.code = 0x63708a,
+	.short_desc = "RC retries on PB for any load from core",
+	.long_desc = "RC retries on PB for any load from core",
+},
+{
+	.name = "PM_L2_RTY_ST",
+	.code = 0x3708a,
+	.short_desc = "RC retries on PB for any store from core",
+	.long_desc = "RC retries on PB for any store from core",
+},
+{
+	.name = "PM_L2_SN_M_RD_DONE",
+	.code = 0x54708c,
+	.short_desc = "SNP dispatched for a read and was M",
+	.long_desc = "SNP dispatched for a read and was M",
+},
+{
+	.name = "PM_L2_SN_M_WR_DONE",
+	.code = 0x54708e,
+	.short_desc = "SNP dispatched for a write and was M",
+	.long_desc = "SNP dispatched for a write and was M",
+},
+{
+	.name = "PM_L2_SN_SX_I_DONE",
+	.code = 0x53708a,
+	.short_desc = "SNP dispatched and went from Sx or Tx to Ix",
+	.long_desc = "SNP dispatched and went from Sx or Tx to Ix",
+},
+{
+	.name = "PM_L2_ST",
+	.code = 0x17080,
+	.short_desc = "All successful D-side store dispatches for this thread",
+	.long_desc = "All successful D-side store dispatches for this thread",
+},
+{
+	.name = "PM_L2_ST_DISP",
+	.code = 0x44708c,
+	.short_desc = "All successful store dispatches",
+	.long_desc = "All successful store dispatches",
+},
+{
+	.name = "PM_L2_ST_HIT",
+	.code = 0x44708e,
+	.short_desc = "All successful store dispatches that were L2Hits",
+	.long_desc = "All successful store dispatches that were L2Hits",
+},
+{
+	.name = "PM_L2_ST_MISS",
+	.code = 0x17082,
+	.short_desc = "All successful D-side store dispatches for this thread that were L2 Miss",
+	.long_desc = "All successful D-side store dispatches for this thread that were L2 Miss",
+},
+{
+	.name = "PM_L2_SYS_GUESS_CORRECT",
+	.code = 0x636088,
+	.short_desc = "L2 guess sys and guess was correct (ie data beyond-6chip)",
+	.long_desc = "L2 guess sys and guess was correct (ie data beyond-6chip)",
+},
+{
+	.name = "PM_L2_SYS_GUESS_WRONG",
+	.code = 0x63608a,
+	.short_desc = "L2 guess sys and guess was not correct (ie data ^beyond-6chip)",
+	.long_desc = "L2 guess sys and guess was not correct (ie data ^beyond-6chip)",
+},
+{
+	.name = "PM_L2_SYS_PUMP",
+	.code = 0x617080,
+	.short_desc = "RC requests that were system pump attempts",
+	.long_desc = "RC requests that were system pump attempts",
+},
+{
+	.name = "PM_L2_TM_REQ_ABORT",
+	.code = 0x1e05e,
+	.short_desc = "TM abort",
+	.long_desc = "TM abort.",
+},
+{
+	.name = "PM_L2_TM_ST_ABORT_SISTER",
+	.code = 0x3e05c,
+	.short_desc = "TM marked store abort",
+	.long_desc = "TM marked store abort.",
+},
+{
+	.name = "PM_L3_CINJ",
+	.code = 0x23808a,
+	.short_desc = "l3 ci of cache inject",
+	.long_desc = "l3 ci of cache inject",
+},
+{
+	.name = "PM_L3_CI_HIT",
+	.code = 0x128084,
+	.short_desc = "L3 Castins Hit (total count)",
+	.long_desc = "L3 Castins Hit (total count)",
+},
+{
+	.name = "PM_L3_CI_MISS",
+	.code = 0x128086,
+	.short_desc = "L3 castins miss (total count)",
+	.long_desc = "L3 castins miss (total count)",
+},
+{
+	.name = "PM_L3_CI_USAGE",
+	.code = 0x819082,
+	.short_desc = "rotating sample of 16 CI or CO actives",
+	.long_desc = "rotating sample of 16 CI or CO actives",
+},
+{
+	.name = "PM_L3_CO",
+	.code = 0x438088,
+	.short_desc = "l3 castout occurring (does not include casthrough or log writes (cinj/dmaw))",
+	.long_desc = "l3 castout occurring (does not include casthrough or log writes (cinj/dmaw))",
+},
+{
+	.name = "PM_L3_CO0_ALLOC",
+	.code = 0x83908b,
+	.short_desc = "lifetime, sample of CO machine 0 valid",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_L3_CO0_BUSY",
+	.code = 0x83908a,
+	.short_desc = "lifetime, sample of CO machine 0 valid",
+	.long_desc = "lifetime, sample of CO machine 0 valid",
+},
+{
+	.name = "PM_L3_CO_L31",
+	.code = 0x28086,
+	.short_desc = "L3 CO to L3.1 OR of port 0 and 1 ( lossy)",
+	.long_desc = "L3 CO to L3.1 OR of port 0 and 1 ( lossy)",
+},
+{
+	.name = "PM_L3_CO_LCO",
+	.code = 0x238088,
+	.short_desc = "Total L3 castouts occurred on LCO",
+	.long_desc = "Total L3 castouts occurred on LCO",
+},
+{
+	.name = "PM_L3_CO_MEM",
+	.code = 0x28084,
+	.short_desc = "L3 CO to memory OR of port 0 and 1 ( lossy)",
+	.long_desc = "L3 CO to memory OR of port 0 and 1 ( lossy)",
+},
+{
+	.name = "PM_L3_CO_MEPF",
+	.code = 0x18082,
+	.short_desc = "L3 CO of line in Mep state (includes casthrough)",
+	.long_desc = "L3 CO of line in Mep state (includes casthrough)",
+},
+{
+	.name = "PM_L3_GRP_GUESS_CORRECT",
+	.code = 0xb19082,
+	.short_desc = "Initial scope=group and data from same group (near) (pred successful)",
+	.long_desc = "Initial scope=group and data from same group (near) (pred successful)",
+},
+{
+	.name = "PM_L3_GRP_GUESS_WRONG_HIGH",
+	.code = 0xb3908a,
+	.short_desc = "Initial scope=group but data from local node. Prediction too high",
+	.long_desc = "Initial scope=group but data from local node. Prediction too high",
+},
+{
+	.name = "PM_L3_GRP_GUESS_WRONG_LOW",
+	.code = 0xb39088,
+	.short_desc = "Initial scope=group but data from outside group (far or rem). Prediction too Low",
+	.long_desc = "Initial scope=group but data from outside group (far or rem). Prediction too Low",
+},
+{
+	.name = "PM_L3_HIT",
+	.code = 0x218080,
+	.short_desc = "L3 Hits",
+	.long_desc = "L3 Hits",
+},
+{
+	.name = "PM_L3_L2_CO_HIT",
+	.code = 0x138088,
+	.short_desc = "L2 castout hits",
+	.long_desc = "L2 castout hits",
+},
+{
+	.name = "PM_L3_L2_CO_MISS",
+	.code = 0x13808a,
+	.short_desc = "L2 castout miss",
+	.long_desc = "L2 castout miss",
+},
+{
+	.name = "PM_L3_LAT_CI_HIT",
+	.code = 0x14808c,
+	.short_desc = "L3 Lateral Castins Hit",
+	.long_desc = "L3 Lateral Castins Hit",
+},
+{
+	.name = "PM_L3_LAT_CI_MISS",
+	.code = 0x14808e,
+	.short_desc = "L3 Lateral Castins Miss",
+	.long_desc = "L3 Lateral Castins Miss",
+},
+{
+	.name = "PM_L3_LD_HIT",
+	.code = 0x228084,
+	.short_desc = "L3 demand LD Hits",
+	.long_desc = "L3 demand LD Hits",
+},
+{
+	.name = "PM_L3_LD_MISS",
+	.code = 0x228086,
+	.short_desc = "L3 demand LD Miss",
+	.long_desc = "L3 demand LD Miss",
+},
+{
+	.name = "PM_L3_LD_PREF",
+	.code = 0x1e052,
+	.short_desc = "L3 Load Prefetches",
+	.long_desc = "L3 Load Prefetches.",
+},
+{
+	.name = "PM_L3_LOC_GUESS_CORRECT",
+	.code = 0xb19080,
+	.short_desc = "initial scope=node/chip and data from local node (local) (pred successful)",
+	.long_desc = "initial scope=node/chip and data from local node (local) (pred successful)",
+},
+{
+	.name = "PM_L3_LOC_GUESS_WRONG",
+	.code = 0xb29086,
+	.short_desc = "Initial scope=node but data from outside local node (near or far or rem). Prediction too Low",
+	.long_desc = "Initial scope=node but data from outside local node (near or far or rem). Prediction too Low",
+},
+{
+	.name = "PM_L3_MISS",
+	.code = 0x218082,
+	.short_desc = "L3 Misses",
+	.long_desc = "L3 Misses",
+},
+{
+	.name = "PM_L3_P0_CO_L31",
+	.code = 0x54808c,
+	.short_desc = "l3 CO to L3.1 (lco) port 0",
+	.long_desc = "l3 CO to L3.1 (lco) port 0",
+},
+{
+	.name = "PM_L3_P0_CO_MEM",
+	.code = 0x538088,
+	.short_desc = "l3 CO to memory port 0",
+	.long_desc = "l3 CO to memory port 0",
+},
+{
+	.name = "PM_L3_P0_CO_RTY",
+	.code = 0x929084,
+	.short_desc = "L3 CO received retry port 0",
+	.long_desc = "L3 CO received retry port 0",
+},
+{
+	.name = "PM_L3_P0_GRP_PUMP",
+	.code = 0xa29084,
+	.short_desc = "L3 pf sent with grp scope port 0",
+	.long_desc = "L3 pf sent with grp scope port 0",
+},
+{
+	.name = "PM_L3_P0_LCO_DATA",
+	.code = 0x528084,
+	.short_desc = "lco sent with data port 0",
+	.long_desc = "lco sent with data port 0",
+},
+{
+	.name = "PM_L3_P0_LCO_NO_DATA",
+	.code = 0x518080,
+	.short_desc = "dataless l3 lco sent port 0",
+	.long_desc = "dataless l3 lco sent port 0",
+},
+{
+	.name = "PM_L3_P0_LCO_RTY",
+	.code = 0xa4908c,
+	.short_desc = "L3 LCO received retry port 0",
+	.long_desc = "L3 LCO received retry port 0",
+},
+{
+	.name = "PM_L3_P0_NODE_PUMP",
+	.code = 0xa19080,
+	.short_desc = "L3 pf sent with nodal scope port 0",
+	.long_desc = "L3 pf sent with nodal scope port 0",
+},
+{
+	.name = "PM_L3_P0_PF_RTY",
+	.code = 0x919080,
+	.short_desc = "L3 PF received retry port 0",
+	.long_desc = "L3 PF received retry port 0",
+},
+{
+	.name = "PM_L3_P0_SN_HIT",
+	.code = 0x939088,
+	.short_desc = "L3 snoop hit port 0",
+	.long_desc = "L3 snoop hit port 0",
+},
+{
+	.name = "PM_L3_P0_SN_INV",
+	.code = 0x118080,
+	.short_desc = "Port0 snooper detects someone doing a store to a line that is Sx",
+	.long_desc = "Port0 snooper detects someone doing a store to a line that is Sx",
+},
+{
+	.name = "PM_L3_P0_SN_MISS",
+	.code = 0x94908c,
+	.short_desc = "L3 snoop miss port 0",
+	.long_desc = "L3 snoop miss port 0",
+},
+{
+	.name = "PM_L3_P0_SYS_PUMP",
+	.code = 0xa39088,
+	.short_desc = "L3 pf sent with sys scope port 0",
+	.long_desc = "L3 pf sent with sys scope port 0",
+},
+{
+	.name = "PM_L3_P1_CO_L31",
+	.code = 0x54808e,
+	.short_desc = "l3 CO to L3.1 (lco) port 1",
+	.long_desc = "l3 CO to L3.1 (lco) port 1",
+},
+{
+	.name = "PM_L3_P1_CO_MEM",
+	.code = 0x53808a,
+	.short_desc = "l3 CO to memory port 1",
+	.long_desc = "l3 CO to memory port 1",
+},
+{
+	.name = "PM_L3_P1_CO_RTY",
+	.code = 0x929086,
+	.short_desc = "L3 CO received retry port 1",
+	.long_desc = "L3 CO received retry port 1",
+},
+{
+	.name = "PM_L3_P1_GRP_PUMP",
+	.code = 0xa29086,
+	.short_desc = "L3 pf sent with grp scope port 1",
+	.long_desc = "L3 pf sent with grp scope port 1",
+},
+{
+	.name = "PM_L3_P1_LCO_DATA",
+	.code = 0x528086,
+	.short_desc = "lco sent with data port 1",
+	.long_desc = "lco sent with data port 1",
+},
+{
+	.name = "PM_L3_P1_LCO_NO_DATA",
+	.code = 0x518082,
+	.short_desc = "dataless l3 lco sent port 1",
+	.long_desc = "dataless l3 lco sent port 1",
+},
+{
+	.name = "PM_L3_P1_LCO_RTY",
+	.code = 0xa4908e,
+	.short_desc = "L3 LCO received retry port 1",
+	.long_desc = "L3 LCO received retry port 1",
+},
+{
+	.name = "PM_L3_P1_NODE_PUMP",
+	.code = 0xa19082,
+	.short_desc = "L3 pf sent with nodal scope port 1",
+	.long_desc = "L3 pf sent with nodal scope port 1",
+},
+{
+	.name = "PM_L3_P1_PF_RTY",
+	.code = 0x919082,
+	.short_desc = "L3 PF received retry port 1",
+	.long_desc = "L3 PF received retry port 1",
+},
+{
+	.name = "PM_L3_P1_SN_HIT",
+	.code = 0x93908a,
+	.short_desc = "L3 snoop hit port 1",
+	.long_desc = "L3 snoop hit port 1",
+},
+{
+	.name = "PM_L3_P1_SN_INV",
+	.code = 0x118082,
+	.short_desc = "Port1 snooper detects someone doing a store to a line that is Sx",
+	.long_desc = "Port1 snooper detects someone doing a store to a line that is Sx",
+},
+{
+	.name = "PM_L3_P1_SN_MISS",
+	.code = 0x94908e,
+	.short_desc = "L3 snoop miss port 1",
+	.long_desc = "L3 snoop miss port 1",
+},
+{
+	.name = "PM_L3_P1_SYS_PUMP",
+	.code = 0xa3908a,
+	.short_desc = "L3 pf sent with sys scope port 1",
+	.long_desc = "L3 pf sent with sys scope port 1",
+},
+{
+	.name = "PM_L3_PF0_ALLOC",
+	.code = 0x84908d,
+	.short_desc = "lifetime, sample of PF machine 0 valid",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_L3_PF0_BUSY",
+	.code = 0x84908c,
+	.short_desc = "lifetime, sample of PF machine 0 valid",
+	.long_desc = "lifetime, sample of PF machine 0 valid",
+},
+{
+	.name = "PM_L3_PF_HIT_L3",
+	.code = 0x428084,
+	.short_desc = "l3 pf hit in l3",
+	.long_desc = "l3 pf hit in l3",
+},
+{
+	.name = "PM_L3_PF_MISS_L3",
+	.code = 0x18080,
+	.short_desc = "L3 Prefetch missed in L3",
+	.long_desc = "L3 Prefetch missed in L3",
+},
+{
+	.name = "PM_L3_PF_OFF_CHIP_CACHE",
+	.code = 0x3808a,
+	.short_desc = "L3 Prefetch from Off chip cache",
+	.long_desc = "L3 Prefetch from Off chip cache",
+},
+{
+	.name = "PM_L3_PF_OFF_CHIP_MEM",
+	.code = 0x4808e,
+	.short_desc = "L3 Prefetch from Off chip memory",
+	.long_desc = "L3 Prefetch from Off chip memory",
+},
+{
+	.name = "PM_L3_PF_ON_CHIP_CACHE",
+	.code = 0x38088,
+	.short_desc = "L3 Prefetch from On chip cache",
+	.long_desc = "L3 Prefetch from On chip cache",
+},
+{
+	.name = "PM_L3_PF_ON_CHIP_MEM",
+	.code = 0x4808c,
+	.short_desc = "L3 Prefetch from On chip memory",
+	.long_desc = "L3 Prefetch from On chip memory",
+},
+{
+	.name = "PM_L3_PF_USAGE",
+	.code = 0x829084,
+	.short_desc = "rotating sample of 32 PF actives",
+	.long_desc = "rotating sample of 32 PF actives",
+},
+{
+	.name = "PM_L3_PREF_ALL",
+	.code = 0x4e052,
+	.short_desc = "Total HW L3 prefetches(Load+store)",
+	.long_desc = "Total HW L3 prefetches(Load+store).",
+},
+{
+	.name = "PM_L3_RD0_ALLOC",
+	.code = 0x84908f,
+	.short_desc = "lifetime, sample of RD machine 0 valid",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_L3_RD0_BUSY",
+	.code = 0x84908e,
+	.short_desc = "lifetime, sample of RD machine 0 valid",
+	.long_desc = "lifetime, sample of RD machine 0 valid",
+},
+{
+	.name = "PM_L3_RD_USAGE",
+	.code = 0x829086,
+	.short_desc = "rotating sample of 16 RD actives",
+	.long_desc = "rotating sample of 16 RD actives",
+},
+{
+	.name = "PM_L3_SN0_ALLOC",
+	.code = 0x839089,
+	.short_desc = "lifetime, sample of snooper machine 0 valid",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_L3_SN0_BUSY",
+	.code = 0x839088,
+	.short_desc = "lifetime, sample of snooper machine 0 valid",
+	.long_desc = "lifetime, sample of snooper machine 0 valid",
+},
+{
+	.name = "PM_L3_SN_USAGE",
+	.code = 0x819080,
+	.short_desc = "rotating sample of 8 snoop valids",
+	.long_desc = "rotating sample of 8 snoop valids",
+},
+{
+	.name = "PM_L3_ST_PREF",
+	.code = 0x2e052,
+	.short_desc = "L3 store Prefetches",
+	.long_desc = "L3 store Prefetches.",
+},
+{
+	.name = "PM_L3_SW_PREF",
+	.code = 0x3e052,
+	.short_desc = "Data stream touch to L3",
+	.long_desc = "Data stream touch to L3.",
+},
+{
+	.name = "PM_L3_SYS_GUESS_CORRECT",
+	.code = 0xb29084,
+	.short_desc = "Initial scope=system and data from outside group (far or rem)(pred successful)",
+	.long_desc = "Initial scope=system and data from outside group (far or rem)(pred successful)",
+},
+{
+	.name = "PM_L3_SYS_GUESS_WRONG",
+	.code = 0xb4908c,
+	.short_desc = "Initial scope=system but data from local or near. Prediction too high",
+	.long_desc = "Initial scope=system but data from local or near. Prediction too high",
+},
+{
+	.name = "PM_L3_TRANS_PF",
+	.code = 0x24808e,
+	.short_desc = "L3 Transient prefetch",
+	.long_desc = "L3 Transient prefetch",
+},
+{
+	.name = "PM_L3_WI0_ALLOC",
+	.code = 0x18081,
+	.short_desc = "lifetime, sample of Write Inject machine 0 valid",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_L3_WI0_BUSY",
+	.code = 0x418080,
+	.short_desc = "lifetime, sample of Write Inject machine 0 valid",
+	.long_desc = "lifetime, sample of Write Inject machine 0 valid",
+},
+{
+	.name = "PM_L3_WI_USAGE",
+	.code = 0x418082,
+	.short_desc = "rotating sample of 8 WI actives",
+	.long_desc = "rotating sample of 8 WI actives",
+},
+{
+	.name = "PM_LARX_FIN",
+	.code = 0x3c058,
+	.short_desc = "Larx finished",
+	.long_desc = "Larx finished .",
+},
+{
+	.name = "PM_LD_CMPL",
+	.code = 0x1002e,
+	.short_desc = "count of Loads completed",
+	.long_desc = "count of Loads completed.",
+},
+{
+	.name = "PM_LD_L3MISS_PEND_CYC",
+	.code = 0x10062,
+	.short_desc = "Cycles L3 miss was pending for this thread",
+	.long_desc = "Cycles L3 miss was pending for this thread.",
+},
+{
+	.name = "PM_LD_MISS_L1",
+	.code = 0x3e054,
+	.short_desc = "Load Missed L1",
+	.long_desc = "Load Missed L1.",
+},
+{
+	.name = "PM_LD_REF_L1",
+	.code = 0x100ee,
+	.short_desc = "All L1 D cache load references counted at finish, gated by reject",
+	.long_desc = "Load Ref count combined for all units.",
+},
+{
+	.name = "PM_LD_REF_L1_LSU0",
+	.code = 0xc080,
+	.short_desc = "LS0 L1 D cache load references counted at finish, gated by reject",
+	.long_desc = "LS0 L1 D cache load references counted at finish, gated by rejectLSU0 L1 D cache load references",
+},
+{
+	.name = "PM_LD_REF_L1_LSU1",
+	.code = 0xc082,
+	.short_desc = "LS1 L1 D cache load references counted at finish, gated by reject",
+	.long_desc = "LS1 L1 D cache load references counted at finish, gated by rejectLSU1 L1 D cache load references",
+},
+{
+	.name = "PM_LD_REF_L1_LSU2",
+	.code = 0xc094,
+	.short_desc = "LS2 L1 D cache load references counted at finish, gated by reject",
+	.long_desc = "LS2 L1 D cache load references counted at finish, gated by reject42",
+},
+{
+	.name = "PM_LD_REF_L1_LSU3",
+	.code = 0xc096,
+	.short_desc = "LS3 L1 D cache load references counted at finish, gated by reject",
+	.long_desc = "LS3 L1 D cache load references counted at finish, gated by reject42",
+},
+{
+	.name = "PM_LINK_STACK_INVALID_PTR",
+	.code = 0x509a,
+	.short_desc = "A flush were LS ptr is invalid, results in a pop , A lot of interrupts between push and pops",
+	.long_desc = "A flush were LS ptr is invalid, results in a pop , A lot of interrupts between push and pops",
+},
+{
+	.name = "PM_LINK_STACK_WRONG_ADD_PRED",
+	.code = 0x5098,
+	.short_desc = "Link stack predicts wrong address, because of link stack design limitation.",
+	.long_desc = "Link stack predicts wrong address, because of link stack design limitation.",
+},
+{
+	.name = "PM_LS0_ERAT_MISS_PREF",
+	.code = 0xe080,
+	.short_desc = "LS0 Erat miss due to prefetch",
+	.long_desc = "LS0 Erat miss due to prefetch42",
+},
+{
+	.name = "PM_LS0_L1_PREF",
+	.code = 0xd0b8,
+	.short_desc = "LS0 L1 cache data prefetches",
+	.long_desc = "LS0 L1 cache data prefetches42",
+},
+{
+	.name = "PM_LS0_L1_SW_PREF",
+	.code = 0xc098,
+	.short_desc = "Software L1 Prefetches, including SW Transient Prefetches",
+	.long_desc = "Software L1 Prefetches, including SW Transient Prefetches42",
+},
+{
+	.name = "PM_LS1_ERAT_MISS_PREF",
+	.code = 0xe082,
+	.short_desc = "LS1 Erat miss due to prefetch",
+	.long_desc = "LS1 Erat miss due to prefetch42",
+},
+{
+	.name = "PM_LS1_L1_PREF",
+	.code = 0xd0ba,
+	.short_desc = "LS1 L1 cache data prefetches",
+	.long_desc = "LS1 L1 cache data prefetches42",
+},
+{
+	.name = "PM_LS1_L1_SW_PREF",
+	.code = 0xc09a,
+	.short_desc = "Software L1 Prefetches, including SW Transient Prefetches",
+	.long_desc = "Software L1 Prefetches, including SW Transient Prefetches42",
+},
+{
+	.name = "PM_LSU0_FLUSH_LRQ",
+	.code = 0xc0b0,
+	.short_desc = "LS0 Flush: LRQ",
+	.long_desc = "LS0 Flush: LRQLSU0 LRQ flushes",
+},
+{
+	.name = "PM_LSU0_FLUSH_SRQ",
+	.code = 0xc0b8,
+	.short_desc = "LS0 Flush: SRQ",
+	.long_desc = "LS0 Flush: SRQLSU0 SRQ lhs flushes",
+},
+{
+	.name = "PM_LSU0_FLUSH_ULD",
+	.code = 0xc0a4,
+	.short_desc = "LS0 Flush: Unaligned Load",
+	.long_desc = "LS0 Flush: Unaligned LoadLSU0 unaligned load flushes",
+},
+{
+	.name = "PM_LSU0_FLUSH_UST",
+	.code = 0xc0ac,
+	.short_desc = "LS0 Flush: Unaligned Store",
+	.long_desc = "LS0 Flush: Unaligned StoreLSU0 unaligned store flushes",
+},
+{
+	.name = "PM_LSU0_L1_CAM_CANCEL",
+	.code = 0xf088,
+	.short_desc = "ls0 l1 tm cam cancel",
+	.long_desc = "ls0 l1 tm cam cancel42",
+},
+{
+	.name = "PM_LSU0_LARX_FIN",
+	.code = 0x1e056,
+	.short_desc = "Larx finished in LSU pipe0",
+	.long_desc = ".",
+},
+{
+	.name = "PM_LSU0_LMQ_LHR_MERGE",
+	.code = 0xd08c,
+	.short_desc = "LS0 Load Merged with another cacheline request",
+	.long_desc = "LS0 Load Merged with another cacheline request42",
+},
+{
+	.name = "PM_LSU0_NCLD",
+	.code = 0xc08c,
+	.short_desc = "LS0 Non-cachable Loads counted at finish",
+	.long_desc = "LS0 Non-cachable Loads counted at finishLSU0 non-cacheable loads",
+},
+{
+	.name = "PM_LSU0_PRIMARY_ERAT_HIT",
+	.code = 0xe090,
+	.short_desc = "Primary ERAT hit",
+	.long_desc = "Primary ERAT hit42",
+},
+{
+	.name = "PM_LSU0_REJECT",
+	.code = 0x1e05a,
+	.short_desc = "LSU0 reject",
+	.long_desc = "LSU0 reject .",
+},
+{
+	.name = "PM_LSU0_SRQ_STFWD",
+	.code = 0xc09c,
+	.short_desc = "LS0 SRQ forwarded data to a load",
+	.long_desc = "LS0 SRQ forwarded data to a loadLSU0 SRQ store forwarded",
+},
+{
+	.name = "PM_LSU0_STORE_REJECT",
+	.code = 0xf084,
+	.short_desc = "ls0 store reject",
+	.long_desc = "ls0 store reject42",
+},
+{
+	.name = "PM_LSU0_TMA_REQ_L2",
+	.code = 0xe0a8,
+	.short_desc = "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding",
+	.long_desc = "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42",
+},
+{
+	.name = "PM_LSU0_TM_L1_HIT",
+	.code = 0xe098,
+	.short_desc = "Load tm hit in L1",
+	.long_desc = "Load tm hit in L142",
+},
+{
+	.name = "PM_LSU0_TM_L1_MISS",
+	.code = 0xe0a0,
+	.short_desc = "Load tm L1 miss",
+	.long_desc = "Load tm L1 miss42",
+},
+{
+	.name = "PM_LSU1_FLUSH_LRQ",
+	.code = 0xc0b2,
+	.short_desc = "LS1 Flush: LRQ",
+	.long_desc = "LS1 Flush: LRQLSU1 LRQ flushes",
+},
+{
+	.name = "PM_LSU1_FLUSH_SRQ",
+	.code = 0xc0ba,
+	.short_desc = "LS1 Flush: SRQ",
+	.long_desc = "LS1 Flush: SRQLSU1 SRQ lhs flushes",
+},
+{
+	.name = "PM_LSU1_FLUSH_ULD",
+	.code = 0xc0a6,
+	.short_desc = "LS 1 Flush: Unaligned Load",
+	.long_desc = "LS 1 Flush: Unaligned LoadLSU1 unaligned load flushes",
+},
+{
+	.name = "PM_LSU1_FLUSH_UST",
+	.code = 0xc0ae,
+	.short_desc = "LS1 Flush: Unaligned Store",
+	.long_desc = "LS1 Flush: Unaligned StoreLSU1 unaligned store flushes",
+},
+{
+	.name = "PM_LSU1_L1_CAM_CANCEL",
+	.code = 0xf08a,
+	.short_desc = "ls1 l1 tm cam cancel",
+	.long_desc = "ls1 l1 tm cam cancel42",
+},
+{
+	.name = "PM_LSU1_LARX_FIN",
+	.code = 0x2e056,
+	.short_desc = "Larx finished in LSU pipe1",
+	.long_desc = "Larx finished in LSU pipe1.",
+},
+{
+	.name = "PM_LSU1_LMQ_LHR_MERGE",
+	.code = 0xd08e,
+	.short_desc = "LS1 Load Merge with another cacheline request",
+	.long_desc = "LS1 Load Merge with another cacheline request42",
+},
+{
+	.name = "PM_LSU1_NCLD",
+	.code = 0xc08e,
+	.short_desc = "LS1 Non-cachable Loads counted at finish",
+	.long_desc = "LS1 Non-cachable Loads counted at finishLSU1 non-cacheable loads",
+},
+{
+	.name = "PM_LSU1_PRIMARY_ERAT_HIT",
+	.code = 0xe092,
+	.short_desc = "Primary ERAT hit",
+	.long_desc = "Primary ERAT hit42",
+},
+{
+	.name = "PM_LSU1_REJECT",
+	.code = 0x2e05a,
+	.short_desc = "LSU1 reject",
+	.long_desc = "LSU1 reject .",
+},
+{
+	.name = "PM_LSU1_SRQ_STFWD",
+	.code = 0xc09e,
+	.short_desc = "LS1 SRQ forwarded data to a load",
+	.long_desc = "LS1 SRQ forwarded data to a loadLSU1 SRQ store forwarded",
+},
+{
+	.name = "PM_LSU1_STORE_REJECT",
+	.code = 0xf086,
+	.short_desc = "ls1 store reject",
+	.long_desc = "ls1 store reject42",
+},
+{
+	.name = "PM_LSU1_TMA_REQ_L2",
+	.code = 0xe0aa,
+	.short_desc = "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding",
+	.long_desc = "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42",
+},
+{
+	.name = "PM_LSU1_TM_L1_HIT",
+	.code = 0xe09a,
+	.short_desc = "Load tm hit in L1",
+	.long_desc = "Load tm hit in L142",
+},
+{
+	.name = "PM_LSU1_TM_L1_MISS",
+	.code = 0xe0a2,
+	.short_desc = "Load tm L1 miss",
+	.long_desc = "Load tm L1 miss42",
+},
+{
+	.name = "PM_LSU2_FLUSH_LRQ",
+	.code = 0xc0b4,
+	.short_desc = "LS02Flush: LRQ",
+	.long_desc = "LS02Flush: LRQ42",
+},
+{
+	.name = "PM_LSU2_FLUSH_SRQ",
+	.code = 0xc0bc,
+	.short_desc = "LS2 Flush: SRQ",
+	.long_desc = "LS2 Flush: SRQ42",
+},
+{
+	.name = "PM_LSU2_FLUSH_ULD",
+	.code = 0xc0a8,
+	.short_desc = "LS3 Flush: Unaligned Load",
+	.long_desc = "LS3 Flush: Unaligned Load42",
+},
+{
+	.name = "PM_LSU2_L1_CAM_CANCEL",
+	.code = 0xf08c,
+	.short_desc = "ls2 l1 tm cam cancel",
+	.long_desc = "ls2 l1 tm cam cancel42",
+},
+{
+	.name = "PM_LSU2_LARX_FIN",
+	.code = 0x3e056,
+	.short_desc = "Larx finished in LSU pipe2",
+	.long_desc = "Larx finished in LSU pipe2.",
+},
+{
+	.name = "PM_LSU2_LDF",
+	.code = 0xc084,
+	.short_desc = "LS2 Scalar Loads",
+	.long_desc = "LS2 Scalar Loads42",
+},
+{
+	.name = "PM_LSU2_LDX",
+	.code = 0xc088,
+	.short_desc = "LS0 Vector Loads",
+	.long_desc = "LS0 Vector Loads42",
+},
+{
+	.name = "PM_LSU2_LMQ_LHR_MERGE",
+	.code = 0xd090,
+	.short_desc = "LS0 Load Merged with another cacheline request",
+	.long_desc = "LS0 Load Merged with another cacheline request42",
+},
+{
+	.name = "PM_LSU2_PRIMARY_ERAT_HIT",
+	.code = 0xe094,
+	.short_desc = "Primary ERAT hit",
+	.long_desc = "Primary ERAT hit42",
+},
+{
+	.name = "PM_LSU2_REJECT",
+	.code = 0x3e05a,
+	.short_desc = "LSU2 reject",
+	.long_desc = "LSU2 reject .",
+},
+{
+	.name = "PM_LSU2_SRQ_STFWD",
+	.code = 0xc0a0,
+	.short_desc = "LS2 SRQ forwarded data to a load",
+	.long_desc = "LS2 SRQ forwarded data to a load42",
+},
+{
+	.name = "PM_LSU2_TMA_REQ_L2",
+	.code = 0xe0ac,
+	.short_desc = "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding",
+	.long_desc = "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42",
+},
+{
+	.name = "PM_LSU2_TM_L1_HIT",
+	.code = 0xe09c,
+	.short_desc = "Load tm hit in L1",
+	.long_desc = "Load tm hit in L142",
+},
+{
+	.name = "PM_LSU2_TM_L1_MISS",
+	.code = 0xe0a4,
+	.short_desc = "Load tm L1 miss",
+	.long_desc = "Load tm L1 miss42",
+},
+{
+	.name = "PM_LSU3_FLUSH_LRQ",
+	.code = 0xc0b6,
+	.short_desc = "LS3 Flush: LRQ",
+	.long_desc = "LS3 Flush: LRQ42",
+},
+{
+	.name = "PM_LSU3_FLUSH_SRQ",
+	.code = 0xc0be,
+	.short_desc = "LS13 Flush: SRQ",
+	.long_desc = "LS13 Flush: SRQ42",
+},
+{
+	.name = "PM_LSU3_FLUSH_ULD",
+	.code = 0xc0aa,
+	.short_desc = "LS 14Flush: Unaligned Load",
+	.long_desc = "LS 14Flush: Unaligned Load42",
+},
+{
+	.name = "PM_LSU3_L1_CAM_CANCEL",
+	.code = 0xf08e,
+	.short_desc = "ls3 l1 tm cam cancel",
+	.long_desc = "ls3 l1 tm cam cancel42",
+},
+{
+	.name = "PM_LSU3_LARX_FIN",
+	.code = 0x4e056,
+	.short_desc = "Larx finished in LSU pipe3",
+	.long_desc = "Larx finished in LSU pipe3.",
+},
+{
+	.name = "PM_LSU3_LDF",
+	.code = 0xc086,
+	.short_desc = "LS3 Scalar Loads",
+	.long_desc = "LS3 Scalar Loads 42",
+},
+{
+	.name = "PM_LSU3_LDX",
+	.code = 0xc08a,
+	.short_desc = "LS1 Vector Loads",
+	.long_desc = "LS1 Vector Loads42",
+},
+{
+	.name = "PM_LSU3_LMQ_LHR_MERGE",
+	.code = 0xd092,
+	.short_desc = "LS1 Load Merge with another cacheline request",
+	.long_desc = "LS1 Load Merge with another cacheline request42",
+},
+{
+	.name = "PM_LSU3_PRIMARY_ERAT_HIT",
+	.code = 0xe096,
+	.short_desc = "Primary ERAT hit",
+	.long_desc = "Primary ERAT hit42",
+},
+{
+	.name = "PM_LSU3_REJECT",
+	.code = 0x4e05a,
+	.short_desc = "LSU3 reject",
+	.long_desc = "LSU3 reject .",
+},
+{
+	.name = "PM_LSU3_SRQ_STFWD",
+	.code = 0xc0a2,
+	.short_desc = "LS3 SRQ forwarded data to a load",
+	.long_desc = "LS3 SRQ forwarded data to a load42",
+},
+{
+	.name = "PM_LSU3_TMA_REQ_L2",
+	.code = 0xe0ae,
+	.short_desc = "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding",
+	.long_desc = "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42",
+},
+{
+	.name = "PM_LSU3_TM_L1_HIT",
+	.code = 0xe09e,
+	.short_desc = "Load tm hit in L1",
+	.long_desc = "Load tm hit in L142",
+},
+{
+	.name = "PM_LSU3_TM_L1_MISS",
+	.code = 0xe0a6,
+	.short_desc = "Load tm L1 miss",
+	.long_desc = "Load tm L1 miss42",
+},
+{
+	.name = "PM_LSU_DERAT_MISS",
+	.code = 0x200f6,
+	.short_desc = "DERAT Reloaded due to a DERAT miss",
+	.long_desc = "DERAT Reloaded (Miss).",
+},
+{
+	.name = "PM_LSU_ERAT_MISS_PREF",
+	.code = 0xe880,
+	.short_desc = "Erat miss due to prefetch, on either pipe",
+	.long_desc = "LSU",
+},
+{
+	.name = "PM_LSU_FIN",
+	.code = 0x30066,
+	.short_desc = "LSU Finished an instruction (up to 2 per cycle)",
+	.long_desc = "LSU Finished an instruction (up to 2 per cycle).",
+},
+{
+	.name = "PM_LSU_FLUSH_UST",
+	.code = 0xc8ac,
+	.short_desc = "Unaligned Store Flush on either pipe",
+	.long_desc = "LSU",
+},
+{
+	.name = "PM_LSU_FOUR_TABLEWALK_CYC",
+	.code = 0xd0a4,
+	.short_desc = "Cycles when four tablewalks pending on this thread",
+	.long_desc = "Cycles when four tablewalks pending on this thread42",
+},
+{
+	.name = "PM_LSU_FX_FIN",
+	.code = 0x10066,
+	.short_desc = "LSU Finished a FX operation (up to 2 per cycle",
+	.long_desc = "LSU Finished a FX operation (up to 2 per cycle.",
+},
+{
+	.name = "PM_LSU_L1_PREF",
+	.code = 0xd8b8,
+	.short_desc = "hw initiated , include sw streaming forms as well , include sw streams as a separate event",
+	.long_desc = "LSU",
+},
+{
+	.name = "PM_LSU_L1_SW_PREF",
+	.code = 0xc898,
+	.short_desc = "Software L1 Prefetches, including SW Transient Prefetches, on both pipes",
+	.long_desc = "LSU",
+},
+{
+	.name = "PM_LSU_LDF",
+	.code = 0xc884,
+	.short_desc = "FPU loads only on LS2/LS3 ie LU0/LU1",
+	.long_desc = "LSU",
+},
+{
+	.name = "PM_LSU_LDX",
+	.code = 0xc888,
+	.short_desc = "Vector loads can issue only on LS2/LS3",
+	.long_desc = "LSU",
+},
+{
+	.name = "PM_LSU_LMQ_FULL_CYC",
+	.code = 0xd0a2,
+	.short_desc = "LMQ full",
+	.long_desc = "LMQ fullCycles LMQ full,",
+},
+{
+	.name = "PM_LSU_LMQ_S0_ALLOC",
+	.code = 0xd0a1,
+	.short_desc = "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_LSU_LMQ_S0_VALID",
+	.code = 0xd0a0,
+	.short_desc = "Slot 0 of LMQ valid",
+	.long_desc = "Slot 0 of LMQ validLMQ slot 0 valid",
+},
+{
+	.name = "PM_LSU_LMQ_SRQ_EMPTY_ALL_CYC",
+	.code = 0x3001c,
+	.short_desc = "ALL threads lsu empty (lmq and srq empty)",
+	.long_desc = "ALL threads lsu empty (lmq and srq empty). Issue HW016541",
+},
+{
+	.name = "PM_LSU_LMQ_SRQ_EMPTY_CYC",
+	.code = 0x2003e,
+	.short_desc = "LSU empty (lmq and srq empty)",
+	.long_desc = "LSU empty (lmq and srq empty).",
+},
+{
+	.name = "PM_LSU_LRQ_S0_ALLOC",
+	.code = 0xd09f,
+	.short_desc = "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_LSU_LRQ_S0_VALID",
+	.code = 0xd09e,
+	.short_desc = "Slot 0 of LRQ valid",
+	.long_desc = "Slot 0 of LRQ validLRQ slot 0 valid",
+},
+{
+	.name = "PM_LSU_LRQ_S43_ALLOC",
+	.code = 0xf091,
+	.short_desc = "LRQ slot 43 was released",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_LSU_LRQ_S43_VALID",
+	.code = 0xf090,
+	.short_desc = "LRQ slot 43 was busy",
+	.long_desc = "LRQ slot 43 was busy42",
+},
+{
+	.name = "PM_LSU_MRK_DERAT_MISS",
+	.code = 0x30162,
+	.short_desc = "DERAT Reloaded (Miss)",
+	.long_desc = "DERAT Reloaded (Miss).",
+},
+{
+	.name = "PM_LSU_NCLD",
+	.code = 0xc88c,
+	.short_desc = "count at finish so can return only on ls0 or ls1",
+	.long_desc = "LSU",
+},
+{
+	.name = "PM_LSU_NCST",
+	.code = 0xc092,
+	.short_desc = "Non-cachable Stores sent to nest",
+	.long_desc = "Non-cachable Stores sent to nest42",
+},
+{
+	.name = "PM_LSU_REJECT",
+	.code = 0x10064,
+	.short_desc = "LSU Reject (up to 4 per cycle)",
+	.long_desc = "LSU Reject (up to 4 per cycle).",
+},
+{
+	.name = "PM_LSU_REJECT_ERAT_MISS",
+	.code = 0x2e05c,
+	.short_desc = "LSU Reject due to ERAT (up to 4 per cycles)",
+	.long_desc = "LSU Reject due to ERAT (up to 4 per cycles).",
+},
+{
+	.name = "PM_LSU_REJECT_LHS",
+	.code = 0x4e05c,
+	.short_desc = "LSU Reject due to LHS (up to 4 per cycle)",
+	.long_desc = "LSU Reject due to LHS (up to 4 per cycle).",
+},
+{
+	.name = "PM_LSU_REJECT_LMQ_FULL",
+	.code = 0x1e05c,
+	.short_desc = "LSU reject due to LMQ full ( 4 per cycle)",
+	.long_desc = "LSU reject due to LMQ full ( 4 per cycle).",
+},
+{
+	.name = "PM_LSU_SET_MPRED",
+	.code = 0xd082,
+	.short_desc = "Line already in cache at reload time",
+	.long_desc = "Line already in cache at reload time42",
+},
+{
+	.name = "PM_LSU_SRQ_EMPTY_CYC",
+	.code = 0x40008,
+	.short_desc = "ALL threads srq empty",
+	.long_desc = "All threads srq empty.",
+},
+{
+	.name = "PM_LSU_SRQ_FULL_CYC",
+	.code = 0x1001a,
+	.short_desc = "Storage Queue is full and is blocking dispatch",
+	.long_desc = "SRQ is Full.",
+},
+{
+	.name = "PM_LSU_SRQ_S0_ALLOC",
+	.code = 0xd09d,
+	.short_desc = "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_LSU_SRQ_S0_VALID",
+	.code = 0xd09c,
+	.short_desc = "Slot 0 of SRQ valid",
+	.long_desc = "Slot 0 of SRQ validSRQ slot 0 valid",
+},
+{
+	.name = "PM_LSU_SRQ_S39_ALLOC",
+	.code = 0xf093,
+	.short_desc = "SRQ slot 39 was released",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_LSU_SRQ_S39_VALID",
+	.code = 0xf092,
+	.short_desc = "SRQ slot 39 was busy",
+	.long_desc = "SRQ slot 39 was busy42",
+},
+{
+	.name = "PM_LSU_SRQ_SYNC",
+	.code = 0xd09b,
+	.short_desc = "A sync in the SRQ ended",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_LSU_SRQ_SYNC_CYC",
+	.code = 0xd09a,
+	.short_desc = "A sync is in the SRQ (edge detect to count)",
+	.long_desc = "A sync is in the SRQ (edge detect to count)SRQ sync duration",
+},
+{
+	.name = "PM_LSU_STORE_REJECT",
+	.code = 0xf084,
+	.short_desc = "Store reject on either pipe",
+	.long_desc = "LSU",
+},
+{
+	.name = "PM_LSU_TWO_TABLEWALK_CYC",
+	.code = 0xd0a6,
+	.short_desc = "Cycles when two tablewalks pending on this thread",
+	.long_desc = "Cycles when two tablewalks pending on this thread42",
+},
+{
+	.name = "PM_LWSYNC",
+	.code = 0x5094,
+	.short_desc = "threaded version, IC Misses where we got EA dir hit but no sector valids were on. ICBI took line out",
+	.long_desc = "threaded version, IC Misses where we got EA dir hit but no sector valids were on. ICBI took line out",
+},
+{
+	.name = "PM_LWSYNC_HELD",
+	.code = 0x209a,
+	.short_desc = "LWSYNC held at dispatch",
+	.long_desc = "LWSYNC held at dispatch",
+},
+{
+	.name = "PM_MEM_CO",
+	.code = 0x4c058,
+	.short_desc = "Memory castouts from this lpar",
+	.long_desc = "Memory castouts from this lpar.",
+},
+{
+	.name = "PM_MEM_LOC_THRESH_IFU",
+	.code = 0x10058,
+	.short_desc = "Local Memory above threshold for IFU speculation control",
+	.long_desc = "Local Memory above threshold for IFU speculation control.",
+},
+{
+	.name = "PM_MEM_LOC_THRESH_LSU_HIGH",
+	.code = 0x40056,
+	.short_desc = "Local memory above threshold for LSU medium",
+	.long_desc = "Local memory above threshold for LSU medium.",
+},
+{
+	.name = "PM_MEM_LOC_THRESH_LSU_MED",
+	.code = 0x1c05e,
+	.short_desc = "Local memory above theshold for data prefetch",
+	.long_desc = "Local memory above theshold for data prefetch.",
+},
+{
+	.name = "PM_MEM_PREF",
+	.code = 0x2c058,
+	.short_desc = "Memory prefetch for this lpar. Includes L4",
+	.long_desc = "Memory prefetch for this lpar.",
+},
+{
+	.name = "PM_MEM_READ",
+	.code = 0x10056,
+	.short_desc = "Reads from Memory from this lpar (includes data/inst/xlate/l1prefetch/inst prefetch). Includes L4",
+	.long_desc = "Reads from Memory from this lpar (includes data/inst/xlate/l1prefetch/inst prefetch).",
+},
+{
+	.name = "PM_MEM_RWITM",
+	.code = 0x3c05e,
+	.short_desc = "Memory rwitm for this lpar",
+	.long_desc = "Memory rwitm for this lpar.",
+},
+{
+	.name = "PM_MRK_BACK_BR_CMPL",
+	.code = 0x3515e,
+	.short_desc = "Marked branch instruction completed with a target address less than current instruction address",
+	.long_desc = "Marked branch instruction completed with a target address less than current instruction address.",
+},
+{
+	.name = "PM_MRK_BRU_FIN",
+	.code = 0x2013a,
+	.short_desc = "bru marked instr finish",
+	.long_desc = "bru marked instr finish.",
+},
+{
+	.name = "PM_MRK_BR_CMPL",
+	.code = 0x1016e,
+	.short_desc = "Branch Instruction completed",
+	.long_desc = "Branch Instruction completed.",
+},
+{
+	.name = "PM_MRK_BR_MPRED_CMPL",
+	.code = 0x301e4,
+	.short_desc = "Marked Branch Mispredicted",
+	.long_desc = "Marked Branch Mispredicted.",
+},
+{
+	.name = "PM_MRK_BR_TAKEN_CMPL",
+	.code = 0x101e2,
+	.short_desc = "Marked Branch Taken completed",
+	.long_desc = "Marked Branch Taken.",
+},
+{
+	.name = "PM_MRK_CRU_FIN",
+	.code = 0x3013a,
+	.short_desc = "IFU non-branch finished",
+	.long_desc = "IFU non-branch marked instruction finished.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL2L3_MOD",
+	.code = 0x4d148,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL2L3_MOD_CYC",
+	.code = 0x2d128,
+	.short_desc = "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL2L3_SHR",
+	.code = 0x3d148,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL2L3_SHR_CYC",
+	.code = 0x2c128,
+	.short_desc = "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL4",
+	.code = 0x3d14c,
+	.short_desc = "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DL4_CYC",
+	.code = 0x2c12c,
+	.short_desc = "Duration in cycles to reload from another chip's L4 on a different Node or Group (Distant) due to a marked load",
+	.long_desc = "Duration in cycles to reload from another chip's L4 on a different Node or Group (Distant) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DMEM",
+	.code = 0x4d14c,
+	.short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_DMEM_CYC",
+	.code = 0x2d12c,
+	.short_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group (Distant) due to a marked load",
+	.long_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group (Distant) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2",
+	.code = 0x1d142,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L21_MOD",
+	.code = 0x4d146,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L21_MOD_CYC",
+	.code = 0x2d126,
+	.short_desc = "Duration in cycles to reload with Modified (M) data from another core's L2 on the same chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Modified (M) data from another core's L2 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L21_SHR",
+	.code = 0x3d146,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L21_SHR_CYC",
+	.code = 0x2c126,
+	.short_desc = "Duration in cycles to reload with Shared (S) data from another core's L2 on the same chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Shared (S) data from another core's L2 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2MISS",
+	.code = 0x1d14e,
+	.short_desc = "Data cache reload L2 miss",
+	.long_desc = "Data cache reload L2 miss.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2MISS_CYC",
+	.code = 0x4c12e,
+	.short_desc = "Duration in cycles to reload from a localtion other than the local core's L2 due to a marked load",
+	.long_desc = "Duration in cycles to reload from a localtion other than the local core's L2 due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_CYC",
+	.code = 0x4c122,
+	.short_desc = "Duration in cycles to reload from local core's L2 due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L2 due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_LDHITST",
+	.code = 0x3d140,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_LDHITST_CYC",
+	.code = 0x2c120,
+	.short_desc = "Duration in cycles to reload from local core's L2 with load hit store conflict due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L2 with load hit store conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_OTHER",
+	.code = 0x4d140,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_OTHER_CYC",
+	.code = 0x2d120,
+	.short_desc = "Duration in cycles to reload from local core's L2 with dispatch conflict due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L2 with dispatch conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_MEPF",
+	.code = 0x2d140,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_MEPF_CYC",
+	.code = 0x4d120,
+	.short_desc = "Duration in cycles to reload from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_NO_CONFLICT",
+	.code = 0x1d140,
+	.short_desc = "The processor's data cache was reloaded from local core's L2 without conflict due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L2 without conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L2_NO_CONFLICT_CYC",
+	.code = 0x4c120,
+	.short_desc = "Duration in cycles to reload from local core's L2 without conflict due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L2 without conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3",
+	.code = 0x4d142,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_ECO_MOD",
+	.code = 0x4d144,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_ECO_MOD_CYC",
+	.code = 0x2d124,
+	.short_desc = "Duration in cycles to reload with Modified (M) data from another core's ECO L3 on the same chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Modified (M) data from another core's ECO L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_ECO_SHR",
+	.code = 0x3d144,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_ECO_SHR_CYC",
+	.code = 0x2c124,
+	.short_desc = "Duration in cycles to reload with Shared (S) data from another core's ECO L3 on the same chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Shared (S) data from another core's ECO L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_MOD",
+	.code = 0x2d144,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_MOD_CYC",
+	.code = 0x4d124,
+	.short_desc = "Duration in cycles to reload with Modified (M) data from another core's L3 on the same chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Modified (M) data from another core's L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_SHR",
+	.code = 0x1d146,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L31_SHR_CYC",
+	.code = 0x4c126,
+	.short_desc = "Duration in cycles to reload with Shared (S) data from another core's L3 on the same chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Shared (S) data from another core's L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3MISS",
+	.code = 0x201e4,
+	.short_desc = "The processor's data cache was reloaded from a localtion other than the local core's L3 due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from a localtion other than the local core's L3 due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3MISS_CYC",
+	.code = 0x2d12e,
+	.short_desc = "Duration in cycles to reload from a localtion other than the local core's L3 due to a marked load",
+	.long_desc = "Duration in cycles to reload from a localtion other than the local core's L3 due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3_CYC",
+	.code = 0x2d122,
+	.short_desc = "Duration in cycles to reload from local core's L3 due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L3 due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3_DISP_CONFLICT",
+	.code = 0x3d142,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3_DISP_CONFLICT_CYC",
+	.code = 0x2c122,
+	.short_desc = "Duration in cycles to reload from local core's L3 with dispatch conflict due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L3 with dispatch conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3_MEPF",
+	.code = 0x2d142,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3_MEPF_CYC",
+	.code = 0x4d122,
+	.short_desc = "Duration in cycles to reload from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3_NO_CONFLICT",
+	.code = 0x1d144,
+	.short_desc = "The processor's data cache was reloaded from local core's L3 without conflict due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from local core's L3 without conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_L3_NO_CONFLICT_CYC",
+	.code = 0x4c124,
+	.short_desc = "Duration in cycles to reload from local core's L3 without conflict due to a marked load",
+	.long_desc = "Duration in cycles to reload from local core's L3 without conflict due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_LL4",
+	.code = 0x1d14c,
+	.short_desc = "The processor's data cache was reloaded from the local chip's L4 cache due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from the local chip's L4 cache due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_LL4_CYC",
+	.code = 0x4c12c,
+	.short_desc = "Duration in cycles to reload from the local chip's L4 cache due to a marked load",
+	.long_desc = "Duration in cycles to reload from the local chip's L4 cache due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_LMEM",
+	.code = 0x2d148,
+	.short_desc = "The processor's data cache was reloaded from the local chip's Memory due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from the local chip's Memory due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_LMEM_CYC",
+	.code = 0x4d128,
+	.short_desc = "Duration in cycles to reload from the local chip's Memory due to a marked load",
+	.long_desc = "Duration in cycles to reload from the local chip's Memory due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_MEM",
+	.code = 0x201e0,
+	.short_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_MEMORY",
+	.code = 0x2d14c,
+	.short_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_MEMORY_CYC",
+	.code = 0x4d12c,
+	.short_desc = "Duration in cycles to reload from a memory location including L4 from local remote or distant due to a marked load",
+	.long_desc = "Duration in cycles to reload from a memory location including L4 from local remote or distant due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_OFF_CHIP_CACHE",
+	.code = 0x4d14a,
+	.short_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load",
+	.long_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_OFF_CHIP_CACHE_CYC",
+	.code = 0x2d12a,
+	.short_desc = "Duration in cycles to reload either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load",
+	.long_desc = "Duration in cycles to reload either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_ON_CHIP_CACHE",
+	.code = 0x1d148,
+	.short_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_ON_CHIP_CACHE_CYC",
+	.code = 0x4c128,
+	.short_desc = "Duration in cycles to reload either shared or modified data from another core's L2/L3 on the same chip due to a marked load",
+	.long_desc = "Duration in cycles to reload either shared or modified data from another core's L2/L3 on the same chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL2L3_MOD",
+	.code = 0x2d146,
+	.short_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL2L3_MOD_CYC",
+	.code = 0x4d126,
+	.short_desc = "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL2L3_SHR",
+	.code = 0x1d14a,
+	.short_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load",
+	.long_desc = "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL2L3_SHR_CYC",
+	.code = 0x4c12a,
+	.short_desc = "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load",
+	.long_desc = "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL4",
+	.code = 0x2d14a,
+	.short_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RL4_CYC",
+	.code = 0x4d12a,
+	.short_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load",
+	.long_desc = "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RMEM",
+	.code = 0x3d14a,
+	.short_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load",
+	.long_desc = "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DATA_FROM_RMEM_CYC",
+	.code = 0x2c12a,
+	.short_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load",
+	.long_desc = "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load.",
+},
+{
+	.name = "PM_MRK_DCACHE_RELOAD_INTV",
+	.code = 0x40118,
+	.short_desc = "Combined Intervention event",
+	.long_desc = "Combined Intervention event.",
+},
+{
+	.name = "PM_MRK_DERAT_MISS",
+	.code = 0x301e6,
+	.short_desc = "Erat Miss (TLB Access) All page sizes",
+	.long_desc = "Erat Miss (TLB Access) All page sizes.",
+},
+{
+	.name = "PM_MRK_DERAT_MISS_16G",
+	.code = 0x4d154,
+	.short_desc = "Marked Data ERAT Miss (Data TLB Access) page size 16G",
+	.long_desc = "Marked Data ERAT Miss (Data TLB Access) page size 16G.",
+},
+{
+	.name = "PM_MRK_DERAT_MISS_16M",
+	.code = 0x3d154,
+	.short_desc = "Marked Data ERAT Miss (Data TLB Access) page size 16M",
+	.long_desc = "Marked Data ERAT Miss (Data TLB Access) page size 16M.",
+},
+{
+	.name = "PM_MRK_DERAT_MISS_4K",
+	.code = 0x1d156,
+	.short_desc = "Marked Data ERAT Miss (Data TLB Access) page size 4K",
+	.long_desc = "Marked Data ERAT Miss (Data TLB Access) page size 4K.",
+},
+{
+	.name = "PM_MRK_DERAT_MISS_64K",
+	.code = 0x2d154,
+	.short_desc = "Marked Data ERAT Miss (Data TLB Access) page size 64K",
+	.long_desc = "Marked Data ERAT Miss (Data TLB Access) page size 64K.",
+},
+{
+	.name = "PM_MRK_DFU_FIN",
+	.code = 0x20132,
+	.short_desc = "Decimal Unit marked Instruction Finish",
+	.long_desc = "Decimal Unit marked Instruction Finish.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_DL2L3_MOD",
+	.code = 0x4f148,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_DL2L3_SHR",
+	.code = 0x3f148,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_DL4",
+	.code = 0x3f14c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_DMEM",
+	.code = 0x4f14c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L2",
+	.code = 0x1f142,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L21_MOD",
+	.code = 0x4f146,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L21_SHR",
+	.code = 0x3f146,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L2MISS",
+	.code = 0x1f14e,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+	.code = 0x3f140,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+	.code = 0x4f140,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L2_MEPF",
+	.code = 0x2f140,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L2_NO_CONFLICT",
+	.code = 0x1f140,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L3",
+	.code = 0x4f142,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L31_ECO_MOD",
+	.code = 0x4f144,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L31_ECO_SHR",
+	.code = 0x3f144,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L31_MOD",
+	.code = 0x2f144,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L31_SHR",
+	.code = 0x1f146,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L3MISS",
+	.code = 0x4f14e,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L3_DISP_CONFLICT",
+	.code = 0x3f142,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L3_MEPF",
+	.code = 0x2f142,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_L3_NO_CONFLICT",
+	.code = 0x1f144,
+	.short_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_LL4",
+	.code = 0x1f14c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_LMEM",
+	.code = 0x2f148,
+	.short_desc = "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_MEMORY",
+	.code = 0x2f14c,
+	.short_desc = "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_OFF_CHIP_CACHE",
+	.code = 0x4f14a,
+	.short_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_ON_CHIP_CACHE",
+	.code = 0x1f148,
+	.short_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_RL2L3_MOD",
+	.code = 0x2f146,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_RL2L3_SHR",
+	.code = 0x1f14a,
+	.short_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_RL4",
+	.code = 0x2f14a,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DPTEG_FROM_RMEM",
+	.code = 0x3f14a,
+	.short_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request",
+	.long_desc = "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request.",
+},
+{
+	.name = "PM_MRK_DTLB_MISS",
+	.code = 0x401e4,
+	.short_desc = "Marked dtlb miss",
+	.long_desc = "Marked dtlb miss.",
+},
+{
+	.name = "PM_MRK_DTLB_MISS_16G",
+	.code = 0x1d158,
+	.short_desc = "Marked Data TLB Miss page size 16G",
+	.long_desc = "Marked Data TLB Miss page size 16G.",
+},
+{
+	.name = "PM_MRK_DTLB_MISS_16M",
+	.code = 0x4d156,
+	.short_desc = "Marked Data TLB Miss page size 16M",
+	.long_desc = "Marked Data TLB Miss page size 16M.",
+},
+{
+	.name = "PM_MRK_DTLB_MISS_4K",
+	.code = 0x2d156,
+	.short_desc = "Marked Data TLB Miss page size 4k",
+	.long_desc = "Marked Data TLB Miss page size 4k.",
+},
+{
+	.name = "PM_MRK_DTLB_MISS_64K",
+	.code = 0x3d156,
+	.short_desc = "Marked Data TLB Miss page size 64K",
+	.long_desc = "Marked Data TLB Miss page size 64K.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_BKILL",
+	.code = 0x40154,
+	.short_desc = "Marked store had to do a bkill",
+	.long_desc = "Marked store had to do a bkill.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_BKILL_CYC",
+	.code = 0x2f150,
+	.short_desc = "cycles L2 RC took for a bkill",
+	.long_desc = "cycles L2 RC took for a bkill.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_CLAIM_RTY",
+	.code = 0x3015e,
+	.short_desc = "Sampled store did a rwitm and got a rty",
+	.long_desc = "Sampled store did a rwitm and got a rty.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_DCLAIM",
+	.code = 0x30154,
+	.short_desc = "Marked store had to do a dclaim",
+	.long_desc = "Marked store had to do a dclaim.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_DCLAIM_CYC",
+	.code = 0x2f152,
+	.short_desc = "cycles L2 RC took for a dclaim",
+	.long_desc = "cycles L2 RC took for a dclaim.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_MATCH",
+	.code = 0x30156,
+	.short_desc = "ttype and cresp matched as specified in MMCR1",
+	.long_desc = "ttype and cresp matched as specified in MMCR1.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_MATCH_CYC",
+	.code = 0x4f152,
+	.short_desc = "cresp/ttype match cycles",
+	.long_desc = "cresp/ttype match cycles.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_RD_RTY",
+	.code = 0x4015e,
+	.short_desc = "Sampled L2 reads retry count",
+	.long_desc = "Sampled L2 reads retry count.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_RD_T_INTV",
+	.code = 0x1015e,
+	.short_desc = "Sampled Read got a T intervention",
+	.long_desc = "Sampled Read got a T intervention.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_RWITM_CYC",
+	.code = 0x4f150,
+	.short_desc = "cycles L2 RC took for a rwitm",
+	.long_desc = "cycles L2 RC took for a rwitm.",
+},
+{
+	.name = "PM_MRK_FAB_RSP_RWITM_RTY",
+	.code = 0x2015e,
+	.short_desc = "Sampled store did a rwitm and got a rty",
+	.long_desc = "Sampled store did a rwitm and got a rty.",
+},
+{
+	.name = "PM_MRK_FILT_MATCH",
+	.code = 0x2013c,
+	.short_desc = "Marked filter Match",
+	.long_desc = "Marked filter Match.",
+},
+{
+	.name = "PM_MRK_FIN_STALL_CYC",
+	.code = 0x1013c,
+	.short_desc = "Marked instruction Finish Stall cycles (marked finish after NTC) (use edge detect to count )",
+	.long_desc = "Marked instruction Finish Stall cycles (marked finish after NTC) (use edge detect to count #).",
+},
+{
+	.name = "PM_MRK_FXU_FIN",
+	.code = 0x20134,
+	.short_desc = "fxu marked instr finish",
+	.long_desc = "fxu marked instr finish.",
+},
+{
+	.name = "PM_MRK_GRP_CMPL",
+	.code = 0x40130,
+	.short_desc = "marked instruction finished (completed)",
+	.long_desc = "marked instruction finished (completed).",
+},
+{
+	.name = "PM_MRK_GRP_IC_MISS",
+	.code = 0x4013a,
+	.short_desc = "Marked Group experienced I cache miss",
+	.long_desc = "Marked Group experienced I cache miss.",
+},
+{
+	.name = "PM_MRK_GRP_NTC",
+	.code = 0x3013c,
+	.short_desc = "Marked group ntc cycles.",
+	.long_desc = "Marked group ntc cycles.",
+},
+{
+	.name = "PM_MRK_INST_CMPL",
+	.code = 0x401e0,
+	.short_desc = "marked instruction completed",
+	.long_desc = "marked instruction completed.",
+},
+{
+	.name = "PM_MRK_INST_DECODED",
+	.code = 0x20130,
+	.short_desc = "marked instruction decoded",
+	.long_desc = "marked instruction decoded. Name from ISU?",
+},
+{
+	.name = "PM_MRK_INST_DISP",
+	.code = 0x101e0,
+	.short_desc = "The thread has dispatched a randomly sampled marked instruction",
+	.long_desc = "Marked Instruction dispatched.",
+},
+{
+	.name = "PM_MRK_INST_FIN",
+	.code = 0x30130,
+	.short_desc = "marked instruction finished",
+	.long_desc = "marked instr finish any unit .",
+},
+{
+	.name = "PM_MRK_INST_FROM_L3MISS",
+	.code = 0x401e6,
+	.short_desc = "Marked instruction was reloaded from a location beyond the local chiplet",
+	.long_desc = "n/a",
+},
+{
+	.name = "PM_MRK_INST_ISSUED",
+	.code = 0x10132,
+	.short_desc = "Marked instruction issued",
+	.long_desc = "Marked instruction issued.",
+},
+{
+	.name = "PM_MRK_INST_TIMEO",
+	.code = 0x40134,
+	.short_desc = "marked Instruction finish timeout (instruction lost)",
+	.long_desc = "marked Instruction finish timeout (instruction lost).",
+},
+{
+	.name = "PM_MRK_L1_ICACHE_MISS",
+	.code = 0x101e4,
+	.short_desc = "sampled Instruction suffered an icache Miss",
+	.long_desc = "Marked L1 Icache Miss.",
+},
+{
+	.name = "PM_MRK_L1_RELOAD_VALID",
+	.code = 0x101ea,
+	.short_desc = "Marked demand reload",
+	.long_desc = "Marked demand reload.",
+},
+{
+	.name = "PM_MRK_L2_RC_DISP",
+	.code = 0x20114,
+	.short_desc = "Marked Instruction RC dispatched in L2",
+	.long_desc = "Marked Instruction RC dispatched in L2.",
+},
+{
+	.name = "PM_MRK_L2_RC_DONE",
+	.code = 0x3012a,
+	.short_desc = "Marked RC done",
+	.long_desc = "Marked RC done.",
+},
+{
+	.name = "PM_MRK_LARX_FIN",
+	.code = 0x40116,
+	.short_desc = "Larx finished",
+	.long_desc = "Larx finished .",
+},
+{
+	.name = "PM_MRK_LD_MISS_EXPOSED",
+	.code = 0x1013f,
+	.short_desc = "Marked Load exposed Miss (exposed period ended)",
+	.long_desc = "Marked Load exposed Miss (use edge detect to count #)",
+},
+{
+	.name = "PM_MRK_LD_MISS_EXPOSED_CYC",
+	.code = 0x1013e,
+	.short_desc = "Marked Load exposed Miss cycles",
+	.long_desc = "Marked Load exposed Miss (use edge detect to count #).",
+},
+{
+	.name = "PM_MRK_LD_MISS_L1",
+	.code = 0x201e2,
+	.short_desc = "Marked DL1 Demand Miss counted at exec time",
+	.long_desc = "Marked DL1 Demand Miss counted at exec time.",
+},
+{
+	.name = "PM_MRK_LD_MISS_L1_CYC",
+	.code = 0x4013e,
+	.short_desc = "Marked ld latency",
+	.long_desc = "Marked ld latency.",
+},
+{
+	.name = "PM_MRK_LSU_FIN",
+	.code = 0x40132,
+	.short_desc = "lsu marked instr finish",
+	.long_desc = "lsu marked instr finish.",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH",
+	.code = 0xd180,
+	.short_desc = "Flush: (marked) : All Cases",
+	.long_desc = "Flush: (marked) : All Cases",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH_LRQ",
+	.code = 0xd188,
+	.short_desc = "Flush: (marked) LRQ",
+	.long_desc = "Flush: (marked) LRQ. Marked LRQ flushes",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH_SRQ",
+	.code = 0xd18a,
+	.short_desc = "Flush: (marked) SRQ",
+	.long_desc = "Flush: (marked) SRQ. Marked SRQ lhs flushes",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH_ULD",
+	.code = 0xd184,
+	.short_desc = "Flush: (marked) Unaligned Load",
+	.long_desc = "Flush: (marked) Unaligned Load. Marked unaligned load flushes",
+},
+{
+	.name = "PM_MRK_LSU_FLUSH_UST",
+	.code = 0xd186,
+	.short_desc = "Flush: (marked) Unaligned Store",
+	.long_desc = "Flush: (marked) Unaligned Store. Marked unaligned store flushes",
+},
+{
+	.name = "PM_MRK_LSU_REJECT",
+	.code = 0x40164,
+	.short_desc = "LSU marked reject (up to 2 per cycle)",
+	.long_desc = "LSU marked reject (up to 2 per cycle).",
+},
+{
+	.name = "PM_MRK_LSU_REJECT_ERAT_MISS",
+	.code = 0x30164,
+	.short_desc = "LSU marked reject due to ERAT (up to 2 per cycle)",
+	.long_desc = "LSU marked reject due to ERAT (up to 2 per cycle).",
+},
+{
+	.name = "PM_MRK_NTF_FIN",
+	.code = 0x20112,
+	.short_desc = "Marked next to finish instruction finished",
+	.long_desc = "Marked next to finish instruction finished.",
+},
+{
+	.name = "PM_MRK_RUN_CYC",
+	.code = 0x1d15e,
+	.short_desc = "Marked run cycles",
+	.long_desc = "Marked run cycles.",
+},
+{
+	.name = "PM_MRK_SRC_PREF_TRACK_EFF",
+	.code = 0x1d15a,
+	.short_desc = "Marked src pref track was effective",
+	.long_desc = "Marked src pref track was effective.",
+},
+{
+	.name = "PM_MRK_SRC_PREF_TRACK_INEFF",
+	.code = 0x3d15a,
+	.short_desc = "Prefetch tracked was ineffective for marked src",
+	.long_desc = "Prefetch tracked was ineffective for marked src.",
+},
+{
+	.name = "PM_MRK_SRC_PREF_TRACK_MOD",
+	.code = 0x4d15c,
+	.short_desc = "Prefetch tracked was moderate for marked src",
+	.long_desc = "Prefetch tracked was moderate for marked src.",
+},
+{
+	.name = "PM_MRK_SRC_PREF_TRACK_MOD_L2",
+	.code = 0x1d15c,
+	.short_desc = "Marked src Prefetch Tracked was moderate (source L2)",
+	.long_desc = "Marked src Prefetch Tracked was moderate (source L2).",
+},
+{
+	.name = "PM_MRK_SRC_PREF_TRACK_MOD_L3",
+	.code = 0x3d15c,
+	.short_desc = "Prefetch tracked was moderate (L3 hit) for marked src",
+	.long_desc = "Prefetch tracked was moderate (L3 hit) for marked src.",
+},
+{
+	.name = "PM_MRK_STALL_CMPLU_CYC",
+	.code = 0x3013e,
+	.short_desc = "Marked Group completion Stall",
+	.long_desc = "Marked Group Completion Stall cycles (use edge detect to count #).",
+},
+{
+	.name = "PM_MRK_STCX_FAIL",
+	.code = 0x3e158,
+	.short_desc = "marked stcx failed",
+	.long_desc = "marked stcx failed.",
+},
+{
+	.name = "PM_MRK_ST_CMPL",
+	.code = 0x10134,
+	.short_desc = "marked store completed and sent to nest",
+	.long_desc = "Marked store completed.",
+},
+{
+	.name = "PM_MRK_ST_CMPL_INT",
+	.code = 0x30134,
+	.short_desc = "marked store finished with intervention",
+	.long_desc = "marked store complete (data home) with intervention.",
+},
+{
+	.name = "PM_MRK_ST_DRAIN_TO_L2DISP_CYC",
+	.code = 0x3f150,
+	.short_desc = "cycles to drain st from core to L2",
+	.long_desc = "cycles to drain st from core to L2.",
+},
+{
+	.name = "PM_MRK_ST_FWD",
+	.code = 0x3012c,
+	.short_desc = "Marked st forwards",
+	.long_desc = "Marked st forwards.",
+},
+{
+	.name = "PM_MRK_ST_L2DISP_TO_CMPL_CYC",
+	.code = 0x1f150,
+	.short_desc = "cycles from L2 rc disp to l2 rc completion",
+	.long_desc = "cycles from L2 rc disp to l2 rc completion.",
+},
+{
+	.name = "PM_MRK_ST_NEST",
+	.code = 0x20138,
+	.short_desc = "Marked store sent to nest",
+	.long_desc = "Marked store sent to nest.",
+},
+{
+	.name = "PM_MRK_TGT_PREF_TRACK_EFF",
+	.code = 0x1c15a,
+	.short_desc = "Marked target pref track was effective",
+	.long_desc = "Marked target pref track was effective.",
+},
+{
+	.name = "PM_MRK_TGT_PREF_TRACK_INEFF",
+	.code = 0x3c15a,
+	.short_desc = "Prefetch tracked was ineffective for marked target",
+	.long_desc = "Prefetch tracked was ineffective for marked target.",
+},
+{
+	.name = "PM_MRK_TGT_PREF_TRACK_MOD",
+	.code = 0x4c15c,
+	.short_desc = "Prefetch tracked was moderate for marked target",
+	.long_desc = "Prefetch tracked was moderate for marked target.",
+},
+{
+	.name = "PM_MRK_TGT_PREF_TRACK_MOD_L2",
+	.code = 0x1c15c,
+	.short_desc = "Marked target Prefetch Tracked was moderate (source L2)",
+	.long_desc = "Marked target Prefetch Tracked was moderate (source L2).",
+},
+{
+	.name = "PM_MRK_TGT_PREF_TRACK_MOD_L3",
+	.code = 0x3c15c,
+	.short_desc = "Prefetch tracked was moderate (L3 hit) for marked target",
+	.long_desc = "Prefetch tracked was moderate (L3 hit) for marked target.",
+},
+{
+	.name = "PM_MRK_VSU_FIN",
+	.code = 0x30132,
+	.short_desc = "VSU marked instr finish",
+	.long_desc = "vsu (fpu) marked instr finish.",
+},
+{
+	.name = "PM_MULT_MRK",
+	.code = 0x3d15e,
+	.short_desc = "mult marked instr",
+	.long_desc = "mult marked instr.",
+},
+{
+	.name = "PM_NESTED_TEND",
+	.code = 0x20b0,
+	.short_desc = "Completion time nested tend",
+	.long_desc = "Completion time nested tend",
+},
+{
+	.name = "PM_NEST_REF_CLK",
+	.code = 0x3006e,
+	.short_desc = "Multiply by 4 to obtain the number of PB cycles",
+	.long_desc = "Nest reference clocks.",
+},
+{
+	.name = "PM_NON_FAV_TBEGIN",
+	.code = 0x20b6,
+	.short_desc = "Dispatch time non favored tbegin",
+	.long_desc = "Dispatch time non favored tbegin",
+},
+{
+	.name = "PM_NON_TM_RST_SC",
+	.code = 0x328084,
+	.short_desc = "non tm snp rst tm sc",
+	.long_desc = "non tm snp rst tm sc",
+},
+{
+	.name = "PM_NTCG_ALL_FIN",
+	.code = 0x2001a,
+	.short_desc = "Cycles after all instructions have finished to group completed",
+	.long_desc = "Cycles after all instructions have finished to group completed.",
+},
+{
+	.name = "PM_OUTER_TBEGIN",
+	.code = 0x20ac,
+	.short_desc = "Completion time outer tbegin",
+	.long_desc = "Completion time outer tbegin",
+},
+{
+	.name = "PM_OUTER_TEND",
+	.code = 0x20ae,
+	.short_desc = "Completion time outer tend",
+	.long_desc = "Completion time outer tend",
+},
+{
+	.name = "PM_PMC1_OVERFLOW",
+	.code = 0x20010,
+	.short_desc = "Overflow from counter 1",
+	.long_desc = "Overflow from counter 1.",
+},
+{
+	.name = "PM_PMC2_OVERFLOW",
+	.code = 0x30010,
+	.short_desc = "Overflow from counter 2",
+	.long_desc = "Overflow from counter 2.",
+},
+{
+	.name = "PM_PMC2_REWIND",
+	.code = 0x30020,
+	.short_desc = "PMC2 Rewind Event (did not match condition)",
+	.long_desc = "PMC2 Rewind Event (did not match condition).",
+},
+{
+	.name = "PM_PMC2_SAVED",
+	.code = 0x10022,
+	.short_desc = "PMC2 Rewind Value saved",
+	.long_desc = "PMC2 Rewind Value saved (matched condition).",
+},
+{
+	.name = "PM_PMC3_OVERFLOW",
+	.code = 0x40010,
+	.short_desc = "Overflow from counter 3",
+	.long_desc = "Overflow from counter 3.",
+},
+{
+	.name = "PM_PMC4_OVERFLOW",
+	.code = 0x10010,
+	.short_desc = "Overflow from counter 4",
+	.long_desc = "Overflow from counter 4.",
+},
+{
+	.name = "PM_PMC4_REWIND",
+	.code = 0x10020,
+	.short_desc = "PMC4 Rewind Event",
+	.long_desc = "PMC4 Rewind Event (did not match condition).",
+},
+{
+	.name = "PM_PMC4_SAVED",
+	.code = 0x30022,
+	.short_desc = "PMC4 Rewind Value saved (matched condition)",
+	.long_desc = "PMC4 Rewind Value saved (matched condition).",
+},
+{
+	.name = "PM_PMC5_OVERFLOW",
+	.code = 0x10024,
+	.short_desc = "Overflow from counter 5",
+	.long_desc = "Overflow from counter 5.",
+},
+{
+	.name = "PM_PMC6_OVERFLOW",
+	.code = 0x30024,
+	.short_desc = "Overflow from counter 6",
+	.long_desc = "Overflow from counter 6.",
+},
+{
+	.name = "PM_PREF_TRACKED",
+	.code = 0x2005a,
+	.short_desc = "Total number of Prefetch Operations that were tracked",
+	.long_desc = "Total number of Prefetch Operations that were tracked.",
+},
+{
+	.name = "PM_PREF_TRACK_EFF",
+	.code = 0x1005a,
+	.short_desc = "Prefetch Tracked was effective",
+	.long_desc = "Prefetch Tracked was effective.",
+},
+{
+	.name = "PM_PREF_TRACK_INEFF",
+	.code = 0x3005a,
+	.short_desc = "Prefetch tracked was ineffective",
+	.long_desc = "Prefetch tracked was ineffective.",
+},
+{
+	.name = "PM_PREF_TRACK_MOD",
+	.code = 0x4005a,
+	.short_desc = "Prefetch tracked was moderate",
+	.long_desc = "Prefetch tracked was moderate.",
+},
+{
+	.name = "PM_PREF_TRACK_MOD_L2",
+	.code = 0x1005c,
+	.short_desc = "Prefetch Tracked was moderate (source L2)",
+	.long_desc = "Prefetch Tracked was moderate (source L2).",
+},
+{
+	.name = "PM_PREF_TRACK_MOD_L3",
+	.code = 0x3005c,
+	.short_desc = "Prefetch tracked was moderate (L3)",
+	.long_desc = "Prefetch tracked was moderate (L3).",
+},
+{
+	.name = "PM_PROBE_NOP_DISP",
+	.code = 0x40014,
+	.short_desc = "ProbeNops dispatched",
+	.long_desc = "ProbeNops dispatched.",
+},
+{
+	.name = "PM_PTE_PREFETCH",
+	.code = 0xe084,
+	.short_desc = "PTE prefetches",
+	.long_desc = "PTE prefetches",
+},
+{
+	.name = "PM_PUMP_CPRED",
+	.code = 0x10054,
+	.short_desc = "Pump prediction correct. Counts across all types of pumps for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Pump prediction correct. Counts across all types of pumps for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).",
+},
+{
+	.name = "PM_PUMP_MPRED",
+	.code = 0x40052,
+	.short_desc = "Pump misprediction. Counts across all types of pumps for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Pump misprediction. Counts across all types of pumps for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).",
+},
+{
+	.name = "PM_RC0_ALLOC",
+	.code = 0x16081,
+	.short_desc = "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_RC0_BUSY",
+	.code = 0x16080,
+	.short_desc = "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+	.long_desc = "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+},
+{
+	.name = "PM_RC_LIFETIME_EXC_1024",
+	.code = 0x200301ea,
+	.short_desc = "Number of times the RC machine for a sampled instruction was active for more than 1024 cycles",
+	.long_desc = "Reload latency exceeded 1024 cyc",
+},
+{
+	.name = "PM_RC_LIFETIME_EXC_2048",
+	.code = 0x200401ec,
+	.short_desc = "Number of times the RC machine for a sampled instruction was active for more than 2048 cycles",
+	.long_desc = "Threshold counter exceeded a value of 2048",
+},
+{
+	.name = "PM_RC_LIFETIME_EXC_256",
+	.code = 0x200101e8,
+	.short_desc = "Number of times the RC machine for a sampled instruction was active for more than 256 cycles",
+	.long_desc = "Threshold counter exceed a count of 256",
+},
+{
+	.name = "PM_RC_LIFETIME_EXC_32",
+	.code = 0x200201e6,
+	.short_desc = "Number of times the RC machine for a sampled instruction was active for more than 32 cycles",
+	.long_desc = "Reload latency exceeded 32 cyc",
+},
+{
+	.name = "PM_RC_USAGE",
+	.code = 0x36088,
+	.short_desc = "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 RC machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running",
+	.long_desc = "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 RC machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running",
+},
+{
+	.name = "PM_RD_CLEARING_SC",
+	.code = 0x34808e,
+	.short_desc = "rd clearing sc",
+	.long_desc = "rd clearing sc",
+},
+{
+	.name = "PM_RD_FORMING_SC",
+	.code = 0x34808c,
+	.short_desc = "rd forming sc",
+	.long_desc = "rd forming sc",
+},
+{
+	.name = "PM_RD_HIT_PF",
+	.code = 0x428086,
+	.short_desc = "rd machine hit l3 pf machine",
+	.long_desc = "rd machine hit l3 pf machine",
+},
+{
+	.name = "PM_REAL_SRQ_FULL",
+	.code = 0x20004,
+	.short_desc = "Out of real srq entries",
+	.long_desc = "Out of real srq entries.",
+},
+{
+	.name = "PM_RUN_CYC",
+	.code = 0x600f4,
+	.short_desc = "Run_cycles",
+	.long_desc = "Run_cycles.",
+},
+{
+	.name = "PM_RUN_CYC_SMT2_MODE",
+	.code = 0x3006c,
+	.short_desc = "Cycles run latch is set and core is in SMT2 mode",
+	.long_desc = "Cycles run latch is set and core is in SMT2 mode.",
+},
+{
+	.name = "PM_RUN_CYC_SMT2_SHRD_MODE",
+	.code = 0x2006a,
+	.short_desc = "cycles this threads run latch is set and the core is in SMT2 shared mode",
+	.long_desc = "Cycles run latch is set and core is in SMT2-shared mode.",
+},
+{
+	.name = "PM_RUN_CYC_SMT2_SPLIT_MODE",
+	.code = 0x1006a,
+	.short_desc = "Cycles run latch is set and core is in SMT2-split mode",
+	.long_desc = "Cycles run latch is set and core is in SMT2-split mode.",
+},
+{
+	.name = "PM_RUN_CYC_SMT4_MODE",
+	.code = 0x2006c,
+	.short_desc = "cycles this threads run latch is set and the core is in SMT4 mode",
+	.long_desc = "Cycles run latch is set and core is in SMT4 mode.",
+},
+{
+	.name = "PM_RUN_CYC_SMT8_MODE",
+	.code = 0x4006c,
+	.short_desc = "Cycles run latch is set and core is in SMT8 mode",
+	.long_desc = "Cycles run latch is set and core is in SMT8 mode.",
+},
+{
+	.name = "PM_RUN_CYC_ST_MODE",
+	.code = 0x1006c,
+	.short_desc = "Cycles run latch is set and core is in ST mode",
+	.long_desc = "Cycles run latch is set and core is in ST mode.",
+},
+{
+	.name = "PM_RUN_INST_CMPL",
+	.code = 0x500fa,
+	.short_desc = "Run_Instructions",
+	.long_desc = "Run_Instructions.",
+},
+{
+	.name = "PM_RUN_PURR",
+	.code = 0x400f4,
+	.short_desc = "Run_PURR",
+	.long_desc = "Run_PURR.",
+},
+{
+	.name = "PM_RUN_SPURR",
+	.code = 0x10008,
+	.short_desc = "Run SPURR",
+	.long_desc = "Run SPURR.",
+},
+{
+	.name = "PM_SEC_ERAT_HIT",
+	.code = 0xf082,
+	.short_desc = "secondary ERAT Hit",
+	.long_desc = "secondary ERAT Hit",
+},
+{
+	.name = "PM_SHL_CREATED",
+	.code = 0x508c,
+	.short_desc = "Store-Hit-Load Table Entry Created",
+	.long_desc = "Store-Hit-Load Table Entry Created",
+},
+{
+	.name = "PM_SHL_ST_CONVERT",
+	.code = 0x508e,
+	.short_desc = "Store-Hit-Load Table Read Hit with entry Enabled",
+	.long_desc = "Store-Hit-Load Table Read Hit with entry Enabled",
+},
+{
+	.name = "PM_SHL_ST_DISABLE",
+	.code = 0x5090,
+	.short_desc = "Store-Hit-Load Table Read Hit with entry Disabled (entry was disabled due to the entry shown to not prevent the flush)",
+	.long_desc = "Store-Hit-Load Table Read Hit with entry Disabled (entry was disabled due to the entry shown to not prevent the flush)",
+},
+{
+	.name = "PM_SN0_ALLOC",
+	.code = 0x26085,
+	.short_desc = "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+	.long_desc = "0.0",
+},
+{
+	.name = "PM_SN0_BUSY",
+	.code = 0x26084,
+	.short_desc = "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+	.long_desc = "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point)",
+},
+{
+	.name = "PM_SNOOP_TLBIE",
+	.code = 0xd0b2,
+	.short_desc = "TLBIE snoop",
+	.long_desc = "TLBIE snoop",
+},
+{
+	.name = "PM_SNP_TM_HIT_M",
+	.code = 0x338088,
+	.short_desc = "snp tm st hit m mu",
+	.long_desc = "snp tm st hit m mu",
+},
+{
+	.name = "PM_SNP_TM_HIT_T",
+	.code = 0x33808a,
+	.short_desc = "snp tm_st_hit t tn te",
+	.long_desc = "snp tm_st_hit t tn te",
+},
+{
+	.name = "PM_SN_USAGE",
+	.code = 0x4608c,
+	.short_desc = "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 SN machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running",
+	.long_desc = "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 SN machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running",
+},
+{
+	.name = "PM_STALL_END_GCT_EMPTY",
+	.code = 0x10028,
+	.short_desc = "Count ended because GCT went empty",
+	.long_desc = "Count ended because GCT went empty.",
+},
+{
+	.name = "PM_STCX_FAIL",
+	.code = 0x1e058,
+	.short_desc = "stcx failed",
+	.long_desc = "stcx failed .",
+},
+{
+	.name = "PM_STCX_LSU",
+	.code = 0xc090,
+	.short_desc = "STCX executed reported at sent to nest",
+	.long_desc = "STCX executed reported at sent to nest",
+},
+{
+	.name = "PM_ST_CAUSED_FAIL",
+	.code = 0x717080,
+	.short_desc = "Non TM St caused any thread to fail",
+	.long_desc = "Non TM St caused any thread to fail",
+},
+{
+	.name = "PM_ST_CMPL",
+	.code = 0x20016,
+	.short_desc = "Store completion count",
+	.long_desc = "Store completion count.",
+},
+{
+	.name = "PM_ST_FIN",
+	.code = 0x200f0,
+	.short_desc = "Store Instructions Finished",
+	.long_desc = "Store Instructions Finished (store sent to nest).",
+},
+{
+	.name = "PM_ST_FWD",
+	.code = 0x20018,
+	.short_desc = "Store forwards that finished",
+	.long_desc = "Store forwards that finished.",
+},
+{
+	.name = "PM_ST_MISS_L1",
+	.code = 0x300f0,
+	.short_desc = "Store Missed L1",
+	.long_desc = "Store Missed L1.",
+},
+{
+	.name = "PM_SUSPENDED",
+	.code = 0x0,
+	.short_desc = "Counter OFF",
+	.long_desc = "Counter OFF.",
+},
+{
+	.name = "PM_SWAP_CANCEL",
+	.code = 0x3090,
+	.short_desc = "SWAP cancel , rtag not available",
+	.long_desc = "SWAP cancel , rtag not available",
+},
+{
+	.name = "PM_SWAP_CANCEL_GPR",
+	.code = 0x3092,
+	.short_desc = "SWAP cancel , rtag not available for gpr",
+	.long_desc = "SWAP cancel , rtag not available for gpr",
+},
+{
+	.name = "PM_SWAP_COMPLETE",
+	.code = 0x308c,
+	.short_desc = "swap cast in completed",
+	.long_desc = "swap cast in completed",
+},
+{
+	.name = "PM_SWAP_COMPLETE_GPR",
+	.code = 0x308e,
+	.short_desc = "swap cast in completed fpr gpr",
+	.long_desc = "swap cast in completed fpr gpr",
+},
+{
+	.name = "PM_SYNC_MRK_BR_LINK",
+	.code = 0x15152,
+	.short_desc = "Marked Branch and link branch that can cause a synchronous interrupt",
+	.long_desc = "Marked Branch and link branch that can cause a synchronous interrupt.",
+},
+{
+	.name = "PM_SYNC_MRK_BR_MPRED",
+	.code = 0x1515c,
+	.short_desc = "Marked Branch mispredict that can cause a synchronous interrupt",
+	.long_desc = "Marked Branch mispredict that can cause a synchronous interrupt.",
+},
+{
+	.name = "PM_SYNC_MRK_FX_DIVIDE",
+	.code = 0x15156,
+	.short_desc = "Marked fixed point divide that can cause a synchronous interrupt",
+	.long_desc = "Marked fixed point divide that can cause a synchronous interrupt.",
+},
+{
+	.name = "PM_SYNC_MRK_L2HIT",
+	.code = 0x15158,
+	.short_desc = "Marked L2 Hits that can throw a synchronous interrupt",
+	.long_desc = "Marked L2 Hits that can throw a synchronous interrupt.",
+},
+{
+	.name = "PM_SYNC_MRK_L2MISS",
+	.code = 0x1515a,
+	.short_desc = "Marked L2 Miss that can throw a synchronous interrupt",
+	.long_desc = "Marked L2 Miss that can throw a synchronous interrupt.",
+},
+{
+	.name = "PM_SYNC_MRK_L3MISS",
+	.code = 0x15154,
+	.short_desc = "Marked L3 misses that can throw a synchronous interrupt",
+	.long_desc = "Marked L3 misses that can throw a synchronous interrupt.",
+},
+{
+	.name = "PM_SYNC_MRK_PROBE_NOP",
+	.code = 0x15150,
+	.short_desc = "Marked probeNops which can cause synchronous interrupts",
+	.long_desc = "Marked probeNops which can cause synchronous interrupts.",
+},
+{
+	.name = "PM_SYS_PUMP_CPRED",
+	.code = 0x30050,
+	.short_desc = "Initial and Final Pump Scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Initial and Final Pump Scope and data sourced across this scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).",
+},
+{
+	.name = "PM_SYS_PUMP_MPRED",
+	.code = 0x30052,
+	.short_desc = "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or",
+},
+{
+	.name = "PM_SYS_PUMP_MPRED_RTY",
+	.code = 0x40050,
+	.short_desc = "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate)",
+	.long_desc = "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).",
+},
+{
+	.name = "PM_TABLEWALK_CYC",
+	.code = 0x10026,
+	.short_desc = "Cycles when a tablewalk (I or D) is active",
+	.long_desc = "Tablewalk Active.",
+},
+{
+	.name = "PM_TABLEWALK_CYC_PREF",
+	.code = 0xe086,
+	.short_desc = "tablewalk qualified for pte prefetches",
+	.long_desc = "tablewalk qualified for pte prefetches",
+},
+{
+	.name = "PM_TABORT_TRECLAIM",
+	.code = 0x20b2,
+	.short_desc = "Completion time tabortnoncd, tabortcd, treclaim",
+	.long_desc = "Completion time tabortnoncd, tabortcd, treclaim",
+},
+{
+	.name = "PM_TB_BIT_TRANS",
+	.code = 0x300f8,
+	.short_desc = "timebase event",
+	.long_desc = "timebase event.",
+},
+{
+	.name = "PM_TEND_PEND_CYC",
+	.code = 0xe0ba,
+	.short_desc = "TEND latency per thread",
+	.long_desc = "TEND latency per thread",
+},
+{
+	.name = "PM_THRD_ALL_RUN_CYC",
+	.code = 0x2000c,
+	.short_desc = "All Threads in Run_cycles (was both threads in run_cycles)",
+	.long_desc = "All Threads in Run_cycles (was both threads in run_cycles).",
+},
+{
+	.name = "PM_THRD_CONC_RUN_INST",
+	.code = 0x300f4,
+	.short_desc = "PPC Instructions Finished when both threads in run_cycles",
+	.long_desc = "Concurrent Run Instructions.",
+},
+{
+	.name = "PM_THRD_GRP_CMPL_BOTH_CYC",
+	.code = 0x10012,
+	.short_desc = "Cycles group completed on both completion slots by any thread",
+	.long_desc = "Two threads finished same cycle (gated by run latch).",
+},
+{
+	.name = "PM_THRD_PRIO_0_1_CYC",
+	.code = 0x40bc,
+	.short_desc = "Cycles thread running at priority level 0 or 1",
+	.long_desc = "Cycles thread running at priority level 0 or 1",
+},
+{
+	.name = "PM_THRD_PRIO_2_3_CYC",
+	.code = 0x40be,
+	.short_desc = "Cycles thread running at priority level 2 or 3",
+	.long_desc = "Cycles thread running at priority level 2 or 3",
+},
+{
+	.name = "PM_THRD_PRIO_4_5_CYC",
+	.code = 0x5080,
+	.short_desc = "Cycles thread running at priority level 4 or 5",
+	.long_desc = "Cycles thread running at priority level 4 or 5",
+},
+{
+	.name = "PM_THRD_PRIO_6_7_CYC",
+	.code = 0x5082,
+	.short_desc = "Cycles thread running at priority level 6 or 7",
+	.long_desc = "Cycles thread running at priority level 6 or 7",
+},
+{
+	.name = "PM_THRD_REBAL_CYC",
+	.code = 0x3098,
+	.short_desc = "cycles rebalance was active",
+	.long_desc = "cycles rebalance was active",
+},
+{
+	.name = "PM_THRESH_EXC_1024",
+	.code = 0x301ea,
+	.short_desc = "Threshold counter exceeded a value of 1024",
+	.long_desc = "Threshold counter exceeded a value of 1024.",
+},
+{
+	.name = "PM_THRESH_EXC_128",
+	.code = 0x401ea,
+	.short_desc = "Threshold counter exceeded a value of 128",
+	.long_desc = "Threshold counter exceeded a value of 128.",
+},
+{
+	.name = "PM_THRESH_EXC_2048",
+	.code = 0x401ec,
+	.short_desc = "Threshold counter exceeded a value of 2048",
+	.long_desc = "Threshold counter exceeded a value of 2048.",
+},
+{
+	.name = "PM_THRESH_EXC_256",
+	.code = 0x101e8,
+	.short_desc = "Threshold counter exceed a count of 256",
+	.long_desc = "Threshold counter exceed a count of 256.",
+},
+{
+	.name = "PM_THRESH_EXC_32",
+	.code = 0x201e6,
+	.short_desc = "Threshold counter exceeded a value of 32",
+	.long_desc = "Threshold counter exceeded a value of 32.",
+},
+{
+	.name = "PM_THRESH_EXC_4096",
+	.code = 0x101e6,
+	.short_desc = "Threshold counter exceed a count of 4096",
+	.long_desc = "Threshold counter exceed a count of 4096.",
+},
+{
+	.name = "PM_THRESH_EXC_512",
+	.code = 0x201e8,
+	.short_desc = "Threshold counter exceeded a value of 512",
+	.long_desc = "Threshold counter exceeded a value of 512.",
+},
+{
+	.name = "PM_THRESH_EXC_64",
+	.code = 0x301e8,
+	.short_desc = "IFU non-branch finished",
+	.long_desc = "Threshold counter exceeded a value of 64.",
+},
+{
+	.name = "PM_THRESH_MET",
+	.code = 0x101ec,
+	.short_desc = "threshold exceeded",
+	.long_desc = "threshold exceeded.",
+},
+{
+	.name = "PM_THRESH_NOT_MET",
+	.code = 0x4016e,
+	.short_desc = "Threshold counter did not meet threshold",
+	.long_desc = "Threshold counter did not meet threshold.",
+},
+{
+	.name = "PM_TLBIE_FIN",
+	.code = 0x30058,
+	.short_desc = "tlbie finished",
+	.long_desc = "tlbie finished.",
+},
+{
+	.name = "PM_TLB_MISS",
+	.code = 0x20066,
+	.short_desc = "TLB Miss (I + D)",
+	.long_desc = "TLB Miss (I + D).",
+},
+{
+	.name = "PM_TM_BEGIN_ALL",
+	.code = 0x20b8,
+	.short_desc = "Tm any tbegin",
+	.long_desc = "Tm any tbegin",
+},
+{
+	.name = "PM_TM_CAM_OVERFLOW",
+	.code = 0x318082,
+	.short_desc = "l3 tm cam overflow during L2 co of SC",
+	.long_desc = "l3 tm cam overflow during L2 co of SC",
+},
+{
+	.name = "PM_TM_CAP_OVERFLOW",
+	.code = 0x74708c,
+	.short_desc = "TM Footprint Capacity Overflow",
+	.long_desc = "TM Footprint Capacity Overflow",
+},
+{
+	.name = "PM_TM_END_ALL",
+	.code = 0x20ba,
+	.short_desc = "Tm any tend",
+	.long_desc = "Tm any tend",
+},
+{
+	.name = "PM_TM_FAIL_CONF_NON_TM",
+	.code = 0x3086,
+	.short_desc = "TEXAS fail reason @ completion",
+	.long_desc = "TEXAS fail reason @ completion",
+},
+{
+	.name = "PM_TM_FAIL_CON_TM",
+	.code = 0x3088,
+	.short_desc = "TEXAS fail reason @ completion",
+	.long_desc = "TEXAS fail reason @ completion",
+},
+{
+	.name = "PM_TM_FAIL_DISALLOW",
+	.code = 0xe0b2,
+	.short_desc = "TM fail disallow",
+	.long_desc = "TM fail disallow",
+},
+{
+	.name = "PM_TM_FAIL_FOOTPRINT_OVERFLOW",
+	.code = 0x3084,
+	.short_desc = "TEXAS fail reason @ completion",
+	.long_desc = "TEXAS fail reason @ completion",
+},
+{
+	.name = "PM_TM_FAIL_NON_TX_CONFLICT",
+	.code = 0xe0b8,
+	.short_desc = "Non transactional conflict from LSU, whatever gets reported to texas",
+	.long_desc = "Non transactional conflict from LSU, whatever gets reported to texas",
+},
+{
+	.name = "PM_TM_FAIL_SELF",
+	.code = 0x308a,
+	.short_desc = "TEXAS fail reason @ completion",
+	.long_desc = "TEXAS fail reason @ completion",
+},
+{
+	.name = "PM_TM_FAIL_TLBIE",
+	.code = 0xe0b4,
+	.short_desc = "TLBIE hit bloom filter",
+	.long_desc = "TLBIE hit bloom filter",
+},
+{
+	.name = "PM_TM_FAIL_TX_CONFLICT",
+	.code = 0xe0b6,
+	.short_desc = "Transactional conflict from LSU, whatever gets reported to texas",
+	.long_desc = "Transactional conflict from LSU, whatever gets reported to texas",
+},
+{
+	.name = "PM_TM_FAV_CAUSED_FAIL",
+	.code = 0x727086,
+	.short_desc = "TM Load (fav) caused another thread to fail",
+	.long_desc = "TM Load (fav) caused another thread to fail",
+},
+{
+	.name = "PM_TM_LD_CAUSED_FAIL",
+	.code = 0x717082,
+	.short_desc = "Non TM Ld caused any thread to fail",
+	.long_desc = "Non TM Ld caused any thread to fail",
+},
+{
+	.name = "PM_TM_LD_CONF",
+	.code = 0x727084,
+	.short_desc = "TM Load (fav or non-fav) ran into conflict (failed)",
+	.long_desc = "TM Load (fav or non-fav) ran into conflict (failed)",
+},
+{
+	.name = "PM_TM_RST_SC",
+	.code = 0x328086,
+	.short_desc = "tm snp rst tm sc",
+	.long_desc = "tm snp rst tm sc",
+},
+{
+	.name = "PM_TM_SC_CO",
+	.code = 0x318080,
+	.short_desc = "l3 castout tm Sc line",
+	.long_desc = "l3 castout tm Sc line",
+},
+{
+	.name = "PM_TM_ST_CAUSED_FAIL",
+	.code = 0x73708a,
+	.short_desc = "TM Store (fav or non-fav) caused another thread to fail",
+	.long_desc = "TM Store (fav or non-fav) caused another thread to fail",
+},
+{
+	.name = "PM_TM_ST_CONF",
+	.code = 0x737088,
+	.short_desc = "TM Store (fav or non-fav) ran into conflict (failed)",
+	.long_desc = "TM Store (fav or non-fav) ran into conflict (failed)",
+},
+{
+	.name = "PM_TM_TBEGIN",
+	.code = 0x20bc,
+	.short_desc = "Tm nested tbegin",
+	.long_desc = "Tm nested tbegin",
+},
+{
+	.name = "PM_TM_TRANS_RUN_CYC",
+	.code = 0x10060,
+	.short_desc = "run cycles in transactional state",
+	.long_desc = "run cycles in transactional state.",
+},
+{
+	.name = "PM_TM_TRANS_RUN_INST",
+	.code = 0x30060,
+	.short_desc = "Instructions completed in transactional state",
+	.long_desc = "Instructions completed in transactional state.",
+},
+{
+	.name = "PM_TM_TRESUME",
+	.code = 0x3080,
+	.short_desc = "Tm resume",
+	.long_desc = "Tm resume",
+},
+{
+	.name = "PM_TM_TSUSPEND",
+	.code = 0x20be,
+	.short_desc = "Tm suspend",
+	.long_desc = "Tm suspend",
+},
+{
+	.name = "PM_TM_TX_PASS_RUN_CYC",
+	.code = 0x2e012,
+	.short_desc = "cycles spent in successful transactions",
+	.long_desc = "run cycles spent in successful transactions.",
+},
+{
+	.name = "PM_TM_TX_PASS_RUN_INST",
+	.code = 0x4e014,
+	.short_desc = "run instructions spent in successful transactions.",
+	.long_desc = "run instructions spent in successful transactions.",
+},
+{
+	.name = "PM_UP_PREF_L3",
+	.code = 0xe08c,
+	.short_desc = "Micropartition prefetch",
+	.long_desc = "Micropartition prefetch",
+},
+{
+	.name = "PM_UP_PREF_POINTER",
+	.code = 0xe08e,
+	.short_desc = "Micropartition pointer prefetches",
+	.long_desc = "Micropartition pointer prefetches",
+},
+{
+	.name = "PM_VSU0_16FLOP",
+	.code = 0xa0a4,
+	.short_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt)",
+	.long_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt)",
+},
+{
+	.name = "PM_VSU0_1FLOP",
+	.code = 0xa080,
+	.short_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished",
+	.long_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finishedDecode into 1,2,4 FLOP according to instr IOP, multiplied by #vector elements according to route( eg x1, x2, x4) Only if instr sends finish to ISU",
+},
+{
+	.name = "PM_VSU0_2FLOP",
+	.code = 0xa098,
+	.short_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+	.long_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_VSU0_4FLOP",
+	.code = 0xa09c,
+	.short_desc = "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions)",
+	.long_desc = "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_VSU0_8FLOP",
+	.code = 0xa0a0,
+	.short_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub)",
+	.long_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub)",
+},
+{
+	.name = "PM_VSU0_COMPLEX_ISSUED",
+	.code = 0xb0a4,
+	.short_desc = "Complex VMX instruction issued",
+	.long_desc = "Complex VMX instruction issued",
+},
+{
+	.name = "PM_VSU0_CY_ISSUED",
+	.code = 0xb0b4,
+	.short_desc = "Cryptographic instruction RFC02196 Issued",
+	.long_desc = "Cryptographic instruction RFC02196 Issued",
+},
+{
+	.name = "PM_VSU0_DD_ISSUED",
+	.code = 0xb0a8,
+	.short_desc = "64BIT Decimal Issued",
+	.long_desc = "64BIT Decimal Issued",
+},
+{
+	.name = "PM_VSU0_DP_2FLOP",
+	.code = 0xa08c,
+	.short_desc = "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg",
+	.long_desc = "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg",
+},
+{
+	.name = "PM_VSU0_DP_FMA",
+	.code = 0xa090,
+	.short_desc = "DP vector version of fmadd,fnmadd,fmsub,fnmsub",
+	.long_desc = "DP vector version of fmadd,fnmadd,fmsub,fnmsub",
+},
+{
+	.name = "PM_VSU0_DP_FSQRT_FDIV",
+	.code = 0xa094,
+	.short_desc = "DP vector versions of fdiv,fsqrt",
+	.long_desc = "DP vector versions of fdiv,fsqrt",
+},
+{
+	.name = "PM_VSU0_DQ_ISSUED",
+	.code = 0xb0ac,
+	.short_desc = "128BIT Decimal Issued",
+	.long_desc = "128BIT Decimal Issued",
+},
+{
+	.name = "PM_VSU0_EX_ISSUED",
+	.code = 0xb0b0,
+	.short_desc = "Direct move 32/64b VRFtoGPR RFC02206 Issued",
+	.long_desc = "Direct move 32/64b VRFtoGPR RFC02206 Issued",
+},
+{
+	.name = "PM_VSU0_FIN",
+	.code = 0xa0bc,
+	.short_desc = "VSU0 Finished an instruction",
+	.long_desc = "VSU0 Finished an instruction",
+},
+{
+	.name = "PM_VSU0_FMA",
+	.code = 0xa084,
+	.short_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!",
+	.long_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!",
+},
+{
+	.name = "PM_VSU0_FPSCR",
+	.code = 0xb098,
+	.short_desc = "Move to/from FPSCR type instruction issued on Pipe 0",
+	.long_desc = "Move to/from FPSCR type instruction issued on Pipe 0",
+},
+{
+	.name = "PM_VSU0_FSQRT_FDIV",
+	.code = 0xa088,
+	.short_desc = "four flops operation (fdiv,fsqrt) Scalar Instructions only!",
+	.long_desc = "four flops operation (fdiv,fsqrt) Scalar Instructions only!",
+},
+{
+	.name = "PM_VSU0_PERMUTE_ISSUED",
+	.code = 0xb090,
+	.short_desc = "Permute VMX Instruction Issued",
+	.long_desc = "Permute VMX Instruction Issued",
+},
+{
+	.name = "PM_VSU0_SCALAR_DP_ISSUED",
+	.code = 0xb088,
+	.short_desc = "Double Precision scalar instruction issued on Pipe0",
+	.long_desc = "Double Precision scalar instruction issued on Pipe0",
+},
+{
+	.name = "PM_VSU0_SIMPLE_ISSUED",
+	.code = 0xb094,
+	.short_desc = "Simple VMX instruction issued",
+	.long_desc = "Simple VMX instruction issued",
+},
+{
+	.name = "PM_VSU0_SINGLE",
+	.code = 0xa0a8,
+	.short_desc = "FPU single precision",
+	.long_desc = "FPU single precision",
+},
+{
+	.name = "PM_VSU0_SQ",
+	.code = 0xb09c,
+	.short_desc = "Store Vector Issued",
+	.long_desc = "Store Vector Issued",
+},
+{
+	.name = "PM_VSU0_STF",
+	.code = 0xb08c,
+	.short_desc = "FPU store (SP or DP) issued on Pipe0",
+	.long_desc = "FPU store (SP or DP) issued on Pipe0",
+},
+{
+	.name = "PM_VSU0_VECTOR_DP_ISSUED",
+	.code = 0xb080,
+	.short_desc = "Double Precision vector instruction issued on Pipe0",
+	.long_desc = "Double Precision vector instruction issued on Pipe0",
+},
+{
+	.name = "PM_VSU0_VECTOR_SP_ISSUED",
+	.code = 0xb084,
+	.short_desc = "Single Precision vector instruction issued (executed)",
+	.long_desc = "Single Precision vector instruction issued (executed)",
+},
+{
+	.name = "PM_VSU1_16FLOP",
+	.code = 0xa0a6,
+	.short_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt)",
+	.long_desc = "Sixteen flops operation (SP vector versions of fdiv,fsqrt)",
+},
+{
+	.name = "PM_VSU1_1FLOP",
+	.code = 0xa082,
+	.short_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished",
+	.long_desc = "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished",
+},
+{
+	.name = "PM_VSU1_2FLOP",
+	.code = 0xa09a,
+	.short_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+	.long_desc = "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_VSU1_4FLOP",
+	.code = 0xa09e,
+	.short_desc = "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions)",
+	.long_desc = "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions)",
+},
+{
+	.name = "PM_VSU1_8FLOP",
+	.code = 0xa0a2,
+	.short_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub)",
+	.long_desc = "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub)",
+},
+{
+	.name = "PM_VSU1_COMPLEX_ISSUED",
+	.code = 0xb0a6,
+	.short_desc = "Complex VMX instruction issued",
+	.long_desc = "Complex VMX instruction issued",
+},
+{
+	.name = "PM_VSU1_CY_ISSUED",
+	.code = 0xb0b6,
+	.short_desc = "Cryptographic instruction RFC02196 Issued",
+	.long_desc = "Cryptographic instruction RFC02196 Issued",
+},
+{
+	.name = "PM_VSU1_DD_ISSUED",
+	.code = 0xb0aa,
+	.short_desc = "64BIT Decimal Issued",
+	.long_desc = "64BIT Decimal Issued",
+},
+{
+	.name = "PM_VSU1_DP_2FLOP",
+	.code = 0xa08e,
+	.short_desc = "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg",
+	.long_desc = "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg",
+},
+{
+	.name = "PM_VSU1_DP_FMA",
+	.code = 0xa092,
+	.short_desc = "DP vector version of fmadd,fnmadd,fmsub,fnmsub",
+	.long_desc = "DP vector version of fmadd,fnmadd,fmsub,fnmsub",
+},
+{
+	.name = "PM_VSU1_DP_FSQRT_FDIV",
+	.code = 0xa096,
+	.short_desc = "DP vector versions of fdiv,fsqrt",
+	.long_desc = "DP vector versions of fdiv,fsqrt",
+},
+{
+	.name = "PM_VSU1_DQ_ISSUED",
+	.code = 0xb0ae,
+	.short_desc = "128BIT Decimal Issued",
+	.long_desc = "128BIT Decimal Issued",
+},
+{
+	.name = "PM_VSU1_EX_ISSUED",
+	.code = 0xb0b2,
+	.short_desc = "Direct move 32/64b VRFtoGPR RFC02206 Issued",
+	.long_desc = "Direct move 32/64b VRFtoGPR RFC02206 Issued",
+},
+{
+	.name = "PM_VSU1_FIN",
+	.code = 0xa0be,
+	.short_desc = "VSU1 Finished an instruction",
+	.long_desc = "VSU1 Finished an instruction",
+},
+{
+	.name = "PM_VSU1_FMA",
+	.code = 0xa086,
+	.short_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!",
+	.long_desc = "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!",
+},
+{
+	.name = "PM_VSU1_FPSCR",
+	.code = 0xb09a,
+	.short_desc = "Move to/from FPSCR type instruction issued on Pipe 0",
+	.long_desc = "Move to/from FPSCR type instruction issued on Pipe 0",
+},
+{
+	.name = "PM_VSU1_FSQRT_FDIV",
+	.code = 0xa08a,
+	.short_desc = "four flops operation (fdiv,fsqrt) Scalar Instructions only!",
+	.long_desc = "four flops operation (fdiv,fsqrt) Scalar Instructions only!",
+},
+{
+	.name = "PM_VSU1_PERMUTE_ISSUED",
+	.code = 0xb092,
+	.short_desc = "Permute VMX Instruction Issued",
+	.long_desc = "Permute VMX Instruction Issued",
+},
+{
+	.name = "PM_VSU1_SCALAR_DP_ISSUED",
+	.code = 0xb08a,
+	.short_desc = "Double Precision scalar instruction issued on Pipe1",
+	.long_desc = "Double Precision scalar instruction issued on Pipe1",
+},
+{
+	.name = "PM_VSU1_SIMPLE_ISSUED",
+	.code = 0xb096,
+	.short_desc = "Simple VMX instruction issued",
+	.long_desc = "Simple VMX instruction issued",
+},
+{
+	.name = "PM_VSU1_SINGLE",
+	.code = 0xa0aa,
+	.short_desc = "FPU single precision",
+	.long_desc = "FPU single precision",
+},
+{
+	.name = "PM_VSU1_SQ",
+	.code = 0xb09e,
+	.short_desc = "Store Vector Issued",
+	.long_desc = "Store Vector Issued",
+},
+{
+	.name = "PM_VSU1_STF",
+	.code = 0xb08e,
+	.short_desc = "FPU store (SP or DP) issued on Pipe1",
+	.long_desc = "FPU store (SP or DP) issued on Pipe1",
+},
+{
+	.name = "PM_VSU1_VECTOR_DP_ISSUED",
+	.code = 0xb082,
+	.short_desc = "Double Precision vector instruction issued on Pipe1",
+	.long_desc = "Double Precision vector instruction issued on Pipe1",
+},
+{
+	.name = "PM_VSU1_VECTOR_SP_ISSUED",
+	.code = 0xb086,
+	.short_desc = "Single Precision vector instruction issued (executed)",
+	.long_desc = "Single Precision vector instruction issued (executed)",
+},
+/* Terminating entry required */
+{
+	.name = NULL,
+	.code = 0,
+	.short_desc = NULL,
+	.long_desc = NULL,
+}
+};
+#endif
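
For reference, a minimal sketch of how a consumer could walk one of these
tables, relying on the terminating NULL entry above (the helper name is
purely illustrative and not part of this patch; struct perf_pmu_event is
added to pmu.h later in this series):

	#include <string.h>
	#include "pmu.h"	/* struct perf_pmu_event */

	static const struct perf_pmu_event *
	find_event_by_name(const struct perf_pmu_event *table, const char *name)
	{
		int i;

		/* every table ends with a .name == NULL sentinel entry */
		for (i = 0; table[i].name; i++) {
			if (!strcmp(table[i].name, name))
				return &table[i];
		}

		return NULL;
	}
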
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [RFC][PATCH 3/4] perf/powerpc: Move mfspr and friends to header file
  2015-05-01  7:05 [RFC][PATCH 0/4] perf: Enable symbolic event names Sukadev Bhattiprolu
  2015-05-01  7:05 ` [RFC][PATCH 1/4] perf: Create a table of Power7 PMU events Sukadev Bhattiprolu
  2015-05-01  7:05 ` [RFC][PATCH 2/4] perf: Create a table of Power8 " Sukadev Bhattiprolu
@ 2015-05-01  7:05 ` Sukadev Bhattiprolu
  2015-05-01  7:05 ` [RFC][PATCH 4/4] perf: Create aliases for PMU events Sukadev Bhattiprolu
  2015-05-02  7:02 ` [RFC][PATCH 0/4] perf: Enable symbolic event names Vineet Gupta
  4 siblings, 0 replies; 9+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-01  7:05 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: linuxppc-dev, linux-kernel

mfspr() and related macros will be needed in two separate files.
Move these definitions to a common header file.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---
 tools/perf/arch/powerpc/util/header.c |    9 +--------
 tools/perf/arch/powerpc/util/header.h |    9 +++++++++
 2 files changed, 10 insertions(+), 8 deletions(-)
 create mode 100644 tools/perf/arch/powerpc/util/header.h

diff --git a/tools/perf/arch/powerpc/util/header.c b/tools/perf/arch/powerpc/util/header.c
index 6c1b8a7..05d3fc8 100644
--- a/tools/perf/arch/powerpc/util/header.c
+++ b/tools/perf/arch/powerpc/util/header.c
@@ -6,14 +6,7 @@
 
 #include "../../util/header.h"
 #include "../../util/util.h"
-
-#define mfspr(rn)       ({unsigned long rval; \
-			 asm volatile("mfspr %0," __stringify(rn) \
-				      : "=r" (rval)); rval; })
-
-#define SPRN_PVR        0x11F	/* Processor Version Register */
-#define PVR_VER(pvr)    (((pvr) >>  16) & 0xFFFF) /* Version field */
-#define PVR_REV(pvr)    (((pvr) >>   0) & 0xFFFF) /* Revison field */
+#include "header.h"
 
 int
 get_cpuid(char *buffer, size_t sz)
diff --git a/tools/perf/arch/powerpc/util/header.h b/tools/perf/arch/powerpc/util/header.h
new file mode 100644
index 0000000..b9d3a0d
--- /dev/null
+++ b/tools/perf/arch/powerpc/util/header.h
@@ -0,0 +1,9 @@
+#include "../../util/util.h"	// __stringify
+
+#define mfspr(rn)       ({unsigned long rval; \
+			 asm volatile("mfspr %0," __stringify(rn) \
+				      : "=r" (rval)); rval; })
+
+#define SPRN_PVR        0x11F   /* Processor Version Register */
+#define PVR_VER(pvr)    (((pvr) >>  16) & 0xFFFF) /* Version field */
+#define PVR_REV(pvr)    (((pvr) >>   0) & 0xFFFF) /* Revision field */
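
For reference, a minimal sketch of how the shared macros are used to
identify the running processor (this mirrors get_cpuid() above and the
get_cpu_str() added later in this series; the snippet itself is
illustrative only and builds on powerpc only):

	#include <stdio.h>
	#include "header.h"	/* mfspr, SPRN_PVR, PVR_VER, PVR_REV */

	static void print_pvr(void)
	{
		unsigned long pvr = mfspr(SPRN_PVR);

		/* e.g. "004d0100-core" for the Power8 entry in pmu-events.h */
		printf("%.8lx-core\n", pvr);
		printf("version 0x%lx, revision 0x%lx\n",
		       PVR_VER(pvr), PVR_REV(pvr));
	}
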
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [RFC][PATCH 4/4] perf: Create aliases for PMU events
  2015-05-01  7:05 [RFC][PATCH 0/4] perf: Enable symbolic event names Sukadev Bhattiprolu
                   ` (2 preceding siblings ...)
  2015-05-01  7:05 ` [RFC][PATCH 3/4] perf/powerpc: Move mfspr and friends to header file Sukadev Bhattiprolu
@ 2015-05-01  7:05 ` Sukadev Bhattiprolu
  2015-05-02  7:04   ` Vineet Gupta
  2015-05-02 15:38   ` Jiri Olsa
  2015-05-02  7:02 ` [RFC][PATCH 0/4] perf: Enable symbolic event names Vineet Gupta
  4 siblings, 2 replies; 9+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-01  7:05 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: linuxppc-dev, linux-kernel

Using the tables of Power7 and Power8 events, create aliases for the
Power PMU events. This would allow us to specify all Power events by
name rather than by raw code:

	$ /tmp/perf stat -e PM_1PLUS_PPC_CMPL sleep 1

	 Performance counter stats for 'sleep 1':

		   757,661      PM_1PLUS_PPC_CMPL

	       1.001620145 seconds time elapsed

The perf binary built on Power8 can be copied to Power7 and it will use
the Power7 events (if arch/powerpc/util/pmu-events.h knows the CPU string).

Hopefully other architectures can also implement arch_get_events_table()
and take advantage of this.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---
 tools/perf/arch/powerpc/util/Build        |    2 +-
 tools/perf/arch/powerpc/util/pmu-events.c |   52 +++++++++++++++++++
 tools/perf/arch/powerpc/util/pmu-events.h |   17 +++++++
 tools/perf/util/pmu.c                     |   77 +++++++++++++++++++++++++++++
 tools/perf/util/pmu.h                     |   10 ++++
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 tools/perf/arch/powerpc/util/pmu-events.c
 create mode 100644 tools/perf/arch/powerpc/util/pmu-events.h

diff --git a/tools/perf/arch/powerpc/util/Build b/tools/perf/arch/powerpc/util/Build
index 0af6e9b..52fbc7f 100644
--- a/tools/perf/arch/powerpc/util/Build
+++ b/tools/perf/arch/powerpc/util/Build
@@ -1,4 +1,4 @@
-libperf-y += header.o
+libperf-y += header.o pmu-events.o
 
 libperf-$(CONFIG_DWARF) += dwarf-regs.o
 libperf-$(CONFIG_DWARF) += skip-callchain-idx.o
diff --git a/tools/perf/arch/powerpc/util/pmu-events.c b/tools/perf/arch/powerpc/util/pmu-events.c
new file mode 100644
index 0000000..7036f6d
--- /dev/null
+++ b/tools/perf/arch/powerpc/util/pmu-events.c
@@ -0,0 +1,52 @@
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include "pmu.h"
+#include "pmu-events.h"
+#include "../../util/debug.h"			/* verbose */
+#include "header.h"				/* mfspr */
+
+static char *get_cpu_str(void)
+{
+	char *bufp;
+
+	if (asprintf(&bufp, "%.8lx-core", mfspr(SPRN_PVR)) < 0)
+		bufp = NULL;
+
+	return bufp;
+}
+
+struct perf_pmu_event *arch_get_events_table(char *cpustr)
+{
+	int i, nmaps, must_free;
+	struct  perf_pmu_event *table;
+
+	must_free = 0;
+	if (!cpustr) {
+		cpustr = get_cpu_str();
+		if (!cpustr)
+			return NULL;
+		must_free = 1;
+	}
+
+	nmaps = sizeof(pvr_events_map) / sizeof(struct pvr_events_map_entry);
+
+	for (i = 0; i < nmaps; i++) {
+		if (!strcmp(pvr_events_map[i].pvr, cpustr))
+			break;
+	}
+
+	table = NULL;
+	if (i < nmaps) {
+		/* pvr_events_map is a const; cast to override */
+		table = (struct perf_pmu_event *)pvr_events_map[i].pmu_events;
+	} else if (verbose) {
+		printf("Unknown CPU %s, ignoring aliases\n", cpustr);
+	}
+
+	if (must_free)
+		free(cpustr);
+
+	return table;
+}
+
diff --git a/tools/perf/arch/powerpc/util/pmu-events.h b/tools/perf/arch/powerpc/util/pmu-events.h
new file mode 100644
index 0000000..1daf8e5
--- /dev/null
+++ b/tools/perf/arch/powerpc/util/pmu-events.h
@@ -0,0 +1,17 @@
+/*
+ * Include all Power processor tables that we care about.
+ */
+#include "power7-events.h"
+#include "power8-events.h"
+
+/*
+ * Map a processor version (PVR) to its table of events.
+ */
+struct pvr_events_map_entry {
+	const char *pvr;
+	const struct perf_pmu_event *pmu_events;
+} pvr_events_map[] = {
+	{ .pvr = "004d0100-core",	.pmu_events = power8_pmu_events },
+	{ .pvr = "003f0201-core",	.pmu_events = power7_pmu_events }
+};
+
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 4841167..f998d91 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -435,6 +435,80 @@ perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused)
 	return NULL;
 }
 
+/*
+ * Default arch_get_events_table() is empty.
+ *
+ * Actual implementation is in arch/$(ARCH)/util/pmu-events.c. This
+ * allows architectures could choose what set(s) of events to a) include
+ * in perf binary b) consider for _this_ invocation of perf.
+ *
+ * Eg: For Power, we include both Power7 and Power8 event tables in the
+ * 	perf binary. But depending on the processor where perf is executed,
+ * 	either the Power7 or Power8 table is returned.
+ */
+struct perf_pmu_event * __attribute__ ((weak))
+arch_get_events_table(char *cpustr __maybe_unused)
+{
+	return NULL;
+}
+
+static int pmu_add_cpu_aliases(char *cpustr, void *data)
+{
+	struct list_head *head = (struct list_head *)data;
+	struct perf_pmu_alias *alias;
+	int i;
+	struct perf_pmu_event *events_table, *event;
+	struct parse_events_term *term;
+
+	events_table = arch_get_events_table(cpustr);
+	if (!events_table)
+		return 0;
+
+	for (i = 0; events_table[i].name != NULL; i++) {
+		event = &events_table[i];
+
+		alias = malloc(sizeof(*alias));
+		if (!alias)
+			return -ENOMEM;
+
+		term = malloc(sizeof(*term));
+		if (!term) {
+			/*
+			 * TODO: cleanup aliases allocated so far?
+			 */
+			free(alias);
+			return -ENOMEM;
+		}
+
+		/* ->config is not const; cast to override */
+		term->config = (char *)"event";
+		term->val.num = event->code;
+		term->type_val = PARSE_EVENTS__TERM_TYPE_NUM;
+		term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
+		INIT_LIST_HEAD(&term->list);
+		term->used = 0;
+
+		INIT_LIST_HEAD(&alias->terms);
+		list_add_tail(&alias->terms, &term->list);
+
+		alias->scale = 1.0;
+		alias->unit[0] = '\0';
+		alias->per_pkg = false;
+
+		alias->name = strdup(event->name);
+#if 0
+		/*
+		 * TODO: Need Andi Kleen's patch for ->desc
+		 */
+		alias->desc = event->short_desc ?
+					strdup(event->short_desc) : NULL;
+#endif
+		list_add_tail(&alias->list, head);
+	}
+
+	return 0;
+}
+
 static struct perf_pmu *pmu_lookup(const char *name)
 {
 	struct perf_pmu *pmu;
@@ -453,6 +527,9 @@ static struct perf_pmu *pmu_lookup(const char *name)
 	if (pmu_aliases(name, &aliases))
 		return NULL;
 
+	if (!strcmp(name, "cpu"))
+		(void)pmu_add_cpu_aliases(NULL, &aliases);
+
 	if (pmu_type(name, &type))
 		return NULL;
 
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index 6b1249f..ca3e7a0 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -45,6 +45,14 @@ struct perf_pmu_alias {
 	bool snapshot;
 };
 
+struct perf_pmu_event {
+	const char *name;
+	const unsigned long code;
+	const char *short_desc;
+	const char *long_desc;
+	/* add unit, mask etc as needed here */
+};
+
 struct perf_pmu *perf_pmu__find(const char *name);
 int perf_pmu__config(struct perf_pmu *pmu, struct perf_event_attr *attr,
 		     struct list_head *head_terms);
@@ -76,4 +84,6 @@ int perf_pmu__test(void);
 
 struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu);
 
+struct perf_pmu_event *arch_get_events_table(char *cpustr);
+
 #endif /* __PMU_H */
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [RFC][PATCH 0/4] perf: Enable symbolic event names
  2015-05-01  7:05 [RFC][PATCH 0/4] perf: Enable symbolic event names Sukadev Bhattiprolu
                   ` (3 preceding siblings ...)
  2015-05-01  7:05 ` [RFC][PATCH 4/4] perf: Create aliases for PMU events Sukadev Bhattiprolu
@ 2015-05-02  7:02 ` Vineet Gupta
  4 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2015-05-02  7:02 UTC (permalink / raw)
  To: Sukadev Bhattiprolu, mingo, ak, Michael Ellerman, Jiri Olsa,
	Arnaldo Carvalho de Melo, Paul Mackerras
  Cc: linuxppc-dev, linux-kernel

On Friday 01 May 2015 12:35 PM, Sukadev Bhattiprolu wrote:
> Implement ability to specify Power PMU events by their symbolic event
> names rather than raw codes. This approach pulls tables of the Power7
> and Power8 PMU events into the perf source tree and uses these tables
> to create aliases for the PMU events. With these aliases users can run:
> 
> 	perf stat -e PM_1PLUS_PPC_CMPL:ku sleep 1
> or
> 	perf stat -e cpu/PM_VSU_SINGLE/ sleep 1
> 
> This is an early POC patchset based on discussions with Jiri Olsa,
> Michael Ellerman and Ingo Molnar. Lightly tested on Power7 and Power8.
> 
> Can other architectures can implement arch_get_events_table() and similarly 
> use symoblic event names?


Yes, ARC can certainly use this infrastructure. Our hardware conditions are
actually 1-8 char strings. So using raw events requires me to first convert the
string to ASCII.
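
A minimal sketch of what that could look like on our side (the condition
name, its ASCII packing and the file location below are placeholders, not
a real ARC implementation):

	/* tools/perf/arch/arc/util/pmu-events.c -- hypothetical */
	#include <stddef.h>
	#include "pmu.h"	/* struct perf_pmu_event, arch_get_events_table */

	static const struct perf_pmu_event arc_pmu_events[] = {
		{ .name = "crun",
		  .code = 0x6372756e,	/* "crun" packed as ASCII, illustrative */
		  .short_desc = "Core running cycles",
		  .long_desc = "Core running cycles" },
		{ .name = NULL, .code = 0, .short_desc = NULL, .long_desc = NULL }
	};

	struct perf_pmu_event *arch_get_events_table(char *cpustr)
	{
		(void)cpustr;	/* single table for now */
		return (struct perf_pmu_event *)arc_pmu_events;
	}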

> 
> I am also assuming that if the header files like power8-events.h are
> easily readable, we don't need the JSON files anymore?
> 
> TODO:
> 	- Maybe translate event names to lower-case?
> 	- Allow perf to process event descriptions (need Andi Kleen's patch)
> 
> 
> Sukadev Bhattiprolu (4):
>   perf: Create a table of Power7 PMU events
>   perf: Create a table of Power8 PMU events
>   perf/powerpc: Move mfspr and friends to header file
>   perf: Create aliases for Power PMU events
> 
>  tools/perf/arch/powerpc/util/Build           |    2 +-
>  tools/perf/arch/powerpc/util/header.c        |    9 +-
>  tools/perf/arch/powerpc/util/header.h        |    9 +
>  tools/perf/arch/powerpc/util/pmu-events.c    |   52 +
>  tools/perf/arch/powerpc/util/pmu-events.h    |   17 +
>  tools/perf/arch/powerpc/util/power7-events.h | 3315 +++++++++++++
>  tools/perf/arch/powerpc/util/power8-events.h | 6408 ++++++++++++++++++++++++++
>  tools/perf/util/pmu.c                        |   77 +
>  tools/perf/util/pmu.h                        |   10 +
>  9 files changed, 9890 insertions(+), 9 deletions(-)
>  create mode 100644 tools/perf/arch/powerpc/util/header.h
>  create mode 100644 tools/perf/arch/powerpc/util/pmu-events.c
>  create mode 100644 tools/perf/arch/powerpc/util/pmu-events.h
>  create mode 100644 tools/perf/arch/powerpc/util/power7-events.h
>  create mode 100644 tools/perf/arch/powerpc/util/power8-events.h
> 


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC][PATCH 4/4] perf: Create aliases for PMU events
  2015-05-01  7:05 ` [RFC][PATCH 4/4] perf: Create aliases for PMU events Sukadev Bhattiprolu
@ 2015-05-02  7:04   ` Vineet Gupta
  2015-05-02 15:38   ` Jiri Olsa
  1 sibling, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2015-05-02  7:04 UTC (permalink / raw)
  To: Sukadev Bhattiprolu, mingo, ak, Michael Ellerman, Jiri Olsa,
	Arnaldo Carvalho de Melo, Paul Mackerras
  Cc: linuxppc-dev, linux-kernel

On Friday 01 May 2015 12:35 PM, Sukadev Bhattiprolu wrote:
> Using the tables of Power7 and Power8 events, create aliases for the
> Power PMU events. This would allow us to specify all Power events by
> name rather than by raw code:
> 
> 	$ /tmp/perf stat -e PM_1PLUS_PPC_CMPL sleep 1
> 
> 	 Performance counter stats for 'sleep 1':
> 
> 		   757,661      PM_1PLUS_PPC_CMPL
> 
> 	       1.001620145 seconds time elapsed
> 
> The perf binary built on Power8 can be copied to Power7 and it will use
> the Power7 events (if arch/powerpc/util/pmu-events.h knows the CPU string).
> 
> Hopefully other architectures can also implement arch_get_events_table()
> and take advantage of this.
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> ---
>  tools/perf/arch/powerpc/util/Build        |    2 +-
>  tools/perf/arch/powerpc/util/pmu-events.c |   52 +++++++++++++++++++
>  tools/perf/arch/powerpc/util/pmu-events.h |   17 +++++++
>  tools/perf/util/pmu.c                     |   77 +++++++++++++++++++++++++++++
>  tools/perf/util/pmu.h                     |   10 ++++
>  5 files changed, 157 insertions(+), 1 deletion(-)
>  create mode 100644 tools/perf/arch/powerpc/util/pmu-events.c
>  create mode 100644 tools/perf/arch/powerpc/util/pmu-events.h
> 
> diff --git a/tools/perf/arch/powerpc/util/Build b/tools/perf/arch/powerpc/util/Build
> index 0af6e9b..52fbc7f 100644
> --- a/tools/perf/arch/powerpc/util/Build
> +++ b/tools/perf/arch/powerpc/util/Build
> @@ -1,4 +1,4 @@
> -libperf-y += header.o
> +libperf-y += header.o pmu-events.o
>  
>  libperf-$(CONFIG_DWARF) += dwarf-regs.o
>  libperf-$(CONFIG_DWARF) += skip-callchain-idx.o
> diff --git a/tools/perf/arch/powerpc/util/pmu-events.c b/tools/perf/arch/powerpc/util/pmu-events.c
> new file mode 100644
> index 0000000..7036f6d
> --- /dev/null
> +++ b/tools/perf/arch/powerpc/util/pmu-events.c
> @@ -0,0 +1,52 @@
> +#include <stdio.h>
> +#include <unistd.h>
> +#include <sys/types.h>
> +#include "pmu.h"
> +#include "pmu-events.h"
> +#include "../../util/debug.h"			/* verbose */
> +#include "header.h"				/* mfspr */
> +
> +static char *get_cpu_str(void)
> +{
> +	char *bufp;
> +
> +	if (asprintf(&bufp, "%.8lx-core", mfspr(SPRN_PVR)) < 0)
> +		bufp = NULL;
> +
> +	return bufp;
> +}
> +
> +struct perf_pmu_event *arch_get_events_table(char *cpustr)
> +{
> +	int i, nmaps, must_free;
> +	struct  perf_pmu_event *table;
> +
> +	must_free = 0;
> +	if (!cpustr) {
> +		cpustr = get_cpu_str();
> +		if (!cpustr)
> +			return NULL;
> +		must_free = 1;
> +	}
> +
> +	nmaps = sizeof(pvr_events_map) / sizeof(struct pvr_events_map_entry);
> +
> +	for (i = 0; i < nmaps; i++) {
> +		if (!strcmp(pvr_events_map[i].pvr, cpustr))
> +			break;
> +	}
> +
> +	table = NULL;
> +	if (i < nmaps) {
> +		/* pvr_events_map is a const; cast to override */
> +		table = (struct perf_pmu_event *)pvr_events_map[i].pmu_events;
> +	} else if (verbose) {
> +		printf("Unknown CPU %s, ignoring aliases\n", cpustr);
> +	}
> +
> +	if (must_free)
> +		free(cpustr);
> +
> +	return table;
> +}
> +
> diff --git a/tools/perf/arch/powerpc/util/pmu-events.h b/tools/perf/arch/powerpc/util/pmu-events.h
> new file mode 100644
> index 0000000..1daf8e5
> --- /dev/null
> +++ b/tools/perf/arch/powerpc/util/pmu-events.h
> @@ -0,0 +1,17 @@
> +/*
> + * Include all Power processor tables that we care about.
> + */
> +#include "power7-events.h"
> +#include "power8-events.h"
> +
> +/*
> + * Map a processor version (PVR) to its table of events.
> + */
> +struct pvr_events_map_entry {
> +	const char *pvr;
> +	const struct perf_pmu_event *pmu_events;
> +} pvr_events_map[] = {
> +	{ .pvr = "004d0100-core",	.pmu_events = power8_pmu_events },
> +	{ .pvr = "003f0201-core",	.pmu_events = power7_pmu_events }
> +};

Do you really need the header - this could go in the .c file?
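
Something like this, perhaps (just a sketch of the suggestion, not part of
the posted patches and untested):

	/* in tools/perf/arch/powerpc/util/pmu-events.c, instead of the header */
	#include "power7-events.h"
	#include "power8-events.h"

	/* Map a processor version (PVR) to its table of events. */
	static const struct pvr_events_map_entry {
		const char *pvr;
		const struct perf_pmu_event *pmu_events;
	} pvr_events_map[] = {
		{ .pvr = "004d0100-core", .pmu_events = power8_pmu_events },
		{ .pvr = "003f0201-core", .pmu_events = power7_pmu_events },
	};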

> +
> diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
> index 4841167..f998d91 100644
> --- a/tools/perf/util/pmu.c
> +++ b/tools/perf/util/pmu.c
> @@ -435,6 +435,80 @@ perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused)
>  	return NULL;
>  }
>  
> +/*
> + * Default arch_get_events_table() is empty.
> + *
> + * Actual implementation is in arch/$(ARCH)/util/pmu-events.c. This
> + * allows architectures to choose which set(s) of events to a) include
> + * in the perf binary and b) consider for _this_ invocation of perf.
> + *
> + * Eg: For Power, we include both Power7 and Power8 event tables in the
> + * 	perf binary. But depending on the processor where perf is executed,
> + * 	either the Power7 or Power8 table is returned.
> + */
> +struct perf_pmu_event * __attribute__ ((weak))
> +arch_get_events_table(char *cpustr __maybe_unused)
> +{
> +	return NULL;
> +}
> +
> +static int pmu_add_cpu_aliases(char *cpustr, void *data)
> +{
> +	struct list_head *head = (struct list_head *)data;
> +	struct perf_pmu_alias *alias;
> +	int i;
> +	struct perf_pmu_event *events_table, *event;
> +	struct parse_events_term *term;
> +
> +	events_table = arch_get_events_table(cpustr);
> +	if (!events_table)
> +		return 0;
> +
> +	for (i = 0; events_table[i].name != NULL; i++) {
> +		event = &events_table[i];
> +
> +		alias = malloc(sizeof(*alias));
> +		if (!alias)
> +			return -ENOMEM;
> +
> +		term = malloc(sizeof(*term));
> +		if (!term) {
> +			/*
> +			 * TODO: cleanup aliases allocated so far?
> +			 */
> +			free(alias);
> +			return -ENOMEM;
> +		}
> +
> +		/* ->config is not const; cast to override */
> +		term->config = (char *)"event";
> +		term->val.num = event->code;
> +		term->type_val = PARSE_EVENTS__TERM_TYPE_NUM;
> +		term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
> +		INIT_LIST_HEAD(&term->list);
> +		term->used = 0;
> +
> +		INIT_LIST_HEAD(&alias->terms);
> +		list_add_tail(&alias->terms, &term->list);
> +
> +		alias->scale = 1.0;
> +		alias->unit[0] = '\0';
> +		alias->per_pkg = false;
> +
> +		alias->name = strdup(event->name);
> +#if 0
> +		/*
> +		 * TODO: Need Andi Kleen's patch for ->desc
> +		 */
> +		alias->desc = event->short_desc ?
> +					strdup(event->short_desc) : NULL;
> +#endif
> +		list_add_tail(&alias->list, head);
> +	}
> +
> +	return 0;
> +}
> +
>  static struct perf_pmu *pmu_lookup(const char *name)
>  {
>  	struct perf_pmu *pmu;
> @@ -453,6 +527,9 @@ static struct perf_pmu *pmu_lookup(const char *name)
>  	if (pmu_aliases(name, &aliases))
>  		return NULL;
>  
> +	if (!strcmp(name, "cpu"))
> +		(void)pmu_add_cpu_aliases(NULL, &aliases);
> +
>  	if (pmu_type(name, &type))
>  		return NULL;
>  
> diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
> index 6b1249f..ca3e7a0 100644
> --- a/tools/perf/util/pmu.h
> +++ b/tools/perf/util/pmu.h
> @@ -45,6 +45,14 @@ struct perf_pmu_alias {
>  	bool snapshot;
>  };
>  
> +struct perf_pmu_event {
> +	const char *name;
> +	const unsigned long code;
> +	const char *short_desc;
> +	const char *long_desc;
> +	/* add unit, mask etc as needed here */
> +};
> +
>  struct perf_pmu *perf_pmu__find(const char *name);
>  int perf_pmu__config(struct perf_pmu *pmu, struct perf_event_attr *attr,
>  		     struct list_head *head_terms);
> @@ -76,4 +84,6 @@ int perf_pmu__test(void);
>  
>  struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu);
>  
> +struct perf_pmu_event *arch_get_events_table(char *cpustr);
> +
>  #endif /* __PMU_H */
> 


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC][PATCH 4/4] perf: Create aliases for PMU events
  2015-05-01  7:05 ` [RFC][PATCH 4/4] perf: Create aliases for PMU events Sukadev Bhattiprolu
  2015-05-02  7:04   ` Vineet Gupta
@ 2015-05-02 15:38   ` Jiri Olsa
  2015-05-04 11:44     ` Andi Kleen
  1 sibling, 1 reply; 9+ messages in thread
From: Jiri Olsa @ 2015-05-02 15:38 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: mingo, ak, Michael Ellerman, Arnaldo Carvalho de Melo,
	Paul Mackerras, linuxppc-dev, linux-kernel

On Fri, May 01, 2015 at 12:05:41AM -0700, Sukadev Bhattiprolu wrote:

SNIP

> +
> +static int pmu_add_cpu_aliases(char *cpustr, void *data)
> +{
> +	struct list_head *head = (struct list_head *)data;
> +	struct perf_pmu_alias *alias;
> +	int i;
> +	struct perf_pmu_event *events_table, *event;
> +	struct parse_events_term *term;
> +
> +	events_table = arch_get_events_table(cpustr);
> +	if (!events_table)
> +		return 0;
> +
> +	for (i = 0; events_table[i].name != NULL; i++) {
> +		event = &events_table[i];
> +
> +		alias = malloc(sizeof(*alias));
> +		if (!alias)
> +			return -ENOMEM;
> +
> +		term = malloc(sizeof(*term));
> +		if (!term) {
> +			/*
> +			 * TODO: cleanup aliases allocated so far?
> +			 */
> +			free(alias);
> +			return -ENOMEM;
> +		}
> +
> +		/* ->config is not const; cast to override */
> +		term->config = (char *)"event";
> +		term->val.num = event->code;
> +		term->type_val = PARSE_EVENTS__TERM_TYPE_NUM;
> +		term->type_term = PARSE_EVENTS__TERM_TYPE_USER;
> +		INIT_LIST_HEAD(&term->list);
> +		term->used = 0;

hmm, so I checked, and for x86 'struct perf_pmu_event::code' is not enough.

Andi introduced the following JSON notation for an event:

[
  {
    "EventCode": "0x00",
    "UMask": "0x01",
    "EventName": "INST_RETIRED.ANY",
    "BriefDescription": "Instructions retired from execution.",
    "PublicDescription": "Instructions retired from execution.",
    "Counter": "Fixed counter 1",
    "CounterHTOff": "Fixed counter 1",
    "SampleAfterValue": "2000003",
    "MSRIndex": "0",
    "MSRValue": "0",
    "TakenAlone": "0",
    "CounterMask": "0",
    "Invert": "0",
    "AnyThread": "0",
    "EdgeDetect": "0",
    "PEBS": "0",
    "PRECISE_STORE": "0",
    "Errata": "null",
    "Offcore": "0"
  }

which gets processed/translated into a string of terms ("event=..,umask=..,...")
which is then used to create the alias ('struct perf_pmu_alias')
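
For the INST_RETIRED.ANY entry above, that term string would be roughly
"event=0x00,umask=0x01" (assuming the usual "event"/"umask" format terms).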

please check Andi's patch:
  [PATCH v9 04/11] perf, tools: Add support for reading JSON event files

  function json_events


I wonder whether we should stay with the JSON description and, at build time,
translate all events (for the architecture) into strings in the term format
"event=..,umask=..,..."; this array of strings would then be loaded
as aliases at runtime

so we would have an architecture-specific tool that translates the JSON
event data into an array of strings (events in term form for the given
architecture), which would get compiled into perf
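
something like this, perhaps (only a sketch of the idea; the names and the
generated-file layout are made up):

	/* hypothetical output of a build-time JSON -> C generator */
	struct pmu_event {
		const char *name;
		const char *event;	/* term string, parsed like a sysfs alias */
		const char *desc;
	};

	static const struct pmu_event pme_generated[] = {
		{
			.name  = "inst_retired.any",
			.event = "event=0x00,umask=0x01",
			.desc  = "Instructions retired from execution.",
		},
		{ /* sentinel */ },
	};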

I personally like having the set of event files in JSON notation
rather than having them directly in a C structure

thoughts?

jirka

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC][PATCH 4/4] perf: Create aliases for PMU events
  2015-05-02 15:38   ` Jiri Olsa
@ 2015-05-04 11:44     ` Andi Kleen
  0 siblings, 0 replies; 9+ messages in thread
From: Andi Kleen @ 2015-05-04 11:44 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Sukadev Bhattiprolu, mingo, Michael Ellerman,
	Arnaldo Carvalho de Melo, Paul Mackerras, linuxppc-dev,
	linux-kernel

> I personally like having the set of event files in JSON notation
> rather than having them directly in a C structure

Yes, strings are better and JSON input is also better. 

I prototyped translating JSON into the proposed structures. I already had to
add three new fields, and it wouldn't work for uncore. The string 
format is much more extensible.

BTW as expected the binary sizes are gigantic (for 14 CPU types):

% size all.o
   text    data     bss     dec     hex filename
 662698       0       0  662698   a1caa all.o

% gcc -E all.c | wc -l
55475

-Andi

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2015-05-04 11:44 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-01  7:05 [RFC][PATCH 0/4] perf: Enable symbolic event names Sukadev Bhattiprolu
2015-05-01  7:05 ` [RFC][PATCH 1/4] perf: Create a table of Power7 PMU events Sukadev Bhattiprolu
2015-05-01  7:05 ` [RFC][PATCH 2/4] perf: Create a table of Power8 " Sukadev Bhattiprolu
2015-05-01  7:05 ` [RFC][PATCH 3/4] perf/powerpc: Move mfspr and friends to header file Sukadev Bhattiprolu
2015-05-01  7:05 ` [RFC][PATCH 4/4] perf: Create aliases for PMU events Sukadev Bhattiprolu
2015-05-02  7:04   ` Vineet Gupta
2015-05-02 15:38   ` Jiri Olsa
2015-05-04 11:44     ` Andi Kleen
2015-05-02  7:02 ` [RFC][PATCH 0/4] perf: Enable symbolic event names Vineet Gupta
