linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] perf: Add support for PMU events in JSON format
@ 2015-05-20  0:02 Sukadev Bhattiprolu
  2015-05-20  0:02 ` [PATCH 1/4] perf: Add jsmn `jasmine' JSON parser Sukadev Bhattiprolu
                   ` (3 more replies)
  0 siblings, 4 replies; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-20  0:02 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: namhyung, linuxppc-dev, linux-kernel

CPUs support a large number of performance monitoring events (PMU events)
and often these events are very specific to an architecture/model of the
CPU. To use most of these PMU events with perf we currently have to identify
the events by their raw codes:

	perf stat -e r100f2 sleep 1

This patchset allows architectures to specify these PMU events in JSON
files, which are kept in the tools/perf/pmu-events/arch/ directory of
the mainline tree.

	E.g., a snippet from 004d0100.json (in patch 4):
	[
		
	  {
	    "EventCode": "0x100f2",
	    "EventName": "PM_1PLUS_PPC_CMPL",
	    "BriefDescription": "1 or more ppc insts finished,",
	    "PublicDescription": "1 or more ppc insts finished (completed).,"
	  },
	]

When building the perf tool, this patchset first builds a 'jevents' binary,
which locates all the JSON files for the architecture (currently Powerpc).
The jevents binary then translates the JSON files into a C-style
"PMU events table":

	struct pmu_event pme_004d0100_core[] = {
		
		...

		{
			.name = "pm_1plus_ppc_cmpl",
			.event = "event=0x100f2",
			.desc = "1 or more ppc insts finished,",
		},

		...
	}

The jevents binary also looks for a "mapfile" to map a processor model/
version to a specific events table:

	$ cat mapfile.csv
	IBM-Power8-9188,004d0100,004d0100-core.json,core
	
and uses this to build a mapping table:

	struct pmu_events_map pmu_events_map[] = {
	{
		.vfm = "IBM-Power8-9188",
		.version = "004d0100",
		.type = "core",
		.table = pme_004d0100_core
	},
	
This mapping table and the events tables for the architecture are then
built into the perf binary.

At run time, perf identifies the specific events table based on the model
of the CPU it is running on. Perf uses that table to create event aliases,
which allow the user to specify the event as:

	perf stat -e pm_1plus_ppc_cmpl sleep 1
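
To make the run-time step concrete, below is a minimal sketch of that lookup
(it reuses the structure names shown above; arch_pmu_events_match_cpu() is
the per-arch hook added in patch 3, and add_alias() is a hypothetical helper
standing in for perf's alias-creation code):

	struct pmu_events_map *map;
	struct pmu_event *pe;

	/* Walk the mapping table until the entry for this CPU is found */
	for (map = pmu_events_map; map->table; map++) {
		if (arch_pmu_events_match_cpu(map->vfm, map->version,
					      map->type)) {
			/* Register every event in that table as an alias */
			for (pe = map->table; pe->name; pe++)
				add_alias(pe->name, pe->event);
			break;
		}
	}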

Note:
	- All known events tables for the architecture are included in the
	  perf binary.

	- Inconsistencies between the JSON files and the mapfile can result
	  in build failures in perf (although jevents tries to recover from
	  some errors and continue the build by leaving out event aliases).

	- For architectures that don't have any JSON files, an empty mapping
	  table is created and their perf build should continue to succeed
	  (see the sketch below).
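
	  The "empty" mapping table mentioned above is just the NULL
	  terminating entry that every generated pmu-events.c ends with,
	  roughly:

		struct pmu_events_map pmu_events_map[] = {
		{
			.vfm = 0,
			.version = 0,
			.type = 0,
			.table = 0,
		},
		};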

Andi Kleen (2):
  perf, tools: Add jsmn `jasmine' JSON parser
  jevents: Program to convert JSON file to C style file

Sukadev Bhattiprolu (2):
  Use pmu_events_map table to create event aliases
  perf: Add power8 PMU events in json format

 tools/perf/Build                                   |    1 +
 tools/perf/Makefile.perf                           |    4 +-
 tools/perf/arch/powerpc/util/header.c              |   33 +
 tools/perf/pmu-events/Build                        |   38 +
 tools/perf/pmu-events/README                       |   67 +
 .../pmu-events/arch/powerpc/004d0100-core.json     | 5766 ++++++++++++++++++++
 tools/perf/pmu-events/arch/powerpc/mapfile.csv     |    1 +
 tools/perf/pmu-events/arch/powerpc/power8.json     | 5766 ++++++++++++++++++++
 tools/perf/pmu-events/jevents.c                    |  700 +++
 tools/perf/pmu-events/jevents.h                    |   17 +
 tools/perf/pmu-events/jsmn.c                       |  313 ++
 tools/perf/pmu-events/jsmn.h                       |   67 +
 tools/perf/pmu-events/json.c                       |  162 +
 tools/perf/pmu-events/json.h                       |   36 +
 tools/perf/pmu-events/pmu-events.h                 |   39 +
 tools/perf/util/header.h                           |    4 +-
 tools/perf/util/pmu.c                              |  104 +-
 17 files changed, 13103 insertions(+), 15 deletions(-)
 create mode 100644 tools/perf/pmu-events/Build
 create mode 100644 tools/perf/pmu-events/README
 create mode 100644 tools/perf/pmu-events/arch/powerpc/004d0100-core.json
 create mode 100644 tools/perf/pmu-events/arch/powerpc/mapfile.csv
 create mode 100644 tools/perf/pmu-events/arch/powerpc/power8.json
 create mode 100644 tools/perf/pmu-events/jevents.c
 create mode 100644 tools/perf/pmu-events/jevents.h
 create mode 100644 tools/perf/pmu-events/jsmn.c
 create mode 100644 tools/perf/pmu-events/jsmn.h
 create mode 100644 tools/perf/pmu-events/json.c
 create mode 100644 tools/perf/pmu-events/json.h
 create mode 100644 tools/perf/pmu-events/pmu-events.h

-- 
1.7.9.5



* [PATCH 1/4] perf: Add jsmn `jasmine' JSON parser
  2015-05-20  0:02 [PATCH 0/4] perf: Add support for PMU events in JSON format Sukadev Bhattiprolu
@ 2015-05-20  0:02 ` Sukadev Bhattiprolu
  2015-05-20  0:02 ` [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file Sukadev Bhattiprolu
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-20  0:02 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: namhyung, linuxppc-dev, linux-kernel

From: Andi Kleen <ak@linux.intel.com>

I need a JSON parser. This adds the simplest JSON
parser I could find -- Serge Zaitsev's jsmn `jasmine' --
to the perf library. I merely converted it to (mostly)
Linux style and added support for non-zero-terminated input.

The parser is quite straightforward and does not
copy any data; it just returns tokens with offsets
into the input buffer. So it's relatively efficient
and simple to use.

The code is not fully checkpatch clean, but I didn't
want to completely fork the upstream code.

Original source: http://zserge.bitbucket.org/jsmn.html

In addition I added a simple wrapper that mmaps a JSON
file and provides some straightforward access functions.

Used in follow-on patches to parse event files.
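
As a rough illustration of the wrapper API (a hypothetical caller; the real
users arrive with jevents in the next patch), this prints the line number of
every "EventName" key found in an event file:

	#include <stdio.h>
	#include "json.h"

	static void list_event_lines(const char *fn)
	{
		char *map;
		size_t size;
		int i, len;
		jsmntok_t *tokens = parse_json(fn, &map, &size, &len);

		if (!tokens)
			return;
		/* Tokens only hold offsets into 'map'; nothing is copied */
		for (i = 0; i < len; i++) {
			jsmntok_t *t = &tokens[i];

			if (t->type == JSMN_STRING &&
			    json_streq(map, t, "EventName"))
				printf("event at line %d\n", json_line(map, t));
		}
		free_json(map, size, tokens);
	}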

Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---

v2: Address review feedback.
v3: Minor checkpatch fixes.
v4 (by Sukadev Bhattiprolu)
	- Rebase to 4.0 and fix minor conflicts in tools/perf/Makefile.perf
	- Report error if specified events file is invalid.
v5 (Sukadev Bhattiprolu)
	- Move files to tools/perf/pmu-events/ since parsing of JSON file
	now occurs when _building_ rather than running perf.
---
 tools/perf/pmu-events/jsmn.c |  313 ++++++++++++++++++++++++++++++++++++++++++
 tools/perf/pmu-events/jsmn.h |   67 +++++++++
 tools/perf/pmu-events/json.c |  162 ++++++++++++++++++++++
 tools/perf/pmu-events/json.h |   36 +++++
 4 files changed, 578 insertions(+)
 create mode 100644 tools/perf/pmu-events/jsmn.c
 create mode 100644 tools/perf/pmu-events/jsmn.h
 create mode 100644 tools/perf/pmu-events/json.c
 create mode 100644 tools/perf/pmu-events/json.h

diff --git a/tools/perf/pmu-events/jsmn.c b/tools/perf/pmu-events/jsmn.c
new file mode 100644
index 0000000..11d1fa1
--- /dev/null
+++ b/tools/perf/pmu-events/jsmn.c
@@ -0,0 +1,313 @@
+/*
+ * Copyright (c) 2010 Serge A. Zaitsev
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ *
+ * Slightly modified by AK to not assume 0 terminated input.
+ */
+
+#include <stdlib.h>
+#include "jsmn.h"
+
+/*
+ * Allocates a fresh unused token from the token pool.
+ */
+static jsmntok_t *jsmn_alloc_token(jsmn_parser *parser,
+				   jsmntok_t *tokens, size_t num_tokens)
+{
+	jsmntok_t *tok;
+
+	if ((unsigned)parser->toknext >= num_tokens)
+		return NULL;
+	tok = &tokens[parser->toknext++];
+	tok->start = tok->end = -1;
+	tok->size = 0;
+	return tok;
+}
+
+/*
+ * Fills token type and boundaries.
+ */
+static void jsmn_fill_token(jsmntok_t *token, jsmntype_t type,
+			    int start, int end)
+{
+	token->type = type;
+	token->start = start;
+	token->end = end;
+	token->size = 0;
+}
+
+/*
+ * Fills next available token with JSON primitive.
+ */
+static jsmnerr_t jsmn_parse_primitive(jsmn_parser *parser, const char *js,
+				      size_t len,
+				      jsmntok_t *tokens, size_t num_tokens)
+{
+	jsmntok_t *token;
+	int start;
+
+	start = parser->pos;
+
+	for (; parser->pos < len; parser->pos++) {
+		switch (js[parser->pos]) {
+#ifndef JSMN_STRICT
+		/*
+		 * In strict mode primitive must be followed by ","
+		 * or "}" or "]"
+		 */
+		case ':':
+#endif
+		case '\t':
+		case '\r':
+		case '\n':
+		case ' ':
+		case ',':
+		case ']':
+		case '}':
+			goto found;
+		default:
+			break;
+		}
+		if (js[parser->pos] < 32 || js[parser->pos] >= 127) {
+			parser->pos = start;
+			return JSMN_ERROR_INVAL;
+		}
+	}
+#ifdef JSMN_STRICT
+	/*
+	 * In strict mode primitive must be followed by a
+	 * comma/object/array.
+	 */
+	parser->pos = start;
+	return JSMN_ERROR_PART;
+#endif
+
+found:
+	token = jsmn_alloc_token(parser, tokens, num_tokens);
+	if (token == NULL) {
+		parser->pos = start;
+		return JSMN_ERROR_NOMEM;
+	}
+	jsmn_fill_token(token, JSMN_PRIMITIVE, start, parser->pos);
+	parser->pos--; /* parent sees closing brackets */
+	return JSMN_SUCCESS;
+}
+
+/*
+ * Fills next token with JSON string.
+ */
+static jsmnerr_t jsmn_parse_string(jsmn_parser *parser, const char *js,
+				   size_t len,
+				   jsmntok_t *tokens, size_t num_tokens)
+{
+	jsmntok_t *token;
+	int start = parser->pos;
+
+	/* Skip starting quote */
+	parser->pos++;
+
+	for (; parser->pos < len; parser->pos++) {
+		char c = js[parser->pos];
+
+		/* Quote: end of string */
+		if (c == '\"') {
+			token = jsmn_alloc_token(parser, tokens, num_tokens);
+			if (token == NULL) {
+				parser->pos = start;
+				return JSMN_ERROR_NOMEM;
+			}
+			jsmn_fill_token(token, JSMN_STRING, start+1,
+					parser->pos);
+			return JSMN_SUCCESS;
+		}
+
+		/* Backslash: Quoted symbol expected */
+		if (c == '\\') {
+			parser->pos++;
+			switch (js[parser->pos]) {
+				/* Allowed escaped symbols */
+			case '\"':
+			case '/':
+			case '\\':
+			case 'b':
+			case 'f':
+			case 'r':
+			case 'n':
+			case 't':
+				break;
+				/* Allows escaped symbol \uXXXX */
+			case 'u':
+				/* TODO */
+				break;
+				/* Unexpected symbol */
+			default:
+				parser->pos = start;
+				return JSMN_ERROR_INVAL;
+			}
+		}
+	}
+	parser->pos = start;
+	return JSMN_ERROR_PART;
+}
+
+/*
+ * Parse JSON string and fill tokens.
+ */
+jsmnerr_t jsmn_parse(jsmn_parser *parser, const char *js, size_t len,
+		     jsmntok_t *tokens, unsigned int num_tokens)
+{
+	jsmnerr_t r;
+	int i;
+	jsmntok_t *token;
+
+	for (; parser->pos < len; parser->pos++) {
+		char c;
+		jsmntype_t type;
+
+		c = js[parser->pos];
+		switch (c) {
+		case '{':
+		case '[':
+			token = jsmn_alloc_token(parser, tokens, num_tokens);
+			if (token == NULL)
+				return JSMN_ERROR_NOMEM;
+			if (parser->toksuper != -1)
+				tokens[parser->toksuper].size++;
+			token->type = (c == '{' ? JSMN_OBJECT : JSMN_ARRAY);
+			token->start = parser->pos;
+			parser->toksuper = parser->toknext - 1;
+			break;
+		case '}':
+		case ']':
+			type = (c == '}' ? JSMN_OBJECT : JSMN_ARRAY);
+			for (i = parser->toknext - 1; i >= 0; i--) {
+				token = &tokens[i];
+				if (token->start != -1 && token->end == -1) {
+					if (token->type != type)
+						return JSMN_ERROR_INVAL;
+					parser->toksuper = -1;
+					token->end = parser->pos + 1;
+					break;
+				}
+			}
+			/* Error if unmatched closing bracket */
+			if (i == -1)
+				return JSMN_ERROR_INVAL;
+			for (; i >= 0; i--) {
+				token = &tokens[i];
+				if (token->start != -1 && token->end == -1) {
+					parser->toksuper = i;
+					break;
+				}
+			}
+			break;
+		case '\"':
+			r = jsmn_parse_string(parser, js, len, tokens,
+					      num_tokens);
+			if (r < 0)
+				return r;
+			if (parser->toksuper != -1)
+				tokens[parser->toksuper].size++;
+			break;
+		case '\t':
+		case '\r':
+		case '\n':
+		case ':':
+		case ',':
+		case ' ':
+			break;
+#ifdef JSMN_STRICT
+			/*
+			 * In strict mode primitives are:
+			 * numbers and booleans.
+			 */
+		case '-':
+		case '0':
+		case '1':
+		case '2':
+		case '3':
+		case '4':
+		case '5':
+		case '6':
+		case '7':
+		case '8':
+		case '9':
+		case 't':
+		case 'f':
+		case 'n':
+#else
+			/*
+			 * In non-strict mode every unquoted value
+			 * is a primitive.
+			 */
+			/*FALL THROUGH */
+		default:
+#endif
+			r = jsmn_parse_primitive(parser, js, len, tokens,
+						 num_tokens);
+			if (r < 0)
+				return r;
+			if (parser->toksuper != -1)
+				tokens[parser->toksuper].size++;
+			break;
+
+#ifdef JSMN_STRICT
+			/* Unexpected char in strict mode */
+		default:
+			return JSMN_ERROR_INVAL;
+#endif
+		}
+	}
+
+	for (i = parser->toknext - 1; i >= 0; i--) {
+		/* Unmatched opened object or array */
+		if (tokens[i].start != -1 && tokens[i].end == -1)
+			return JSMN_ERROR_PART;
+	}
+
+	return JSMN_SUCCESS;
+}
+
+/*
+ * Creates a new parser over a given buffer with an array of tokens
+ * available.
+ */
+void jsmn_init(jsmn_parser *parser)
+{
+	parser->pos = 0;
+	parser->toknext = 0;
+	parser->toksuper = -1;
+}
+
+const char *jsmn_strerror(jsmnerr_t err)
+{
+	switch (err) {
+	case JSMN_ERROR_NOMEM:
+		return "No enough tokens";
+	case JSMN_ERROR_INVAL:
+		return "Invalid character inside JSON string";
+	case JSMN_ERROR_PART:
+		return "The string is not a full JSON packet, more bytes expected";
+	case JSMN_SUCCESS:
+		return "Success";
+	default:
+		return "Unknown json error";
+	}
+}
diff --git a/tools/perf/pmu-events/jsmn.h b/tools/perf/pmu-events/jsmn.h
new file mode 100644
index 0000000..d666b10
--- /dev/null
+++ b/tools/perf/pmu-events/jsmn.h
@@ -0,0 +1,67 @@
+#ifndef __JSMN_H_
+#define __JSMN_H_
+
+/*
+ * JSON type identifier. Basic types are:
+ *	o Object
+ *	o Array
+ *	o String
+ *	o Other primitive: number, boolean (true/false) or null
+ */
+typedef enum {
+	JSMN_PRIMITIVE = 0,
+	JSMN_OBJECT = 1,
+	JSMN_ARRAY = 2,
+	JSMN_STRING = 3
+} jsmntype_t;
+
+typedef enum {
+	/* Not enough tokens were provided */
+	JSMN_ERROR_NOMEM = -1,
+	/* Invalid character inside JSON string */
+	JSMN_ERROR_INVAL = -2,
+	/* The string is not a full JSON packet, more bytes expected */
+	JSMN_ERROR_PART = -3,
+	/* Everything was fine */
+	JSMN_SUCCESS = 0
+} jsmnerr_t;
+
+/*
+ * JSON token description.
+ * @param		type	type (object, array, string etc.)
+ * @param		start	start position in JSON data string
+ * @param		end		end position in JSON data string
+ */
+typedef struct {
+	jsmntype_t type;
+	int start;
+	int end;
+	int size;
+} jsmntok_t;
+
+/*
+ * JSON parser. Contains an array of token blocks available. Also stores
+ * the string being parsed now and current position in that string
+ */
+typedef struct {
+	unsigned int pos; /* offset in the JSON string */
+	int toknext; /* next token to allocate */
+	int toksuper; /* superior token node, e.g parent object or array */
+} jsmn_parser;
+
+/*
+ * Create JSON parser over an array of tokens
+ */
+void jsmn_init(jsmn_parser *parser);
+
+/*
+ * Run JSON parser. It parses a JSON data string into and array of tokens,
+ * each describing a single JSON object.
+ */
+jsmnerr_t jsmn_parse(jsmn_parser *parser, const char *js,
+		     size_t len,
+		     jsmntok_t *tokens, unsigned int num_tokens);
+
+const char *jsmn_strerror(jsmnerr_t err);
+
+#endif /* __JSMN_H_ */
diff --git a/tools/perf/pmu-events/json.c b/tools/perf/pmu-events/json.c
new file mode 100644
index 0000000..87f0c4b
--- /dev/null
+++ b/tools/perf/pmu-events/json.c
@@ -0,0 +1,162 @@
+/* Parse JSON files using the JSMN parser. */
+
+/*
+ * Copyright (c) 2014, Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <sys/fcntl.h>
+#include <stdio.h>
+#include <errno.h>
+#include <unistd.h>
+#include "jsmn.h"
+#include "json.h"
+#include <linux/kernel.h>
+
+
+static char *mapfile(const char *fn, size_t *size)
+{
+	unsigned ps = sysconf(_SC_PAGESIZE);
+	struct stat st;
+	char *map = NULL;
+	int err;
+	int fd = open(fn, O_RDONLY);
+
+	if (fd < 0 && verbose && fn) {
+		pr_err("Error opening events file '%s': %s\n", fn,
+				strerror(errno));
+	}
+
+	if (fd < 0)
+		return NULL;
+	err = fstat(fd, &st);
+	if (err < 0)
+		goto out;
+	*size = st.st_size;
+	map = mmap(NULL,
+		   (st.st_size + ps - 1) & ~(ps - 1),
+		   PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
+	if (map == MAP_FAILED)
+		map = NULL;
+out:
+	close(fd);
+	return map;
+}
+
+static void unmapfile(char *map, size_t size)
+{
+	unsigned ps = sysconf(_SC_PAGESIZE);
+	munmap(map, roundup(size, ps));
+}
+
+/*
+ * Parse json file using jsmn. Return array of tokens,
+ * and mapped file. Caller needs to free array.
+ */
+jsmntok_t *parse_json(const char *fn, char **map, size_t *size, int *len)
+{
+	jsmn_parser parser;
+	jsmntok_t *tokens;
+	jsmnerr_t res;
+	unsigned sz;
+
+	*map = mapfile(fn, size);
+	if (!*map)
+		return NULL;
+	/* Heuristic */
+	sz = *size * 16;
+	tokens = malloc(sz);
+	if (!tokens)
+		goto error;
+	jsmn_init(&parser);
+	res = jsmn_parse(&parser, *map, *size, tokens,
+			 sz / sizeof(jsmntok_t));
+	if (res != JSMN_SUCCESS) {
+		pr_err("%s: json error %s\n", fn, jsmn_strerror(res));
+		goto error_free;
+	}
+	if (len)
+		*len = parser.toknext;
+	return tokens;
+error_free:
+	free(tokens);
+error:
+	unmapfile(*map, *size);
+	return NULL;
+}
+
+void free_json(char *map, size_t size, jsmntok_t *tokens)
+{
+	free(tokens);
+	unmapfile(map, size);
+}
+
+static int countchar(char *map, char c, int end)
+{
+	int i;
+	int count = 0;
+	for (i = 0; i < end; i++)
+		if (map[i] == c)
+			count++;
+	return count;
+}
+
+/* Return line number of a jsmn token */
+int json_line(char *map, jsmntok_t *t)
+{
+	return countchar(map, '\n', t->start) + 1;
+}
+
+static const char * const jsmn_types[] = {
+	[JSMN_PRIMITIVE] = "primitive",
+	[JSMN_ARRAY] = "array",
+	[JSMN_OBJECT] = "object",
+	[JSMN_STRING] = "string"
+};
+
+#define LOOKUP(a, i) ((i) < (sizeof(a)/sizeof(*(a))) ? ((a)[i]) : "?")
+
+/* Return type name of a jsmn token */
+const char *json_name(jsmntok_t *t)
+{
+	return LOOKUP(jsmn_types, t->type);
+}
+
+int json_len(jsmntok_t *t)
+{
+	return t->end - t->start;
+}
+
+/* Is string t equal to s? */
+int json_streq(char *map, jsmntok_t *t, const char *s)
+{
+	unsigned len = json_len(t);
+	return len == strlen(s) && !strncasecmp(map + t->start, s, len);
+}
diff --git a/tools/perf/pmu-events/json.h b/tools/perf/pmu-events/json.h
new file mode 100644
index 0000000..6b8337e
--- /dev/null
+++ b/tools/perf/pmu-events/json.h
@@ -0,0 +1,36 @@
+#ifndef JSON_H
+#define JSON_H 1
+
+#include "jsmn.h"
+
+jsmntok_t *parse_json(const char *fn, char **map, size_t *size, int *len);
+void free_json(char *map, size_t size, jsmntok_t *tokens);
+int json_line(char *map, jsmntok_t *t);
+const char *json_name(jsmntok_t *t);
+int json_streq(char *map, jsmntok_t *t, const char *s);
+int json_len(jsmntok_t *t);
+
+extern int verbose;
+
+typedef unsigned int bool;
+
+#ifndef true
+#define	true 1
+#endif
+
+extern int eprintf(int level, int var, const char *fmt, ...);
+#define pr_fmt(fmt)	fmt
+
+#define pr_err(fmt, ...) \
+	eprintf(0, verbose, pr_fmt(fmt), ##__VA_ARGS__)
+
+#ifndef roundup
+#define roundup(x, y) (                                \
+{                                                      \
+        const typeof(y) __y = y;                       \
+        (((x) + (__y - 1)) / __y) * __y;               \
+}                                                      \
+)
+#endif
+
+#endif
-- 
1.7.9.5



* [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-20  0:02 [PATCH 0/4] perf: Add support for PMU events in JSON format Sukadev Bhattiprolu
  2015-05-20  0:02 ` [PATCH 1/4] perf: Add jsmn `jasmine' JSON parser Sukadev Bhattiprolu
@ 2015-05-20  0:02 ` Sukadev Bhattiprolu
  2015-05-22 14:56   ` Jiri Olsa
                     ` (2 more replies)
  2015-05-20  0:02 ` [PATCH 3/4] perf: Use pmu_events_map table to create event aliases Sukadev Bhattiprolu
  2015-05-20  0:02 ` [PATCH 4/4] perf: Add power8 PMU events in JSON format Sukadev Bhattiprolu
  3 siblings, 3 replies; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-20  0:02 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: namhyung, linuxppc-dev, linux-kernel

From: Andi Kleen <ak@linux.intel.com>

This is a modified version of an earlier patch by Andi Kleen.

We expect architectures to describe the performance monitoring events
for each CPU in a corresponding JSON file, which looks like:

	[
	{
	"EventCode": "0x00",
	"UMask": "0x01",
	"EventName": "INST_RETIRED.ANY",
	"BriefDescription": "Instructions retired from execution.",
	"PublicDescription": "Instructions retired from execution.",
	"Counter": "Fixed counter 1",
	"CounterHTOff": "Fixed counter 1",
	"SampleAfterValue": "2000003",
	"SampleAfterValue": "2000003",
	"MSRIndex": "0",
	"MSRValue": "0",
	"TakenAlone": "0",
	"CounterMask": "0",
	"Invert": "0",
	"AnyThread": "0",
	"EdgeDetect": "0",
	"PEBS": "0",
	"PRECISE_STORE": "0",
	"Errata": "null",
	"Offcore": "0"
	}
	]

We also expect architectures to provide a mapping from individual
CPUs to their JSON files, e.g.:

	GenuineIntel-6-1E,V1,/NHM-EP/NehalemEP_core_V1.json,core

which maps each CPU, identified by [vendor, family, model, version, type]
to a JSON file.

Given these files, the jevents program:
	- locates all JSON files for the architecture,
	- parses each JSON file and generates a C-style "PMU-events table"
	  (pmu-events.c)
	- locates a mapfile for the architecture
	- builds a global table, mapping each model of CPU to the
	  corresponding PMU-events table.

The pmu-events.c file is generated when building perf and added to libperf.a.
The global pmu_events_map[] table in this pmu-events.c will be used
by perf in a follow-on patch.

If the architecture does not have any JSON files or there is an error in
processing them, an empty mapping file is created. This would allow the
build of perf to proceed even if we are not able to provide aliases for
events.

The parser for JSON files can parse Intel-style JSON event files, which
allows an Intel event list to be used directly with perf. The Intel event
lists can be quite large and are too big to store in unswappable kernel memory.

The conversion from JSON to the C-style table is straightforward. The
parser knows only a little Intel-specific information and can be easily
extended to handle fields for other CPUs.
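
As an illustration (not literal jevents output), the Intel example above
would be reduced to roughly the following table entry; only the fields the
parser knows about (EventCode, UMask, SampleAfterValue, ...) are kept, and
zero-valued fields are dropped:

	{
		.name = "inst_retired.any",
		.event = "event=0x00,umask=0x01,period=2000003",
		.desc = "Instructions retired from execution",
	},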

The parser code is partially shared with an independent parsing library,
which is 2-clause BSD licenced. To avoid any conflicts I marked those
files as BSD licenced too. As part of perf they become GPLv2.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>

v2: Address review feedback. Rename option to --event-files
v3: Add JSON example
v4: Update manpages.
v5: Don't remove dot in fixname. Fix compile error. Add include
	protection. Comment realloc.
v6: Include debug/util.h
v7: (Sukadev Bhattiprolu)
	Rebase to 4.0 and fix some conflicts.
v8: (Sukadev Bhattiprolu)
	Move jevents.[hc] to tools/perf/pmu-events/
	Rewrite to locate and process arch specific JSON and "map" files;
	and generate a C file.
	(Removed acked-by Namhyung Kim due to modest changes to patch)
	Compile the generated pmu-events.c and add the pmu-events.o to
	libperf.a
---
 tools/perf/Build                   |    1 +
 tools/perf/Makefile.perf           |    4 +-
 tools/perf/pmu-events/Build        |   38 ++
 tools/perf/pmu-events/README       |   67 ++++
 tools/perf/pmu-events/jevents.c    |  700 ++++++++++++++++++++++++++++++++++++
 tools/perf/pmu-events/jevents.h    |   17 +
 tools/perf/pmu-events/pmu-events.h |   39 ++
 7 files changed, 865 insertions(+), 1 deletion(-)
 create mode 100644 tools/perf/pmu-events/Build
 create mode 100644 tools/perf/pmu-events/README
 create mode 100644 tools/perf/pmu-events/jevents.c
 create mode 100644 tools/perf/pmu-events/jevents.h
 create mode 100644 tools/perf/pmu-events/pmu-events.h

diff --git a/tools/perf/Build b/tools/perf/Build
index b77370e..40bffa0 100644
--- a/tools/perf/Build
+++ b/tools/perf/Build
@@ -36,6 +36,7 @@ CFLAGS_builtin-help.o      += $(paths)
 CFLAGS_builtin-timechart.o += $(paths)
 CFLAGS_perf.o              += -DPERF_HTML_PATH="BUILD_STR($(htmldir_SQ))" -include $(OUTPUT)PERF-VERSION-FILE
 
+libperf-y += pmu-events/
 libperf-y += util/
 libperf-y += arch/
 libperf-y += ui/
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index c43a205..d078c71 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -306,6 +306,8 @@ perf.spec $(SCRIPTS) \
 ifneq ($(OUTPUT),)
 %.o: $(OUTPUT)%.o
 	@echo "    # Redirected target $@ => $(OUTPUT)$@"
+pmu-events/%.o: $(OUTPUT)pmu-events/%.o
+	@echo "    # Redirected target $@ => $(OUTPUT)$@"
 util/%.o: $(OUTPUT)util/%.o
 	@echo "    # Redirected target $@ => $(OUTPUT)$@"
 bench/%.o: $(OUTPUT)bench/%.o
@@ -529,7 +531,7 @@ clean: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean config-clean
 	$(call QUIET_CLEAN, core-objs)  $(RM) $(LIB_FILE) $(OUTPUT)perf-archive $(OUTPUT)perf-with-kcore $(LANG_BINDINGS)
 	$(Q)find . -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete
 	$(Q)$(RM) .config-detected
-	$(call QUIET_CLEAN, core-progs) $(RM) $(ALL_PROGRAMS) perf perf-read-vdso32 perf-read-vdsox32
+	$(call QUIET_CLEAN, core-progs) $(RM) $(ALL_PROGRAMS) perf perf-read-vdso32 perf-read-vdsox32 $(OUTPUT)pmu-events/jevents
 	$(call QUIET_CLEAN, core-gen)   $(RM)  *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)FEATURE-DUMP $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex*
 	$(QUIET_SUBDIR0)Documentation $(QUIET_SUBDIR1) clean
 	$(python-clean)
diff --git a/tools/perf/pmu-events/Build b/tools/perf/pmu-events/Build
new file mode 100644
index 0000000..7a2aaaf
--- /dev/null
+++ b/tools/perf/pmu-events/Build
@@ -0,0 +1,38 @@
+.SUFFIXES:
+
+libperf-y += pmu-events.o
+
+JEVENTS =	$(OUTPUT)pmu-events/jevents
+JEVENTS_OBJS =	$(OUTPUT)pmu-events/json.o $(OUTPUT)pmu-events/jsmn.o \
+		$(OUTPUT)pmu-events/jevents.o
+
+PMU_EVENTS =	$(srctree)/tools/perf/pmu-events/
+
+all: $(OUTPUT)pmu-events.o
+
+$(OUTPUT)pmu-events/jevents: $(JEVENTS_OBJS)
+	$(call rule_mkdir)
+	$(CC) -o $@ $(JEVENTS_OBJS)
+
+#
+# Look for JSON files in $(PMU_EVENTS)/arch directory,
+# process them and create tables in $(PMU_EVENTS)/pmu-events.c
+#
+pmu-events/pmu-events.c: $(JEVENTS) FORCE
+	$(JEVENTS) $(PMU_EVENTS)/arch $(PMU_EVENTS)/pmu-events.c
+ 
+
+#
+# If we fail to build pmu-events.o, it could very well be due to
+# inconsistencies between the architecture's mapfile.csv and the
+# directory tree. If the compilation of the pmu-events.c generated
+# by jevents fails, create an "empty" mapping table in pmu-events.c
+# so the build of perf can succeed even if we are not able to use
+# the PMU event aliases.
+#
+
+clean:
+	rm -f $(JEVENTS_OBJS) $(JEVENTS) $(OUTPUT)pmu-events.o \
+		$(PMU_EVENTS)pmu-events.c
+
+FORCE:
diff --git a/tools/perf/pmu-events/README b/tools/perf/pmu-events/README
new file mode 100644
index 0000000..d9ed641
--- /dev/null
+++ b/tools/perf/pmu-events/README
@@ -0,0 +1,67 @@
+The contents of this directory allow users to specify PMU events
+in their CPUs by their symbolic names rather than raw event codes
+(see example below).
+
+
+The main program in this directory is 'jevents', which is built and
+executed _before_ the perf binary itself is built.
+
+The 'jevents' program tries to locate and process JSON files in the directory
+tree tools/perf/pmu-events/arch/xxx. 
+
+	- Regular files with .json extension in the name are assumed to be
+	  JSON files.
+
+	- Regular files with base name starting with 'mapfile' are assumed to
+	  be CSV files that map a specific CPU to its set of PMU events.
+
+Directories are traversed but all other files are ignored.
+
+Using the JSON files and the mapfile, 'jevents' generates a C source file,
+'pmu-events.c', which encodes the two sets of tables:
+
+	- Set of 'PMU events tables' for all known CPUs in the architecture
+
+	- A 'mapping table' that maps a CPU to its 'PMU events table'
+
+The file 'pmu-events.h' has an extern declaration for the mapping table, and
+the generated 'pmu-events.c' defines this table.
+
+After 'pmu-events.c' is generated, it is compiled and the resulting
+'pmu-events.o' is added to 'libperf.a', which is then used by perf to process
+PMU event aliases, e.g.:
+
+	$ perf stat -e pm_1plus_ppc_cmpl sleep 1
+
+where pm_1plus_ppc_cmpl is a Power8 PMU event.
+
+In case of errors when processing files in the tools/perf/pmu-events/arch
+directory, 'jevents' tries to create an empty mapping table to allow the
+perf build to succeed even if the PMU event aliases cannot be used.
+
+However, some errors in processing may still cause the perf build to fail.
+
+The mapfile format is expected to be:
+
+	VFM,Version,JSON_file_path_name,Type
+
+where:
+	Comma:
+		is the required field delimiter.
+
+	VFM:
+		represents vendor, family, model of the CPU. Architectures
+		can use a delimiter other than comma to further separate the
+		fields if they so choose. Architectures should implement the
+		function arch_pmu_events_match_cpu() and can use the
+		VFM, Version and Type fields to uniquely identify a CPU.
+
+	Version:
+		is the CPU version (PVR in case of Powerpc)
+
+	JSON_file_path_name:
+		is the pathname for the JSON file, relative to the directory
+		containing the mapfile.
+
+	Type:
+		indicates whether the events are "core" or "uncore" events.
diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
new file mode 100644
index 0000000..3afa6e9
--- /dev/null
+++ b/tools/perf/pmu-events/jevents.c
@@ -0,0 +1,700 @@
+#define  _XOPEN_SOURCE 500	/* needed for nftw() */
+
+/* Parse event JSON files */
+
+/*
+ * Copyright (c) 2014, Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
+ * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+ * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <string.h>
+#include <ctype.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <libgen.h>
+#include <dirent.h>
+#include <sys/utsname.h>
+#include <sys/time.h>			/* getrlimit */
+#include <sys/resource.h>		/* getrlimit */
+#include <ftw.h>
+#include <sys/stat.h>
+#include "jsmn.h"
+#include "json.h"
+#include "jevents.h"
+
+#ifndef  __maybe_unused
+#define __maybe_unused                  __attribute__((unused))
+#endif
+
+int verbose = 1;
+
+int eprintf(int level, int var, const char *fmt, ...)
+{
+
+	int ret;
+	va_list args;
+
+	if (var < level)
+		return 0;
+
+	va_start(args, fmt);
+
+	ret = vfprintf(stderr, fmt, args);
+
+	va_end(args);
+
+	return ret;
+}
+
+__attribute__((weak)) char *get_cpu_str(void)
+{
+	return NULL;
+}
+
+static void addfield(char *map, char **dst, const char *sep,
+		     const char *a, jsmntok_t *bt)
+{
+	unsigned len = strlen(a) + 1 + strlen(sep);
+	int olen = *dst ? strlen(*dst) : 0;
+	int blen = bt ? json_len(bt) : 0;
+	char *out;
+
+	out = realloc(*dst, len + olen + blen);
+	if (!out) {
+		/* Don't add field in this case */
+		return;
+	}
+	*dst = out;
+
+	if (!olen)
+		*(*dst) = 0;
+	else
+		strcat(*dst, sep);
+	strcat(*dst, a);
+	if (bt)
+		strncat(*dst, map + bt->start, blen);
+}
+
+static void fixname(char *s)
+{
+	for (; *s; s++)
+		*s = tolower(*s);
+}
+
+static void fixdesc(char *s)
+{
+	char *e = s + strlen(s);
+
+	/* Remove trailing dots that look ugly in perf list */
+	--e;
+	while (e >= s && isspace(*e))
+		--e;
+	if (*e == '.')
+		*e = 0;
+}
+
+static struct msrmap {
+	const char *num;
+	const char *pname;
+} msrmap[] = {
+	{ "0x3F6", "ldlat=" },
+	{ "0x1A6", "offcore_rsp=" },
+	{ "0x1A7", "offcore_rsp=" },
+	{ NULL, NULL }
+};
+
+static struct field {
+	const char *field;
+	const char *kernel;
+} fields[] = {
+	{ "EventCode",	"event=" },
+	{ "UMask",	"umask=" },
+	{ "CounterMask", "cmask=" },
+	{ "Invert",	"inv=" },
+	{ "AnyThread",	"any=" },
+	{ "EdgeDetect",	"edge=" },
+	{ "SampleAfterValue", "period=" },
+	{ NULL, NULL }
+};
+
+static void cut_comma(char *map, jsmntok_t *newval)
+{
+	int i;
+
+	/* Cut off everything after comma */
+	for (i = newval->start; i < newval->end; i++) {
+		if (map[i] == ',')
+			newval->end = i;
+	}
+}
+
+static int match_field(char *map, jsmntok_t *field, int nz,
+		       char **event, jsmntok_t *val)
+{
+	struct field *f;
+	jsmntok_t newval = *val;
+
+	for (f = fields; f->field; f++)
+		if (json_streq(map, field, f->field) && nz) {
+			cut_comma(map, &newval);
+			addfield(map, event, ",", f->kernel, &newval);
+			return 1;
+		}
+	return 0;
+}
+
+static struct msrmap *lookup_msr(char *map, jsmntok_t *val)
+{
+	jsmntok_t newval = *val;
+	static bool warned;
+	int i;
+
+	cut_comma(map, &newval);
+	for (i = 0; msrmap[i].num; i++)
+		if (json_streq(map, &newval, msrmap[i].num))
+			return &msrmap[i];
+	if (!warned) {
+		warned = true;
+		pr_err("Unknown MSR in event file %.*s\n",
+			json_len(val), map + val->start);
+	}
+	return NULL;
+}
+
+#define EXPECT(e, t, m) do { if (!(e)) {			\
+	jsmntok_t *loc = (t);					\
+	if (!(t)->start && (t) > tokens)			\
+		loc = (t) - 1;					\
+		pr_err("%s:%d: " m ", got %s\n", fn,		\
+			json_line(map, loc),			\
+			json_name(t));				\
+	goto out_free;						\
+} } while (0)
+
+static void print_events_table_prefix(FILE *fp, const char *tblname)
+{
+	fprintf(fp, "struct pmu_event %s[] = {\n", tblname);
+}
+
+static int print_events_table_entry(void *data, char *name, char *event,
+				    char *desc)
+{
+	FILE *outfp = data;
+	/*
+	 * TODO: Remove formatting chars after debugging to reduce
+	 *	 string lengths.
+	 */
+	fprintf(outfp, "{\n");
+
+	fprintf(outfp, "\t.name = \"%s\",\n", name);
+	fprintf(outfp, "\t.event = \"%s\",\n", event);
+	fprintf(outfp, "\t.desc = \"%s\",\n", desc);
+
+	fprintf(outfp, "},\n");
+
+	return 0;
+}
+
+static void print_events_table_suffix(FILE *outfp)
+{
+	fprintf(outfp, "{\n");
+
+	fprintf(outfp, "\t.name = 0,\n");
+	fprintf(outfp, "\t.event = 0,\n");
+	fprintf(outfp, "\t.desc = 0,\n");
+
+	fprintf(outfp, "},\n");
+	fprintf(outfp, "};\n");
+}
+
+/* Call func with each event in the json file */
+int json_events(const char *fn,
+	  int (*func)(void *data, char *name, char *event, char *desc),
+	  void *data)
+{
+	int err = -EIO;
+	size_t size;
+	jsmntok_t *tokens, *tok;
+	int i, j, len;
+	char *map;
+
+	if (!fn)
+		return -ENOENT;
+
+	tokens = parse_json(fn, &map, &size, &len);
+	if (!tokens)
+		return -EIO;
+	EXPECT(tokens->type == JSMN_ARRAY, tokens, "expected top level array");
+	tok = tokens + 1;
+	for (i = 0; i < tokens->size; i++) {
+		char *event = NULL, *desc = NULL, *name = NULL;
+		struct msrmap *msr = NULL;
+		jsmntok_t *msrval = NULL;
+		jsmntok_t *precise = NULL;
+		jsmntok_t *obj = tok++;
+
+		EXPECT(obj->type == JSMN_OBJECT, obj, "expected object");
+		for (j = 0; j < obj->size; j += 2) {
+			jsmntok_t *field, *val;
+			int nz;
+
+			field = tok + j;
+			EXPECT(field->type == JSMN_STRING, tok + j,
+			       "Expected field name");
+			val = tok + j + 1;
+			EXPECT(val->type == JSMN_STRING, tok + j + 1,
+			       "Expected string value");
+
+			nz = !json_streq(map, val, "0");
+			if (match_field(map, field, nz, &event, val)) {
+				/* ok */
+			} else if (json_streq(map, field, "EventName")) {
+				addfield(map, &name, "", "", val);
+			} else if (json_streq(map, field, "BriefDescription")) {
+				addfield(map, &desc, "", "", val);
+				fixdesc(desc);
+			} else if (json_streq(map, field, "PEBS") && nz) {
+				precise = val;
+			} else if (json_streq(map, field, "MSRIndex") && nz) {
+				msr = lookup_msr(map, val);
+			} else if (json_streq(map, field, "MSRValue")) {
+				msrval = val;
+			} else if (json_streq(map, field, "Errata") &&
+				   !json_streq(map, val, "null")) {
+				addfield(map, &desc, ". ",
+					" Spec update: ", val);
+			} else if (json_streq(map, field, "Data_LA") && nz) {
+				addfield(map, &desc, ". ",
+					" Supports address when precise",
+					NULL);
+			}
+			/* ignore unknown fields */
+		}
+		if (precise && !strstr(desc, "(Precise Event)")) {
+			if (json_streq(map, precise, "2"))
+				addfield(map, &desc, " ", "(Must be precise)",
+						NULL);
+			else
+				addfield(map, &desc, " ",
+						"(Precise event)", NULL);
+		}
+		if (msr != NULL)
+			addfield(map, &event, ",", msr->pname, msrval);
+		fixname(name);
+		err = func(data, name, event, desc);
+		free(event);
+		free(desc);
+		free(name);
+		if (err)
+			break;
+		tok += j;
+	}
+	EXPECT(tok - tokens == len, tok, "unexpected objects at end");
+	err = 0;
+out_free:
+	free_json(map, size, tokens);
+	return err;
+}
+
+static char *file_name_to_table_name(char *fname)
+{
+	unsigned int i, j;
+	int c;
+	int n = 1024;		/* use max variable length? */
+	char *tblname;
+	char *p;
+
+	tblname = malloc(n);
+	if (!tblname)
+		return NULL;
+
+	p = basename(fname);
+
+	memset(tblname, 0, n);
+
+	/* Ensure table name starts with an alphabetic char */
+	strcpy(tblname, "pme_");
+
+	n = strlen(fname) + strlen(tblname);
+	n = min(1024, n);
+
+	for (i = 0, j = strlen(tblname); i < strlen(fname); i++, j++) {
+		c = p[i];
+		if (isalnum(c) || c == '_')
+			tblname[j] = c;
+		else if (c == '-')
+			tblname[j] = '_';
+		else if (c == '.') {
+			tblname[j] = '\0';
+			break;
+		} else {
+			printf("Invalid character '%c' in file name %s\n",
+					c, p);
+			free(tblname);
+			return NULL;
+		}
+	}
+
+	return tblname;
+}
+
+static void print_mapping_table_prefix(FILE *outfp)
+{
+	fprintf(outfp, "struct pmu_events_map pmu_events_map[] = {\n");
+}
+
+static void print_mapping_table_suffix(FILE *outfp)
+{
+	/*
+	 * Print the terminating, NULL entry.
+	 */
+	fprintf(outfp, "{\n");
+	fprintf(outfp, "\t.vfm = 0,\n");
+	fprintf(outfp, "\t.version = 0,\n");
+	fprintf(outfp, "\t.type = 0,\n");
+	fprintf(outfp, "\t.table = 0,\n");
+	fprintf(outfp, "},\n");
+
+	/* and finally, the closing curly bracket for the struct */
+	fprintf(outfp, "};\n");
+}
+
+/*
+ * Process the JSON file @json_file and write a table of PMU events found in
+ * the JSON file to the outfp.
+ */
+static int process_json(FILE *outfp, const char *json_file)
+{
+	char *tblname;
+	int err;
+
+	/*
+	 * Drop file name suffix. Replace hyphens with underscores.
+	 * Fail if file name contains any alphanum characters besides
+	 * underscores.
+	 */
+	tblname = file_name_to_table_name((char *)json_file);
+	if (!tblname) {
+		printf("Error determining table name for %s\n", json_file);
+		return -1;
+	}
+
+	print_events_table_prefix(outfp, tblname);
+
+	err = json_events(json_file, print_events_table_entry, outfp);
+
+	if (err) {
+		printf("Translation failed\n");
+		_Exit(1);
+	}
+
+	print_events_table_suffix(outfp);
+
+	return 0;
+}
+
+static int process_mapfile(FILE *outfp, char *fpath)
+{
+	int n = 16384;
+	FILE *mapfp;
+	char *save;
+	char *line, *p;
+	int line_num;
+	char *tblname;
+
+	printf("Processing mapfile %s\n", fpath);
+
+	line = malloc(n);
+	if (!line)
+		return -1;
+
+	mapfp = fopen(fpath, "r");
+	if (!mapfp) {
+		printf("Error %s opening %s\n", strerror(errno), fpath);
+		return -1;
+	}
+
+	print_mapping_table_prefix(outfp);
+
+	line_num = 0;
+	while (1) {
+		char *vfm, *version, *type, *fname;
+
+		line_num++;
+		p = fgets(line, n, mapfp);
+		if (!p)
+			break;
+
+		if (line[0] == '#')
+			continue;
+
+		if (line[strlen(line)-1] != '\n') {
+			/* TODO Deal with lines longer than 16K */
+			printf("Mapfile %s: line %d too long, aborting\n",
+					fpath, line_num);
+			return -1;
+		}
+		line[strlen(line)-1] = '\0';
+
+		vfm = strtok_r(p, ",", &save);
+		version = strtok_r(NULL, ",", &save);
+		fname = strtok_r(NULL, ",", &save);
+		type = strtok_r(NULL, ",", &save);
+
+		tblname = file_name_to_table_name(fname);
+		fprintf(outfp, "{\n");
+		fprintf(outfp, "\t.vfm = \"%s\",\n", vfm);
+		fprintf(outfp, "\t.version = \"%s\",\n", version);
+		fprintf(outfp, "\t.type = \"%s\",\n", type);
+
+		/*
+		 * CHECK: We can't use the type (eg "core") field in the
+		 * table name. For us to do that, we need to somehow tweak
+		 * the other caller of file_name_to_table(), process_json()
+		 * to determine the type. process_json() file has no way
+		 * of knowing these are "core" events unless file name has
+		 * core in it. If filename has core in it, we can safely
+		 * ignore the type field here also.
+		 */
+		fprintf(outfp, "\t.table = %s\n", tblname);
+		fprintf(outfp, "},\n");
+	}
+
+	print_mapping_table_suffix(outfp);
+
+	return 0;
+}
+
+/*
+ * If we fail to locate/process JSON and map files, create a NULL mapping
+ * table. This would at least allow perf to build even if we can't find/use
+ * the aliases.
+ */
+static void create_empty_mapping(const char *output_file)
+{
+	FILE *outfp;
+
+	printf("Creating empty pmu_events_map[] table\n");
+
+	/* Unlink file to clear any partial writes to it */
+	unlink(output_file);
+
+	outfp = fopen(output_file, "a");
+	if (!outfp) {
+		perror("fopen()");
+		_Exit(1);
+	}
+
+	fprintf(outfp, "#include \"pmu-events.h\"\n");
+	print_mapping_table_prefix(outfp);
+	print_mapping_table_suffix(outfp);
+	fclose(outfp);
+}
+
+static int get_maxfds(void)
+{
+	struct rlimit rlim;
+
+	if (getrlimit(RLIMIT_NOFILE, &rlim) == 0)
+		return rlim.rlim_max;
+
+	return 512;
+}
+
+/*
+ * nftw() doesn't let us pass an argument to the processing function,
+ * so use global variables.
+ */
+FILE *eventsfp;
+char *mapfile;
+
+static int process_one_file(const char *fpath, const struct stat *sb,
+				int typeflag __maybe_unused,
+				struct FTW *ftwbuf __maybe_unused)
+{
+	char *bname;
+
+	if (!S_ISREG(sb->st_mode))
+		return 0;
+
+	/*
+	 * Save the mapfile name for now. We will process mapfile
+	 * after processing all JSON files (so we can write out the
+	 * mapping table after all PMU events tables).
+	 *
+	 * Allow for optional .csv on mapfile name.
+	 *
+	 * TODO: Allow for multiple mapfiles? Punt for now.
+	 */
+	bname = basename((char *)fpath);
+	if (!strncmp(bname, "mapfile", 7)) {
+		if (mapfile) {
+			printf("Multiple mapfiles? Using %s, ignoring %s\n",
+					mapfile, fpath);
+		} else {
+			mapfile = strdup(fpath);
+		}
+		return 0;
+	}
+
+	/*
+	 * If the file name does not have a .json extension,
+	 * ignore it. It could be a readme.txt for instance.
+	 */
+	bname += strlen(bname) - 5;
+	if (strncmp(bname, ".json", 5)) {
+		printf("Ignoring file without .json suffix %s\n", fpath);
+		return 0;
+	}
+
+	/*
+	 * Assume all other files are JSON files.
+	 *
+	 * If mapfile refers to 'power7_core.json', we create a table
+	 * named 'power7_core'. Any inconsistencies between the mapfile
+	 * and directory tree could result in build failure due to table
+	 * names not being found.
+	 *
+	 * At least for now, be strict with processing JSON file names.
+	 * i.e. if JSON file name cannot be mapped to C-style table name,
+	 * fail.
+	 */
+	if (process_json(eventsfp, fpath)) {
+		printf("Error processing JSON file %s, ignoring all\n", fpath);
+		return -1;
+	}
+
+	return 0;
+}
+
+#ifndef PATH_MAX
+#define PATH_MAX	4096
+#endif
+
+/*
+ * Starting in directory 'start_dirname', find the "mapfile.csv" and
+ * the set of JSON files for this architecture.
+ *
+ * From each JSON file, create a C-style "PMU events table" from the
+ * JSON file (see struct pmu_event).
+ *
+ * From the mapfile, create a mapping between the CPU revisions and
+ * PMU event tables (see struct pmu_events_map).
+ *
+ * Write out the PMU events tables and the mapping table to pmu-event.c.
+ *
+ * If unable to process the JSON or arch files, create an empty mapping
+ * table so we can continue to build/use  perf even if we cannot use the
+ * PMU event aliases.
+ */
+int main(int argc, char *argv[])
+{
+	int rc;
+	int flags;
+	int maxfds;
+	const char *arch;
+	struct utsname uts;
+
+	char dirname[PATH_MAX];
+
+	const char *output_file = "pmu-events.c";
+	const char *start_dirname = "arch";
+
+	if (argc > 1)
+		start_dirname = argv[1];
+
+	if (argc > 2)
+		output_file = argv[2];
+
+	unlink(output_file);
+	eventsfp = fopen(output_file, "a");
+	if (!eventsfp) {
+		printf("%s Unable to create required file %s (%s)\n",
+				argv[0], output_file, strerror(errno));
+		_Exit(1);
+	}
+
+	rc = uname(&uts);
+	if (rc < 0) {
+		printf("%s: uname() failed: %s\n", argv[0], strerror(errno));
+		goto empty_map;
+	}
+
+	/* TODO: Add other flavors of machine type here */
+	if (!strcmp(uts.machine, "ppc64"))
+		arch = "powerpc";
+	else if (!strcmp(uts.machine, "i686"))
+		arch = "x86";
+	else if (!strcmp(uts.machine, "x86_64"))
+		arch = "x86";
+	else {
+		printf("%s: Unknown architecture %s\n", argv[0], uts.machine);
+		goto empty_map;
+	}
+
+	/* Include pmu-events.h first */
+	fprintf(eventsfp, "#include \"pmu-events.h\"\n");
+
+	sprintf(dirname, "%s/%s", start_dirname, arch);
+
+	/*
+	 * Treat symlinks of JSON files as regular files for now and create
+	 * separate tables for each symlink (presumably, each symlink refers
+	 * to specific version of the CPU).
+	 *
+	 * TODO: Maybe add another level of mapping if necessary to allow
+	 *	 several processor versions (i.e symlinks) share a table
+	 *	 of PMU events.
+	 */
+	maxfds = get_maxfds();
+	mapfile = NULL;
+	flags = FTW_DEPTH;
+	rc = nftw(dirname, process_one_file, maxfds, flags);
+	if (rc) {
+		printf("%s: Error walking file tree %s\n", argv[0], dirname);
+		goto empty_map;
+	}
+
+	if (!mapfile) {
+		printf("No CPU->JSON mapping?\n");
+		goto empty_map;
+	}
+
+	if (process_mapfile(eventsfp, mapfile)) {
+		printf("Error processing mapfile %s\n", mapfile);
+		goto empty_map;
+	}
+
+	return 0;
+
+empty_map:
+	fclose(eventsfp);
+	create_empty_mapping(output_file);
+	return 0;
+}
diff --git a/tools/perf/pmu-events/jevents.h b/tools/perf/pmu-events/jevents.h
new file mode 100644
index 0000000..996601f
--- /dev/null
+++ b/tools/perf/pmu-events/jevents.h
@@ -0,0 +1,17 @@
+#ifndef JEVENTS_H
+#define JEVENTS_H 1
+
+int json_events(const char *fn,
+		int (*func)(void *data, char *name, char *event, char *desc),
+		void *data);
+char *get_cpu_str(void);
+
+#ifndef min
+#define min(x, y) ({                            \
+	typeof(x) _min1 = (x);                  \
+	typeof(y) _min2 = (y);                  \
+	(void) (&_min1 == &_min2);              \
+	_min1 < _min2 ? _min1 : _min2; })
+#endif
+
+#endif
diff --git a/tools/perf/pmu-events/pmu-events.h b/tools/perf/pmu-events/pmu-events.h
new file mode 100644
index 0000000..a24faef
--- /dev/null
+++ b/tools/perf/pmu-events/pmu-events.h
@@ -0,0 +1,39 @@
+#ifndef PMU_EVENTS_H
+#define PMU_EVENTS_H
+
+/*
+ * Describe each PMU event. Each CPU has a table of these
+ * events.
+ */
+struct pmu_event {
+	const char *name;
+	const char *event;
+	const char *desc;
+};
+
+/*
+ *
+ * Map a CPU to its table of PMU events. The CPU is identified, in
+ * an arch-specific manner, in arch_pmu_events_match_cpu(), by one
+ * or more of the following attributes:
+ *
+ *	vendor, family, model, revision, type
+ *
+ * TODO: Split vfm into individual fields or leave it to architectures
+ *	 to split it with an alternate delimiter like hyphen in the
+ *	 mapfile?
+ */
+struct pmu_events_map {
+	const char *vfm;		/* vendor, family, model */
+	const char *version;
+	const char *type;		/* core, uncore etc */
+	struct pmu_event *table;
+};
+
+/*
+ * Global table mapping each known CPU for the architecture to its
+ * table of PMU-Events.
+ */
+extern struct pmu_events_map pmu_events_map[];
+
+#endif
-- 
1.7.9.5



* [PATCH 3/4] perf: Use pmu_events_map table to create event aliases
  2015-05-20  0:02 [PATCH 0/4] perf: Add support for PMU events in JSON format Sukadev Bhattiprolu
  2015-05-20  0:02 ` [PATCH 1/4] perf: Add jsmn `jasmine' JSON parser Sukadev Bhattiprolu
  2015-05-20  0:02 ` [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file Sukadev Bhattiprolu
@ 2015-05-20  0:02 ` Sukadev Bhattiprolu
  2015-05-20 23:58   ` Andi Kleen
  2015-05-20  0:02 ` [PATCH 4/4] perf: Add power8 PMU events in JSON format Sukadev Bhattiprolu
  3 siblings, 1 reply; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-20  0:02 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: namhyung, linuxppc-dev, linux-kernel

At run time (i.e., when perf is starting up), locate the specific events
table for the current CPU and create event aliases for each of its events.

Use these aliases to parse the user-specified perf events.
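
Architectures provide a strong version of the weak arch_pmu_events_match_cpu()
hook added here. A purely illustrative sketch for an architecture that matches
on the VFM column of its mapfile (get_current_vfm() is a hypothetical helper,
not part of this patchset):

	bool arch_pmu_events_match_cpu(const char *vfm,
					const char *version __maybe_unused,
					const char *type __maybe_unused)
	{
		/* e.g. compare "GenuineIntel-6-1E" against the running CPU */
		char *cur = get_current_vfm();
		bool rc = cur && !strcmp(vfm, cur);

		free(cur);
		return rc;
	}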

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---
 tools/perf/arch/powerpc/util/header.c |   33 +++++++++++
 tools/perf/util/header.h              |    4 +-
 tools/perf/util/pmu.c                 |  104 ++++++++++++++++++++++++++++-----
 3 files changed, 127 insertions(+), 14 deletions(-)

diff --git a/tools/perf/arch/powerpc/util/header.c b/tools/perf/arch/powerpc/util/header.c
index 6c1b8a7..8325012 100644
--- a/tools/perf/arch/powerpc/util/header.c
+++ b/tools/perf/arch/powerpc/util/header.c
@@ -32,3 +32,36 @@ get_cpuid(char *buffer, size_t sz)
 	}
 	return -1;
 }
+
+static char *
+get_cpu_str(void)
+{
+        char *bufp;
+
+        if (asprintf(&bufp, "%.8lx", mfspr(SPRN_PVR)) < 0)
+                bufp = NULL;
+
+        return bufp;
+}
+
+/*
+ * Return TRUE if the CPU identified by @vfm, @version, and @type
+ * matches the current CPU.  vfm refers to [Vendor, Family, Model],
+ *
+ * Return FALSE otherwise.
+ *
+ * For Powerpc, we only compare @version to the processor PVR.
+ */
+bool arch_pmu_events_match_cpu(const char *vfm __maybe_unused,
+				const char *version,
+				const char *type __maybe_unused)
+{
+	char *cpustr;
+	bool rc;
+
+	cpustr = get_cpu_str();
+	rc = !strcmp(version, cpustr);
+	free(cpustr);
+
+	return rc;
+}
diff --git a/tools/perf/util/header.h b/tools/perf/util/header.h
index 3bb90ac..207c5b8 100644
--- a/tools/perf/util/header.h
+++ b/tools/perf/util/header.h
@@ -8,7 +8,6 @@
 #include <linux/types.h>
 #include "event.h"
 
-
 enum {
 	HEADER_RESERVED		= 0,	/* always cleared */
 	HEADER_FIRST_FEATURE	= 1,
@@ -156,4 +155,7 @@ int write_padded(int fd, const void *bf, size_t count, size_t count_aligned);
  */
 int get_cpuid(char *buffer, size_t sz);
 
+bool arch_pmu_events_match_cpu(const char *vfm, const char *version, 
+				const char *type);
+
 #endif /* __PERF_HEADER_H */
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 4841167..7665f0f 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -10,7 +10,9 @@
 #include "util.h"
 #include "pmu.h"
 #include "parse-events.h"
+#include "pmu-events/pmu-events.h"	// Move to global file???
 #include "cpumap.h"
+#include "header.h"
 
 struct perf_pmu_format {
 	char *name;
@@ -198,17 +200,11 @@ static int perf_pmu__parse_snapshot(struct perf_pmu_alias *alias,
 	return 0;
 }
 
-static int perf_pmu__new_alias(struct list_head *list, char *dir, char *name, FILE *file)
+static int __perf_pmu__new_alias(struct list_head *list, char *name, char *dir, char *desc __maybe_unused, char *val)
 {
 	struct perf_pmu_alias *alias;
-	char buf[256];
 	int ret;
 
-	ret = fread(buf, 1, sizeof(buf), file);
-	if (ret == 0)
-		return -EINVAL;
-	buf[ret] = 0;
-
 	alias = malloc(sizeof(*alias));
 	if (!alias)
 		return -ENOMEM;
@@ -218,26 +214,47 @@ static int perf_pmu__new_alias(struct list_head *list, char *dir, char *name, FI
 	alias->unit[0] = '\0';
 	alias->per_pkg = false;
 
-	ret = parse_events_terms(&alias->terms, buf);
+	ret = parse_events_terms(&alias->terms, val);
 	if (ret) {
+		pr_err("Cannot parse alias %s: %d\n", val, ret);
 		free(alias);
 		return ret;
 	}
 
 	alias->name = strdup(name);
+	if (dir) {
+		/*
+		 * load unit name and scale if available
+		 */
+		perf_pmu__parse_unit(alias, dir, name);
+		perf_pmu__parse_scale(alias, dir, name);
+		perf_pmu__parse_per_pkg(alias, dir, name);
+		perf_pmu__parse_snapshot(alias, dir, name);
+	}
+
 	/*
-	 * load unit name and scale if available
+	 * TODO: pickup description from Andi's patchset
 	 */
-	perf_pmu__parse_unit(alias, dir, name);
-	perf_pmu__parse_scale(alias, dir, name);
-	perf_pmu__parse_per_pkg(alias, dir, name);
-	perf_pmu__parse_snapshot(alias, dir, name);
+	//alias->desc = desc ? strdpu(desc) : NULL;
 
 	list_add_tail(&alias->list, list);
 
 	return 0;
 }
 
+static int perf_pmu__new_alias(struct list_head *list, char *dir, char *name, FILE *file)
+{
+	char buf[256];
+	int ret;
+
+	ret = fread(buf, 1, sizeof(buf), file);
+	if (ret == 0)
+		return -EINVAL;
+	buf[ret] = 0;
+	
+	return __perf_pmu__new_alias(list, name, dir, NULL, buf);
+}
+
 static inline bool pmu_alias_info_file(char *name)
 {
 	size_t len;
@@ -435,6 +452,65 @@ perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused)
 	return NULL;
 }
 
+/*
+ * Return TRUE if the CPU identified by @vfm, @version, and @type
+ * matches the current CPU. vfm refers to [Vendor, Family, Model].
+ *
+ * Return FALSE otherwise.
+ *
+ * Each architecture can choose which subset of these attributes it
+ * needs to compare/identify a CPU.
+ */
+bool __attribute__((weak))
+arch_pmu_events_match_cpu(const char *vfm __maybe_unused,
+			const char *version __maybe_unused,
+			const char *type __maybe_unused)
+{
+	return false;
+}
+
+/*
+ * From the pmu_events_map, find the table of PMU events that corresponds
+ * to the currently running CPU. Then, add all PMU events from that table
+ * as aliases.
+ */
+static int pmu_add_cpu_aliases(void *data)
+{
+	struct list_head *head = (struct list_head *)data;
+	int i;
+	struct pmu_events_map *map;
+	struct pmu_event *pe;
+
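+	/* pmu_events_map[] is terminated by an entry with a NULL table */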
+	i = 0;
+	while (1) {
+		map = &pmu_events_map[i++];
+
+		if (!map->table)
+			return 0;
+
+		if (arch_pmu_events_match_cpu(map->vfm, map->version,
+						map->type))
+			break;
+	}
+
+	/*
+	 * Found a matching PMU events table. Create aliases
+	 */
+	i = 0;
+	while (1) {
+		pe = &map->table[i++];
+		if (!pe->name)
+			break;
+
+		/* need type casts to override 'const' */
+		__perf_pmu__new_alias(head, (char *)pe->name, NULL, 
+				(char *)pe->desc, (char *)pe->event);
+	}
+
+	return 0;
+}
+
 static struct perf_pmu *pmu_lookup(const char *name)
 {
 	struct perf_pmu *pmu;
@@ -453,6 +529,8 @@ static struct perf_pmu *pmu_lookup(const char *name)
 	if (pmu_aliases(name, &aliases))
 		return NULL;
 
+	/* Event aliases from the JSON-derived tables apply only to the core ("cpu") PMU */
+	if (!strcmp(name, "cpu"))
+		(void)pmu_add_cpu_aliases(&aliases);
 	if (pmu_type(name, &type))
 		return NULL;
 
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 4/4] perf: Add power8 PMU events in JSON format
  2015-05-20  0:02 [PATCH 0/4] perf: Add support for PMU events in JSON format Sukadev Bhattiprolu
                   ` (2 preceding siblings ...)
  2015-05-20  0:02 ` [PATCH 3/4] perf: Use pmu_events_map table to create event aliases Sukadev Bhattiprolu
@ 2015-05-20  0:02 ` Sukadev Bhattiprolu
  2015-05-27 13:59   ` Namhyung Kim
  3 siblings, 1 reply; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-20  0:02 UTC (permalink / raw)
  To: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras
  Cc: namhyung, linuxppc-dev, linux-kernel

The power8.json and 004d0100-core.json files describe the PMU events in
the Power8 processor.

The jevents program from the prior patches uses these JSON files to
create tables, which perf then uses to build aliases for the PMU
events. This in turn allows users to specify these PMU events by name:

	$ perf stat -e pm_1plus_ppc_cmpl sleep 1
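
To make this concrete (an illustrative sketch, not generated output):
the first entry in 004d0100-core.json below, PM_1LPAR_CYC with
EventCode 0x1f05e, would be expected to end up as a table entry built
from the struct pmu_event fields (.name, .event, .desc) that patch 3
consumes, roughly along these lines. The exact array name, name casing
and event-term encoding are whatever jevents emits, and the
description strings are carried in .desc:

	{
		.name = "pm_1lpar_cyc",
		.event = "event=0x1f05e",
	},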

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---
 .../pmu-events/arch/powerpc/004d0100-core.json     | 5766 ++++++++++++++++++++
 tools/perf/pmu-events/arch/powerpc/mapfile.csv     |    1 +
 tools/perf/pmu-events/arch/powerpc/power8.json     | 5766 ++++++++++++++++++++
 3 files changed, 11533 insertions(+)
 create mode 100644 tools/perf/pmu-events/arch/powerpc/004d0100-core.json
 create mode 100644 tools/perf/pmu-events/arch/powerpc/mapfile.csv
 create mode 100644 tools/perf/pmu-events/arch/powerpc/power8.json

diff --git a/tools/perf/pmu-events/arch/powerpc/004d0100-core.json b/tools/perf/pmu-events/arch/powerpc/004d0100-core.json
new file mode 100644
index 0000000..1511138
--- /dev/null
+++ b/tools/perf/pmu-events/arch/powerpc/004d0100-core.json
@@ -0,0 +1,5766 @@
+[
+  {
+    "EventCode": "0x1f05e",
+    "EventName": "PM_1LPAR_CYC",
+    "PEBS" : "1",
+    "Umask": "0x01",
+    "MSRIndex": "0",
+    "MSRValue": "1",
+    "BriefDescription": "Number of cycles in single lpar mode. All threads in the core are assigned to the same lpar (Precise Event),",
+    "PublicDescription": "Number of cycles in single lpar mode. (Precise Event),"
+  },
+  {
+    "EventCode": "0x100f2",
+    "EventName": "PM_1PLUS_PPC_CMPL",
+    "BriefDescription": "1 or more ppc insts finished,",
+    "PublicDescription": "1 or more ppc insts finished (completed).,"
+  },
+  {
+    "EventCode": "0x400f2",
+    "EventName": "PM_1PLUS_PPC_DISP",
+    "BriefDescription": "Cycles at least one Instr Dispatched,",
+    "PublicDescription": "Cycles at least one Instr Dispatched. Could be a group with only microcode. Issue HW016521,"
+  },
+  {
+    "EventCode": "0x2006e",
+    "EventName": "PM_2LPAR_CYC",
+    "BriefDescription": "Cycles in 2-lpar mode. Threads 0-3 belong to Lpar0 and threads 4-7 belong to Lpar1,",
+    "PublicDescription": "Number of cycles in 2 lpar mode.,"
+  },
+  {
+    "EventCode": "0x4e05e",
+    "EventName": "PM_4LPAR_CYC",
+    "BriefDescription": "Number of cycles in 4 LPAR mode. Threads 0-1 belong to lpar0, threads 2-3 belong to lpar1, threads 4-5 belong to lpar2, and threads 6-7 belong to lpar3,",
+    "PublicDescription": "Number of cycles in 4 LPAR mode.,"
+  },
+  {
+    "EventCode": "0x610050",
+    "EventName": "PM_ALL_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types ( demand load,data,inst prefetch,inst fetch,xlate (I or d),"
+  },
+  {
+    "EventCode": "0x520050",
+    "EventName": "PM_ALL_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x620052",
+    "EventName": "PM_ALL_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x610052",
+    "EventName": "PM_ALL_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x610054",
+    "EventName": "PM_ALL_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x640052",
+    "EventName": "PM_ALL_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x630050",
+    "EventName": "PM_ALL_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x630052",
+    "EventName": "PM_ALL_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x640050",
+    "EventName": "PM_ALL_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x100fa",
+    "EventName": "PM_ANY_THRD_RUN_CYC",
+    "BriefDescription": "One of threads in run_cycles,",
+    "PublicDescription": "Any thread in run_cycles (was one thread in run_cycles).,"
+  },
+  {
+    "EventCode": "0x2505e",
+    "EventName": "PM_BACK_BR_CMPL",
+    "BriefDescription": "Branch instruction completed with a target address less than current instruction address,",
+    "PublicDescription": "Branch instruction completed with a target address less than current instruction address.,"
+  },
+  {
+    "EventCode": "0x4082",
+    "EventName": "PM_BANK_CONFLICT",
+    "BriefDescription": "Read blocked due to interleave conflict. The ifar logic will detect an interleave conflict and kill the data that was read that cycle.,",
+    "PublicDescription": "Read blocked due to interleave conflict. The ifar logic will detect an interleave conflict and kill the data that was read that cycle.,"
+  },
+  {
+    "EventCode": "0x10068",
+    "EventName": "PM_BRU_FIN",
+    "BriefDescription": "Branch Instruction Finished,",
+    "PublicDescription": "Branch Instruction Finished .,"
+  },
+  {
+    "EventCode": "0x20036",
+    "EventName": "PM_BR_2PATH",
+    "BriefDescription": "two path branch,",
+    "PublicDescription": "two path branch.,"
+  },
+  {
+    "EventCode": "0x5086",
+    "EventName": "PM_BR_BC_8",
+    "BriefDescription": "Pairable BC+8 branch that has not been converted to a Resolve Finished in the BRU pipeline,",
+    "PublicDescription": "Pairable BC+8 branch that has not been converted to a Resolve Finished in the BRU pipeline,"
+  },
+  {
+    "EventCode": "0x5084",
+    "EventName": "PM_BR_BC_8_CONV",
+    "BriefDescription": "Pairable BC+8 branch that was converted to a Resolve Finished in the BRU pipeline.,",
+    "PublicDescription": "Pairable BC+8 branch that was converted to a Resolve Finished in the BRU pipeline.,"
+  },
+  {
+    "EventCode": "0x40060",
+    "EventName": "PM_BR_CMPL",
+    "BriefDescription": "Branch Instruction completed,",
+    "PublicDescription": "Branch Instruction completed.,"
+  },
+  {
+    "EventCode": "0x40ac",
+    "EventName": "PM_BR_MPRED_CCACHE",
+    "BriefDescription": "Conditional Branch Completed that was Mispredicted due to the Count Cache Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed that was Mispredicted due to the Count Cache Target Prediction,"
+  },
+  {
+    "EventCode": "0x400f6",
+    "EventName": "PM_BR_MPRED_CMPL",
+    "BriefDescription": "Number of Branch Mispredicts,",
+    "PublicDescription": "Number of Branch Mispredicts.,"
+  },
+  {
+    "EventCode": "0x40b8",
+    "EventName": "PM_BR_MPRED_CR",
+    "BriefDescription": "Conditional Branch Completed that was Mispredicted due to the BHT Direction Prediction (taken/not taken).,",
+    "PublicDescription": "Conditional Branch Completed that was Mispredicted due to the BHT Direction Prediction (taken/not taken).,"
+  },
+  {
+    "EventCode": "0x40ae",
+    "EventName": "PM_BR_MPRED_LSTACK",
+    "BriefDescription": "Conditional Branch Completed that was Mispredicted due to the Link Stack Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed that was Mispredicted due to the Link Stack Target Prediction,"
+  },
+  {
+    "EventCode": "0x40ba",
+    "EventName": "PM_BR_MPRED_TA",
+    "BriefDescription": "Conditional Branch Completed that was Mispredicted due to the Target Address Prediction from the Count Cache or Link Stack. Only XL-form branches that resolved Taken set this event.,",
+    "PublicDescription": "Conditional Branch Completed that was Mispredicted due to the Target Address Prediction from the Count Cache or Link Stack. Only XL-form branches that resolved Taken set this event.,"
+  },
+  {
+    "EventCode": "0x10138",
+    "EventName": "PM_BR_MRK_2PATH",
+    "BriefDescription": "marked two path branch,",
+    "PublicDescription": "marked two path branch.,"
+  },
+  {
+    "EventCode": "0x409c",
+    "EventName": "PM_BR_PRED_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 (1st branch in group) in which the HW predicted the Direction or Target,",
+    "PublicDescription": "Conditional Branch Completed on BR0 (1st branch in group) in which the HW predicted the Direction or Target,"
+  },
+  {
+    "EventCode": "0x409e",
+    "EventName": "PM_BR_PRED_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 (2nd branch in group) in which the HW predicted the Direction or Target. Note: BR1 can only be used in Single Thread Mode. In all of the SMT modes, only one branch can complete, thus BR1 is unused.,",
+    "PublicDescription": "Conditional Branch Completed on BR1 (2nd branch in group) in which the HW predicted the Direction or Target. Note: BR1 can only be used in Single Thread Mode. In all of the SMT modes, only one branch can complete, thus BR1 is unused.,"
+  },
+  {
+    "EventCode": "0x489c",
+    "EventName": "PM_BR_PRED_BR_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) OR if_pc_br0_br_pred(1).,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x40a4",
+    "EventName": "PM_BR_PRED_CCACHE_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 that used the Count Cache for Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed on BR0 that used the Count Cache for Target Prediction,"
+  },
+  {
+    "EventCode": "0x40a6",
+    "EventName": "PM_BR_PRED_CCACHE_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 that used the Count Cache for Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed on BR1 that used the Count Cache for Target Prediction,"
+  },
+  {
+    "EventCode": "0x48a4",
+    "EventName": "PM_BR_PRED_CCACHE_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) AND if_pc_br0_pred_type.,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x40b0",
+    "EventName": "PM_BR_PRED_CR_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and branches,",
+    "PublicDescription": "Conditional Branch Completed on BR0 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and bra,"
+  },
+  {
+    "EventCode": "0x40b2",
+    "EventName": "PM_BR_PRED_CR_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and branches,",
+    "PublicDescription": "Conditional Branch Completed on BR1 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and bra,"
+  },
+  {
+    "EventCode": "0x48b0",
+    "EventName": "PM_BR_PRED_CR_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(1)='1'.,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x40a8",
+    "EventName": "PM_BR_PRED_LSTACK_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 that used the Link Stack for Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed on BR0 that used the Link Stack for Target Prediction,"
+  },
+  {
+    "EventCode": "0x40aa",
+    "EventName": "PM_BR_PRED_LSTACK_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 that used the Link Stack for Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed on BR1 that used the Link Stack for Target Prediction,"
+  },
+  {
+    "EventCode": "0x48a8",
+    "EventName": "PM_BR_PRED_LSTACK_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) AND (not if_pc_br0_pred_type).,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x40b4",
+    "EventName": "PM_BR_PRED_TA_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 that had its target address predicted. Only XL-form branches set this event.,",
+    "PublicDescription": "Conditional Branch Completed on BR0 that had its target address predicted. Only XL-form branches set this event.,"
+  },
+  {
+    "EventCode": "0x40b6",
+    "EventName": "PM_BR_PRED_TA_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 that had its target address predicted. Only XL-form branches set this event.,",
+    "PublicDescription": "Conditional Branch Completed on BR1 that had its target address predicted. Only XL-form branches set this event.,"
+  },
+  {
+    "EventCode": "0x48b4",
+    "EventName": "PM_BR_PRED_TA_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0)='1'.,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x200fa",
+    "EventName": "PM_BR_TAKEN_CMPL",
+    "BriefDescription": "New event for Branch Taken,",
+    "PublicDescription": "Branch Taken.,"
+  },
+  {
+    "EventCode": "0x40a0",
+    "EventName": "PM_BR_UNCOND_BR0",
+    "BriefDescription": "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.,",
+    "PublicDescription": "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.,"
+  },
+  {
+    "EventCode": "0x40a2",
+    "EventName": "PM_BR_UNCOND_BR1",
+    "BriefDescription": "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.,",
+    "PublicDescription": "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.,"
+  },
+  {
+    "EventCode": "0x48a0",
+    "EventName": "PM_BR_UNCOND_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred=00 AND if_pc_br0_completed.,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x3094",
+    "EventName": "PM_CASTOUT_ISSUED",
+    "BriefDescription": "Castouts issued,",
+    "PublicDescription": "Castouts issued,"
+  },
+  {
+    "EventCode": "0x3096",
+    "EventName": "PM_CASTOUT_ISSUED_GPR",
+    "BriefDescription": "Castouts issued GPR,",
+    "PublicDescription": "Castouts issued GPR,"
+  },
+  {
+    "EventCode": "0x10050",
+    "EventName": "PM_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types ( demand load,data,inst prefetch,inst fetch,xlate (I or d).,"
+  },
+  {
+    "EventCode": "0x2090",
+    "EventName": "PM_CLB_HELD",
+    "BriefDescription": "CLB Hold: Any Reason,",
+    "PublicDescription": "CLB Hold: Any Reason,"
+  },
+  {
+    "EventCode": "0x4000a",
+    "EventName": "PM_CMPLU_STALL",
+    "BriefDescription": "Completion stall,",
+    "PublicDescription": "Completion stall.,"
+  },
+  {
+    "EventCode": "0x4d018",
+    "EventName": "PM_CMPLU_STALL_BRU",
+    "BriefDescription": "Completion stall due to a Branch Unit,",
+    "PublicDescription": "Completion stall due to a Branch Unit.,"
+  },
+  {
+    "EventCode": "0x2d018",
+    "EventName": "PM_CMPLU_STALL_BRU_CRU",
+    "BriefDescription": "Completion stall due to IFU,",
+    "PublicDescription": "Completion stall due to IFU.,"
+  },
+  {
+    "EventCode": "0x30026",
+    "EventName": "PM_CMPLU_STALL_COQ_FULL",
+    "BriefDescription": "Completion stall due to CO q full,",
+    "PublicDescription": "Completion stall due to CO q full.,"
+  },
+  {
+    "EventCode": "0x2c012",
+    "EventName": "PM_CMPLU_STALL_DCACHE_MISS",
+    "BriefDescription": "Completion stall by Dcache miss,",
+    "PublicDescription": "Completion stall by Dcache miss.,"
+  },
+  {
+    "EventCode": "0x2c018",
+    "EventName": "PM_CMPLU_STALL_DMISS_L21_L31",
+    "BriefDescription": "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3),",
+    "PublicDescription": "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3).,"
+  },
+  {
+    "EventCode": "0x2c016",
+    "EventName": "PM_CMPLU_STALL_DMISS_L2L3",
+    "BriefDescription": "Completion stall by Dcache miss which resolved in L2/L3,",
+    "PublicDescription": "Completion stall by Dcache miss which resolved in L2/L3.,"
+  },
+  {
+    "EventCode": "0x4c016",
+    "EventName": "PM_CMPLU_STALL_DMISS_L2L3_CONFLICT",
+    "BriefDescription": "Completion stall due to cache miss that resolves in the L2 or L3 with a conflict,",
+    "PublicDescription": "Completion stall due to cache miss resolving in core's L2/L3 with a conflict.,"
+  },
+  {
+    "EventCode": "0x4c01a",
+    "EventName": "PM_CMPLU_STALL_DMISS_L3MISS",
+    "BriefDescription": "Completion stall due to cache miss resolving missed the L3,",
+    "PublicDescription": "Completion stall due to cache miss resolving missed the L3.,"
+  },
+  {
+    "EventCode": "0x4c018",
+    "EventName": "PM_CMPLU_STALL_DMISS_LMEM",
+    "BriefDescription": "Completion stall due to cache miss that resolves in local memory,",
+    "PublicDescription": "Completion stall due to cache miss resolving in core's Local Memory.,"
+  },
+  {
+    "EventCode": "0x2c01c",
+    "EventName": "PM_CMPLU_STALL_DMISS_REMOTE",
+    "BriefDescription": "Completion stall by Dcache miss which resolved from remote chip (cache or memory),",
+    "PublicDescription": "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3).,"
+  },
+  {
+    "EventCode": "0x4c012",
+    "EventName": "PM_CMPLU_STALL_ERAT_MISS",
+    "BriefDescription": "Completion stall due to LSU reject ERAT miss,",
+    "PublicDescription": "Completion stall due to LSU reject ERAT miss.,"
+  },
+  {
+    "EventCode": "0x30038",
+    "EventName": "PM_CMPLU_STALL_FLUSH",
+    "BriefDescription": "completion stall due to flush by own thread,",
+    "PublicDescription": "completion stall due to flush by own thread.,"
+  },
+  {
+    "EventCode": "0x4d016",
+    "EventName": "PM_CMPLU_STALL_FXLONG",
+    "BriefDescription": "Completion stall due to a long latency fixed point instruction,",
+    "PublicDescription": "Completion stall due to a long latency fixed point instruction.,"
+  },
+  {
+    "EventCode": "0x2d016",
+    "EventName": "PM_CMPLU_STALL_FXU",
+    "BriefDescription": "Completion stall due to FXU,",
+    "PublicDescription": "Completion stall due to FXU.,"
+  },
+  {
+    "EventCode": "0x30036",
+    "EventName": "PM_CMPLU_STALL_HWSYNC",
+    "BriefDescription": "completion stall due to hwsync,",
+    "PublicDescription": "completion stall due to hwsync.,"
+  },
+  {
+    "EventCode": "0x4d014",
+    "EventName": "PM_CMPLU_STALL_LOAD_FINISH",
+    "BriefDescription": "Completion stall due to a Load finish,",
+    "PublicDescription": "Completion stall due to a Load finish.,"
+  },
+  {
+    "EventCode": "0x2c010",
+    "EventName": "PM_CMPLU_STALL_LSU",
+    "BriefDescription": "Completion stall by LSU instruction,",
+    "PublicDescription": "Completion stall by LSU instruction.,"
+  },
+  {
+    "EventCode": "0x10036",
+    "EventName": "PM_CMPLU_STALL_LWSYNC",
+    "BriefDescription": "completion stall due to isync/lwsync,",
+    "PublicDescription": "completion stall due to isync/lwsync.,"
+  },
+  {
+    "EventCode": "0x30028",
+    "EventName": "PM_CMPLU_STALL_MEM_ECC_DELAY",
+    "BriefDescription": "Completion stall due to mem ECC delay,",
+    "PublicDescription": "Completion stall due to mem ECC delay.,"
+  },
+  {
+    "EventCode": "0x2e01c",
+    "EventName": "PM_CMPLU_STALL_NO_NTF",
+    "BriefDescription": "Completion stall due to nop,",
+    "PublicDescription": "Completion stall due to nop.,"
+  },
+  {
+    "EventCode": "0x2e01e",
+    "EventName": "PM_CMPLU_STALL_NTCG_FLUSH",
+    "BriefDescription": "Completion stall due to ntcg flush,",
+    "PublicDescription": "Completion stall due to reject (load hit store).,"
+  },
+  {
+    "EventCode": "0x30006",
+    "EventName": "PM_CMPLU_STALL_OTHER_CMPL",
+    "BriefDescription": "Instructions core completed while this tread was stalled,",
+    "PublicDescription": "Instructions core completed while this thread was stalled.,"
+  },
+  {
+    "EventCode": "0x4c010",
+    "EventName": "PM_CMPLU_STALL_REJECT",
+    "BriefDescription": "Completion stall due to LSU reject,",
+    "PublicDescription": "Completion stall due to LSU reject.,"
+  },
+  {
+    "EventCode": "0x2c01a",
+    "EventName": "PM_CMPLU_STALL_REJECT_LHS",
+    "BriefDescription": "Completion stall due to reject (load hit store),",
+    "PublicDescription": "Completion stall due to reject (load hit store).,"
+  },
+  {
+    "EventCode": "0x4c014",
+    "EventName": "PM_CMPLU_STALL_REJ_LMQ_FULL",
+    "BriefDescription": "Completion stall due to LSU reject LMQ full,",
+    "PublicDescription": "Completion stall due to LSU reject LMQ full.,"
+  },
+  {
+    "EventCode": "0x4d010",
+    "EventName": "PM_CMPLU_STALL_SCALAR",
+    "BriefDescription": "Completion stall due to VSU scalar instruction,",
+    "PublicDescription": "Completion stall due to VSU scalar instruction.,"
+  },
+  {
+    "EventCode": "0x2d010",
+    "EventName": "PM_CMPLU_STALL_SCALAR_LONG",
+    "BriefDescription": "Completion stall due to VSU scalar long latency instruction,",
+    "PublicDescription": "Completion stall due to VSU scalar long latency instruction.,"
+  },
+  {
+    "EventCode": "0x2c014",
+    "EventName": "PM_CMPLU_STALL_STORE",
+    "BriefDescription": "Completion stall by stores this includes store agen finishes in pipe LS0/LS1 and store data finishes in LS2/LS3,",
+    "PublicDescription": "Completion stall by stores.,"
+  },
+  {
+    "EventCode": "0x4c01c",
+    "EventName": "PM_CMPLU_STALL_ST_FWD",
+    "BriefDescription": "Completion stall due to store forward,",
+    "PublicDescription": "Completion stall due to store forward.,"
+  },
+  {
+    "EventCode": "0x1001c",
+    "EventName": "PM_CMPLU_STALL_THRD",
+    "BriefDescription": "Completion Stalled due to thread conflict. Group ready to complete but it was another thread's turn,",
+    "PublicDescription": "Completion stall due to thread conflict.,"
+  },
+  {
+    "EventCode": "0x2d014",
+    "EventName": "PM_CMPLU_STALL_VECTOR",
+    "BriefDescription": "Completion stall due to VSU vector instruction,",
+    "PublicDescription": "Completion stall due to VSU vector instruction.,"
+  },
+  {
+    "EventCode": "0x4d012",
+    "EventName": "PM_CMPLU_STALL_VECTOR_LONG",
+    "BriefDescription": "Completion stall due to VSU vector long instruction,",
+    "PublicDescription": "Completion stall due to VSU vector long instruction.,"
+  },
+  {
+    "EventCode": "0x2d012",
+    "EventName": "PM_CMPLU_STALL_VSU",
+    "BriefDescription": "Completion stall due to VSU instruction,",
+    "PublicDescription": "Completion stall due to VSU instruction.,"
+  },
+  {
+    "EventCode": "0x16083",
+    "EventName": "PM_CO0_ALLOC",
+    "BriefDescription": "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x16082",
+    "EventName": "PM_CO0_BUSY",
+    "BriefDescription": "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),"
+  },
+  {
+    "EventCode": "0x3608a",
+    "EventName": "PM_CO_USAGE",
+    "BriefDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 CO machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,",
+    "PublicDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 CO machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,"
+  },
+  {
+    "EventCode": "0x40066",
+    "EventName": "PM_CRU_FIN",
+    "BriefDescription": "IFU Finished a (non-branch) instruction,",
+    "PublicDescription": "IFU Finished a (non-branch) instruction.,"
+  },
+  {
+    "EventCode": "0x1e",
+    "EventName": "PM_CYC",
+    "BriefDescription": "Cycles,",
+    "PublicDescription": "Cycles .,"
+  },
+  {
+    "EventCode": "0x61c050",
+    "EventName": "PM_DATA_ALL_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for either demand loads or data prefetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for a demand load,"
+  },
+  {
+    "EventCode": "0x64c048",
+    "EventName": "PM_DATA_ALL_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c048",
+    "EventName": "PM_DATA_ALL_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c04c",
+    "EventName": "PM_DATA_ALL_FROM_DL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c04c",
+    "EventName": "PM_DATA_ALL_FROM_DMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c042",
+    "EventName": "PM_DATA_ALL_FROM_L2",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c046",
+    "EventName": "PM_DATA_ALL_FROM_L21_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c046",
+    "EventName": "PM_DATA_ALL_FROM_L21_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c04e",
+    "EventName": "PM_DATA_ALL_FROM_L2MISS_MOD",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c040",
+    "EventName": "PM_DATA_ALL_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c040",
+    "EventName": "PM_DATA_ALL_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c040",
+    "EventName": "PM_DATA_ALL_FROM_L2_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c040",
+    "EventName": "PM_DATA_ALL_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c042",
+    "EventName": "PM_DATA_ALL_FROM_L3",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c044",
+    "EventName": "PM_DATA_ALL_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c044",
+    "EventName": "PM_DATA_ALL_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c044",
+    "EventName": "PM_DATA_ALL_FROM_L31_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c046",
+    "EventName": "PM_DATA_ALL_FROM_L31_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c04e",
+    "EventName": "PM_DATA_ALL_FROM_L3MISS_MOD",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c042",
+    "EventName": "PM_DATA_ALL_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c042",
+    "EventName": "PM_DATA_ALL_FROM_L3_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c044",
+    "EventName": "PM_DATA_ALL_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c04c",
+    "EventName": "PM_DATA_ALL_FROM_LL4",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c048",
+    "EventName": "PM_DATA_ALL_FROM_LMEM",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's Memory due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's Memory due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c04c",
+    "EventName": "PM_DATA_ALL_FROM_MEMORY",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c04a",
+    "EventName": "PM_DATA_ALL_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c048",
+    "EventName": "PM_DATA_ALL_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c046",
+    "EventName": "PM_DATA_ALL_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c04a",
+    "EventName": "PM_DATA_ALL_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c04a",
+    "EventName": "PM_DATA_ALL_FROM_RL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c04a",
+    "EventName": "PM_DATA_ALL_FROM_RMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c050",
+    "EventName": "PM_DATA_ALL_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was group pump (prediction=correct) for either demand loads or data prefetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for a demand load,"
+  },
+  {
+    "EventCode": "0x62c052",
+    "EventName": "PM_DATA_ALL_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for either demand loads or data prefetch,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x61c052",
+    "EventName": "PM_DATA_ALL_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for either demand loads or data prefetch,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor a demand load,"
+  },
+  {
+    "EventCode": "0x61c054",
+    "EventName": "PM_DATA_ALL_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for either demand loads or data prefetch,",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumps for a demand load,"
+  },
+  {
+    "EventCode": "0x64c052",
+    "EventName": "PM_DATA_ALL_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for either demand loads or data prefetch,",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor a demand load,"
+  },
+  {
+    "EventCode": "0x63c050",
+    "EventName": "PM_DATA_ALL_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump (prediction=correct) for either demand loads or data prefetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for a demand load,"
+  },
+  {
+    "EventCode": "0x63c052",
+    "EventName": "PM_DATA_ALL_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for either demand loads or data prefetch,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x64c050",
+    "EventName": "PM_DATA_ALL_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for either demand loads or data prefetch,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for a demand load,"
+  },
+  {
+    "EventCode": "0x1c050",
+    "EventName": "PM_DATA_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for a demand load,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for a demand load.,"
+  },
+  {
+    "EventCode": "0x4c048",
+    "EventName": "PM_DATA_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c048",
+    "EventName": "PM_DATA_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c04c",
+    "EventName": "PM_DATA_FROM_DL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c04c",
+    "EventName": "PM_DATA_FROM_DMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c042",
+    "EventName": "PM_DATA_FROM_L2",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c046",
+    "EventName": "PM_DATA_FROM_L21_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c046",
+    "EventName": "PM_DATA_FROM_L21_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x200fe",
+    "EventName": "PM_DATA_FROM_L2MISS",
+    "BriefDescription": "Demand LD - L2 Miss (not L2 hit),",
+    "PublicDescription": "Demand LD - L2 Miss (not L2 hit).,"
+  },
+  {
+    "EventCode": "0x1c04e",
+    "EventName": "PM_DATA_FROM_L2MISS_MOD",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L2 due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c040",
+    "EventName": "PM_DATA_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c040",
+    "EventName": "PM_DATA_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c040",
+    "EventName": "PM_DATA_FROM_L2_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c040",
+    "EventName": "PM_DATA_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1 .,"
+  },
+  {
+    "EventCode": "0x4c042",
+    "EventName": "PM_DATA_FROM_L3",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c044",
+    "EventName": "PM_DATA_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c044",
+    "EventName": "PM_DATA_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c044",
+    "EventName": "PM_DATA_FROM_L31_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c046",
+    "EventName": "PM_DATA_FROM_L31_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x300fe",
+    "EventName": "PM_DATA_FROM_L3MISS",
+    "BriefDescription": "Demand LD - L3 Miss (not L2 hit and not L3 hit),",
+    "PublicDescription": "Demand LD - L3 Miss (not L2 hit and not L3 hit).,"
+  },
+  {
+    "EventCode": "0x4c04e",
+    "EventName": "PM_DATA_FROM_L3MISS_MOD",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c042",
+    "EventName": "PM_DATA_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c042",
+    "EventName": "PM_DATA_FROM_L3_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c044",
+    "EventName": "PM_DATA_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c04c",
+    "EventName": "PM_DATA_FROM_LL4",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c048",
+    "EventName": "PM_DATA_FROM_LMEM",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's Memory due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's Memory due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x400fe",
+    "EventName": "PM_DATA_FROM_MEM",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a demand load,",
+    "PublicDescription": "Data cache reload from memory (including L4).,"
+  },
+  {
+    "EventCode": "0x2c04c",
+    "EventName": "PM_DATA_FROM_MEMORY",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c04a",
+    "EventName": "PM_DATA_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c048",
+    "EventName": "PM_DATA_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c046",
+    "EventName": "PM_DATA_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c04a",
+    "EventName": "PM_DATA_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c04a",
+    "EventName": "PM_DATA_FROM_RL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c04a",
+    "EventName": "PM_DATA_FROM_RMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c050",
+    "EventName": "PM_DATA_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was group pump (prediction=correct) for a demand load,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for a demand load.,"
+  },
+  {
+    "EventCode": "0x2c052",
+    "EventName": "PM_DATA_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for a demand load,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x1c052",
+    "EventName": "PM_DATA_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for a demand load,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor a demand load.,"
+  },
+  {
+    "EventCode": "0x1c054",
+    "EventName": "PM_DATA_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for a demand load,",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumps for a demand load.,"
+  },
+  {
+    "EventCode": "0x4c052",
+    "EventName": "PM_DATA_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for a demand load,",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor a demand load.,"
+  },
+  {
+    "EventCode": "0x3c050",
+    "EventName": "PM_DATA_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump (prediction=correct) for a demand load,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for a demand load.,"
+  },
+  {
+    "EventCode": "0x3c052",
+    "EventName": "PM_DATA_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for a demand load,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x4c050",
+    "EventName": "PM_DATA_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for a demand load,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for a demand load.,"
+  },
+  {
+    "EventCode": "0x3001a",
+    "EventName": "PM_DATA_TABLEWALK_CYC",
+    "BriefDescription": "Tablwalk Cycles (could be 1 or 2 active),",
+    "PublicDescription": "Data Tablewalk Active.,"
+  },
+  {
+    "EventCode": "0xe0bc",
+    "EventName": "PM_DC_COLLISIONS",
+    "BriefDescription": "DATA Cache collisions,",
+    "PublicDescription": "DATA Cache collisions42,"
+  },
+  {
+    "EventCode": "0x1e050",
+    "EventName": "PM_DC_PREF_STREAM_ALLOC",
+    "BriefDescription": "Stream marked valid. The stream could have been allocated through the hardware prefetch mechanism or through software. This is combined ls0 and ls1,",
+    "PublicDescription": "Stream marked valid. The stream could have been allocated through the hardware prefetch mechanism or through software. This is combined ls0 and ls1.,"
+  },
+  {
+    "EventCode": "0x2e050",
+    "EventName": "PM_DC_PREF_STREAM_CONF",
+    "BriefDescription": "A demand load referenced a line in an active prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software. Combine up + down,",
+    "PublicDescription": "A demand load referenced a line in an active prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software. Combine up + down.,"
+  },
+  {
+    "EventCode": "0x4e050",
+    "EventName": "PM_DC_PREF_STREAM_FUZZY_CONF",
+    "BriefDescription": "A demand load referenced a line in an active fuzzy prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software.Fuzzy stream confirm (out of order effects, or pf cant keep up),",
+    "PublicDescription": "A demand load referenced a line in an active fuzzy prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software.Fuzzy stream confirm (out of order effects, or pf cant keep up).,"
+  },
+  {
+    "EventCode": "0x3e050",
+    "EventName": "PM_DC_PREF_STREAM_STRIDED_CONF",
+    "BriefDescription": "A demand load referenced a line in an active strided prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software.,",
+    "PublicDescription": "A demand load referenced a line in an active strided prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software..,"
+  },
+  {
+    "EventCode": "0x4c054",
+    "EventName": "PM_DERAT_MISS_16G",
+    "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 16G,",
+    "PublicDescription": "Data ERAT Miss (Data TLB Access) page size 16G.,"
+  },
+  {
+    "EventCode": "0x3c054",
+    "EventName": "PM_DERAT_MISS_16M",
+    "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 16M,",
+    "PublicDescription": "Data ERAT Miss (Data TLB Access) page size 16M.,"
+  },
+  {
+    "EventCode": "0x1c056",
+    "EventName": "PM_DERAT_MISS_4K",
+    "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 4K,",
+    "PublicDescription": "Data ERAT Miss (Data TLB Access) page size 4K.,"
+  },
+  {
+    "EventCode": "0x2c054",
+    "EventName": "PM_DERAT_MISS_64K",
+    "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 64K,",
+    "PublicDescription": "Data ERAT Miss (Data TLB Access) page size 64K.,"
+  },
+  {
+    "EventCode": "0xb0ba",
+    "EventName": "PM_DFU",
+    "BriefDescription": "Finish DFU (all finish),",
+    "PublicDescription": "Finish DFU (all finish),"
+  },
+  {
+    "EventCode": "0xb0be",
+    "EventName": "PM_DFU_DCFFIX",
+    "BriefDescription": "Convert from fixed opcode finish (dcffix,dcffixq),",
+    "PublicDescription": "Convert from fixed opcode finish (dcffix,dcffixq),"
+  },
+  {
+    "EventCode": "0xb0bc",
+    "EventName": "PM_DFU_DENBCD",
+    "BriefDescription": "BCD->DPD opcode finish (denbcd, denbcdq),",
+    "PublicDescription": "BCD->DPD opcode finish (denbcd, denbcdq),"
+  },
+  {
+    "EventCode": "0xb0b8",
+    "EventName": "PM_DFU_MC",
+    "BriefDescription": "Finish DFU multicycle,",
+    "PublicDescription": "Finish DFU multicycle,"
+  },
+  {
+    "EventCode": "0x2092",
+    "EventName": "PM_DISP_CLB_HELD_BAL",
+    "BriefDescription": "Dispatch/CLB Hold: Balance,",
+    "PublicDescription": "Dispatch/CLB Hold: Balance,"
+  },
+  {
+    "EventCode": "0x2094",
+    "EventName": "PM_DISP_CLB_HELD_RES",
+    "BriefDescription": "Dispatch/CLB Hold: Resource,",
+    "PublicDescription": "Dispatch/CLB Hold: Resource,"
+  },
+  {
+    "EventCode": "0x20a8",
+    "EventName": "PM_DISP_CLB_HELD_SB",
+    "BriefDescription": "Dispatch/CLB Hold: Scoreboard,",
+    "PublicDescription": "Dispatch/CLB Hold: Scoreboard,"
+  },
+  {
+    "EventCode": "0x2098",
+    "EventName": "PM_DISP_CLB_HELD_SYNC",
+    "BriefDescription": "Dispatch/CLB Hold: Sync type instruction,",
+    "PublicDescription": "Dispatch/CLB Hold: Sync type instruction,"
+  },
+  {
+    "EventCode": "0x2096",
+    "EventName": "PM_DISP_CLB_HELD_TLBIE",
+    "BriefDescription": "Dispatch Hold: Due to TLBIE,",
+    "PublicDescription": "Dispatch Hold: Due to TLBIE,"
+  },
+  {
+    "EventCode": "0x10006",
+    "EventName": "PM_DISP_HELD",
+    "BriefDescription": "Dispatch Held,",
+    "PublicDescription": "Dispatch Held.,"
+  },
+  {
+    "EventCode": "0x20006",
+    "EventName": "PM_DISP_HELD_IQ_FULL",
+    "BriefDescription": "Dispatch held due to Issue q full,",
+    "PublicDescription": "Dispatch held due to Issue q full.,"
+  },
+  {
+    "EventCode": "0x1002a",
+    "EventName": "PM_DISP_HELD_MAP_FULL",
+    "BriefDescription": "Dispatch for this thread was held because the Mappers were full,",
+    "PublicDescription": "Dispatch held due to Mapper full.,"
+  },
+  {
+    "EventCode": "0x30018",
+    "EventName": "PM_DISP_HELD_SRQ_FULL",
+    "BriefDescription": "Dispatch held due SRQ no room,",
+    "PublicDescription": "Dispatch held due SRQ no room.,"
+  },
+  {
+    "EventCode": "0x4003c",
+    "EventName": "PM_DISP_HELD_SYNC_HOLD",
+    "BriefDescription": "Dispatch held due to SYNC hold,",
+    "PublicDescription": "Dispatch held due to SYNC hold.,"
+  },
+  {
+    "EventCode": "0x30a6",
+    "EventName": "PM_DISP_HOLD_GCT_FULL",
+    "BriefDescription": "Dispatch Hold Due to no space in the GCT,",
+    "PublicDescription": "Dispatch Hold Due to no space in the GCT,"
+  },
+  {
+    "EventCode": "0x30008",
+    "EventName": "PM_DISP_WT",
+    "BriefDescription": "Dispatched Starved,",
+    "PublicDescription": "Dispatched Starved (not held, nothing to dispatch).,"
+  },
+  {
+    "EventCode": "0x4e048",
+    "EventName": "PM_DPTEG_FROM_DL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e048",
+    "EventName": "PM_DPTEG_FROM_DL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e04c",
+    "EventName": "PM_DPTEG_FROM_DL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e04c",
+    "EventName": "PM_DPTEG_FROM_DMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e042",
+    "EventName": "PM_DPTEG_FROM_L2",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e046",
+    "EventName": "PM_DPTEG_FROM_L21_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e046",
+    "EventName": "PM_DPTEG_FROM_L21_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e04e",
+    "EventName": "PM_DPTEG_FROM_L2MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e040",
+    "EventName": "PM_DPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e040",
+    "EventName": "PM_DPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e040",
+    "EventName": "PM_DPTEG_FROM_L2_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e040",
+    "EventName": "PM_DPTEG_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e042",
+    "EventName": "PM_DPTEG_FROM_L3",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e044",
+    "EventName": "PM_DPTEG_FROM_L31_ECO_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e044",
+    "EventName": "PM_DPTEG_FROM_L31_ECO_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e044",
+    "EventName": "PM_DPTEG_FROM_L31_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e046",
+    "EventName": "PM_DPTEG_FROM_L31_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e04e",
+    "EventName": "PM_DPTEG_FROM_L3MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e042",
+    "EventName": "PM_DPTEG_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e042",
+    "EventName": "PM_DPTEG_FROM_L3_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e044",
+    "EventName": "PM_DPTEG_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e04c",
+    "EventName": "PM_DPTEG_FROM_LL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e048",
+    "EventName": "PM_DPTEG_FROM_LMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e04c",
+    "EventName": "PM_DPTEG_FROM_MEMORY",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e04a",
+    "EventName": "PM_DPTEG_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e048",
+    "EventName": "PM_DPTEG_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e046",
+    "EventName": "PM_DPTEG_FROM_RL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e04a",
+    "EventName": "PM_DPTEG_FROM_RL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e04a",
+    "EventName": "PM_DPTEG_FROM_RL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e04a",
+    "EventName": "PM_DPTEG_FROM_RMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request.,"
+  },
+  {
+    "EventCode": "0xd094",
+    "EventName": "PM_DSLB_MISS",
+    "BriefDescription": "Data SLB Miss - Total of all segment sizes,",
+    "PublicDescription": "Data SLB Miss - Total of all segment sizesData SLB misses,"
+  },
+  {
+    "EventCode": "0x300fc",
+    "EventName": "PM_DTLB_MISS",
+    "BriefDescription": "Data PTEG reload,",
+    "PublicDescription": "Data PTEG Reloaded (DTLB Miss).,"
+  },
+  {
+    "EventCode": "0x1c058",
+    "EventName": "PM_DTLB_MISS_16G",
+    "BriefDescription": "Data TLB Miss page size 16G,",
+    "PublicDescription": "Data TLB Miss page size 16G.,"
+  },
+  {
+    "EventCode": "0x4c056",
+    "EventName": "PM_DTLB_MISS_16M",
+    "BriefDescription": "Data TLB Miss page size 16M,",
+    "PublicDescription": "Data TLB Miss page size 16M.,"
+  },
+  {
+    "EventCode": "0x2c056",
+    "EventName": "PM_DTLB_MISS_4K",
+    "BriefDescription": "Data TLB Miss page size 4k,",
+    "PublicDescription": "Data TLB Miss page size 4k.,"
+  },
+  {
+    "EventCode": "0x3c056",
+    "EventName": "PM_DTLB_MISS_64K",
+    "BriefDescription": "Data TLB Miss page size 64K,",
+    "PublicDescription": "Data TLB Miss page size 64K.,"
+  },
+  {
+    "EventCode": "0x50a8",
+    "EventName": "PM_EAT_FORCE_MISPRED",
+    "BriefDescription": "XL-form branch was mispredicted due to the predicted target address missing from EAT. The EAT forces a mispredict in this case since there is no predicated target to validate. This is a rare case that may occur when the EAT is full and a branch is issue,",
+    "PublicDescription": "XL-form branch was mispredicted due to the predicted target address missing from EAT. The EAT forces a mispredict in this case since there is no predicated target to validate. This is a rare case that may occur when the EAT is full and a branch is,"
+  },
+  {
+    "EventCode": "0x4084",
+    "EventName": "PM_EAT_FULL_CYC",
+    "BriefDescription": "Cycles No room in EAT,",
+    "PublicDescription": "Cycles No room in EATSet on bank conflict and case where no ibuffers available.,"
+  },
+  {
+    "EventCode": "0x2080",
+    "EventName": "PM_EE_OFF_EXT_INT",
+    "BriefDescription": "Ee off and external interrupt,",
+    "PublicDescription": "Ee off and external interrupt,"
+  },
+  {
+    "EventCode": "0x200f8",
+    "EventName": "PM_EXT_INT",
+    "BriefDescription": "external interrupt,",
+    "PublicDescription": "external interrupt.,"
+  },
+  {
+    "EventCode": "0x20b4",
+    "EventName": "PM_FAV_TBEGIN",
+    "BriefDescription": "Dispatch time Favored tbegin,",
+    "PublicDescription": "Dispatch time Favored tbegin,"
+  },
+  {
+    "EventCode": "0x100f4",
+    "EventName": "PM_FLOP",
+    "BriefDescription": "Floating Point Operation Finished,",
+    "PublicDescription": "Floating Point Operations Finished.,"
+  },
+  {
+    "EventCode": "0xa0ae",
+    "EventName": "PM_FLOP_SUM_SCALAR",
+    "BriefDescription": "flops summary scalar instructions,",
+    "PublicDescription": "flops summary scalar instructions,"
+  },
+  {
+    "EventCode": "0xa0ac",
+    "EventName": "PM_FLOP_SUM_VEC",
+    "BriefDescription": "flops summary vector instructions,",
+    "PublicDescription": "flops summary vector instructions,"
+  },
+  {
+    "EventCode": "0x400f8",
+    "EventName": "PM_FLUSH",
+    "BriefDescription": "Flush (any type),",
+    "PublicDescription": "Flush (any type).,"
+  },
+  {
+    "EventCode": "0x2084",
+    "EventName": "PM_FLUSH_BR_MPRED",
+    "BriefDescription": "Flush caused by branch mispredict,",
+    "PublicDescription": "Flush caused by branch mispredict,"
+  },
+  {
+    "EventCode": "0x30012",
+    "EventName": "PM_FLUSH_COMPLETION",
+    "BriefDescription": "Completion Flush,",
+    "PublicDescription": "Completion Flush.,"
+  },
+  {
+    "EventCode": "0x2082",
+    "EventName": "PM_FLUSH_DISP",
+    "BriefDescription": "Dispatch flush,",
+    "PublicDescription": "Dispatch flush,"
+  },
+  {
+    "EventCode": "0x208c",
+    "EventName": "PM_FLUSH_DISP_SB",
+    "BriefDescription": "Dispatch Flush: Scoreboard,",
+    "PublicDescription": "Dispatch Flush: Scoreboard,"
+  },
+  {
+    "EventCode": "0x2088",
+    "EventName": "PM_FLUSH_DISP_SYNC",
+    "BriefDescription": "Dispatch Flush: Sync,",
+    "PublicDescription": "Dispatch Flush: Sync,"
+  },
+  {
+    "EventCode": "0x208a",
+    "EventName": "PM_FLUSH_DISP_TLBIE",
+    "BriefDescription": "Dispatch Flush: TLBIE,",
+    "PublicDescription": "Dispatch Flush: TLBIE,"
+  },
+  {
+    "EventCode": "0x208e",
+    "EventName": "PM_FLUSH_LSU",
+    "BriefDescription": "Flush initiated by LSU,",
+    "PublicDescription": "Flush initiated by LSU,"
+  },
+  {
+    "EventCode": "0x2086",
+    "EventName": "PM_FLUSH_PARTIAL",
+    "BriefDescription": "Partial flush,",
+    "PublicDescription": "Partial flush,"
+  },
+  {
+    "EventCode": "0xa0b0",
+    "EventName": "PM_FPU0_FCONV",
+    "BriefDescription": "Convert instruction executed,",
+    "PublicDescription": "Convert instruction executed,"
+  },
+  {
+    "EventCode": "0xa0b8",
+    "EventName": "PM_FPU0_FEST",
+    "BriefDescription": "Estimate instruction executed,",
+    "PublicDescription": "Estimate instruction executed,"
+  },
+  {
+    "EventCode": "0xa0b4",
+    "EventName": "PM_FPU0_FRSP",
+    "BriefDescription": "Round to single precision instruction executed,",
+    "PublicDescription": "Round to single precision instruction executed,"
+  },
+  {
+    "EventCode": "0xa0b2",
+    "EventName": "PM_FPU1_FCONV",
+    "BriefDescription": "Convert instruction executed,",
+    "PublicDescription": "Convert instruction executed,"
+  },
+  {
+    "EventCode": "0xa0ba",
+    "EventName": "PM_FPU1_FEST",
+    "BriefDescription": "Estimate instruction executed,",
+    "PublicDescription": "Estimate instruction executed,"
+  },
+  {
+    "EventCode": "0xa0b6",
+    "EventName": "PM_FPU1_FRSP",
+    "BriefDescription": "Round to single precision instruction executed,",
+    "PublicDescription": "Round to single precision instruction executed,"
+  },
+  {
+    "EventCode": "0x3000c",
+    "EventName": "PM_FREQ_DOWN",
+    "BriefDescription": "Power Management: Below Threshold B,",
+    "PublicDescription": "Frequency is being slewed down due to Power Management.,"
+  },
+  {
+    "EventCode": "0x4000c",
+    "EventName": "PM_FREQ_UP",
+    "BriefDescription": "Power Management: Above Threshold A,",
+    "PublicDescription": "Frequency is being slewed up due to Power Management.,"
+  },
+  {
+    "EventCode": "0x50b0",
+    "EventName": "PM_FUSION_TOC_GRP0_1",
+    "BriefDescription": "One pair of instructions fused with TOC in Group0,",
+    "PublicDescription": "One pair of instructions fused with TOC in Group0,"
+  },
+  {
+    "EventCode": "0x50ae",
+    "EventName": "PM_FUSION_TOC_GRP0_2",
+    "BriefDescription": "Two pairs of instructions fused with TOCin Group0,",
+    "PublicDescription": "Two pairs of instructions fused with TOCin Group0,"
+  },
+  {
+    "EventCode": "0x50ac",
+    "EventName": "PM_FUSION_TOC_GRP0_3",
+    "BriefDescription": "Three pairs of instructions fused with TOC in Group0,",
+    "PublicDescription": "Three pairs of instructions fused with TOC in Group0,"
+  },
+  {
+    "EventCode": "0x50b2",
+    "EventName": "PM_FUSION_TOC_GRP1_1",
+    "BriefDescription": "One pair of instructions fused with TOX in Group1,",
+    "PublicDescription": "One pair of instructions fused with TOX in Group1,"
+  },
+  {
+    "EventCode": "0x50b8",
+    "EventName": "PM_FUSION_VSX_GRP0_1",
+    "BriefDescription": "One pair of instructions fused with VSX in Group0,",
+    "PublicDescription": "One pair of instructions fused with VSX in Group0,"
+  },
+  {
+    "EventCode": "0x50b6",
+    "EventName": "PM_FUSION_VSX_GRP0_2",
+    "BriefDescription": "Two pairs of instructions fused with VSX in Group0,",
+    "PublicDescription": "Two pairs of instructions fused with VSX in Group0,"
+  },
+  {
+    "EventCode": "0x50b4",
+    "EventName": "PM_FUSION_VSX_GRP0_3",
+    "BriefDescription": "Three pairs of instructions fused with VSX in Group0,",
+    "PublicDescription": "Three pairs of instructions fused with VSX in Group0,"
+  },
+  {
+    "EventCode": "0x50ba",
+    "EventName": "PM_FUSION_VSX_GRP1_1",
+    "BriefDescription": "One pair of instructions fused with VSX in Group1,",
+    "PublicDescription": "One pair of instructions fused with VSX in Group1,"
+  },
+  {
+    "EventCode": "0x3000e",
+    "EventName": "PM_FXU0_BUSY_FXU1_IDLE",
+    "BriefDescription": "fxu0 busy and fxu1 idle,",
+    "PublicDescription": "fxu0 busy and fxu1 idle.,"
+  },
+  {
+    "EventCode": "0x10004",
+    "EventName": "PM_FXU0_FIN",
+    "BriefDescription": "The fixed point unit Unit 0 finished an instruction. Instructions that finish may not necessary complete.,",
+    "PublicDescription": "FXU0 Finished.,"
+  },
+  {
+    "EventCode": "0x4000e",
+    "EventName": "PM_FXU1_BUSY_FXU0_IDLE",
+    "BriefDescription": "fxu0 idle and fxu1 busy.,",
+    "PublicDescription": "fxu0 idle and fxu1 busy. .,"
+  },
+  {
+    "EventCode": "0x40004",
+    "EventName": "PM_FXU1_FIN",
+    "BriefDescription": "FXU1 Finished,",
+    "PublicDescription": "FXU1 Finished.,"
+  },
+  {
+    "EventCode": "0x2000e",
+    "EventName": "PM_FXU_BUSY",
+    "BriefDescription": "fxu0 busy and fxu1 busy.,",
+    "PublicDescription": "fxu0 busy and fxu1 busy..,"
+  },
+  {
+    "EventCode": "0x1000e",
+    "EventName": "PM_FXU_IDLE",
+    "BriefDescription": "fxu0 idle and fxu1 idle,",
+    "PublicDescription": "fxu0 idle and fxu1 idle.,"
+  },
+  {
+    "EventCode": "0x20008",
+    "EventName": "PM_GCT_EMPTY_CYC",
+    "BriefDescription": "No itags assigned either thread (GCT Empty),",
+    "PublicDescription": "No itags assigned either thread (GCT Empty).,"
+  },
+  {
+    "EventCode": "0x30a4",
+    "EventName": "PM_GCT_MERGE",
+    "BriefDescription": "Group dispatched on a merged GCT empty. GCT entries can be merged only within the same thread,",
+    "PublicDescription": "Group dispatched on a merged GCT empty. GCT entries can be merged only within the same thread,"
+  },
+  {
+    "EventCode": "0x4d01e",
+    "EventName": "PM_GCT_NOSLOT_BR_MPRED",
+    "BriefDescription": "Gct empty for this thread due to branch mispred,",
+    "PublicDescription": "Gct empty for this thread due to branch mispred.,"
+  },
+  {
+    "EventCode": "0x4d01a",
+    "EventName": "PM_GCT_NOSLOT_BR_MPRED_ICMISS",
+    "BriefDescription": "Gct empty for this thread due to Icache Miss and branch mispred,",
+    "PublicDescription": "Gct empty for this thread due to Icache Miss and branch mispred.,"
+  },
+  {
+    "EventCode": "0x100f8",
+    "EventName": "PM_GCT_NOSLOT_CYC",
+    "BriefDescription": "No itags assigned,",
+    "PublicDescription": "Pipeline empty (No itags assigned , no GCT slots used).,"
+  },
+  {
+    "EventCode": "0x2d01e",
+    "EventName": "PM_GCT_NOSLOT_DISP_HELD_ISSQ",
+    "BriefDescription": "Gct empty for this thread due to dispatch hold on this thread due to Issue q full,",
+    "PublicDescription": "Gct empty for this thread due to dispatch hold on this thread due to Issue q full.,"
+  },
+  {
+    "EventCode": "0x4d01c",
+    "EventName": "PM_GCT_NOSLOT_DISP_HELD_MAP",
+    "BriefDescription": "Gct empty for this thread due to dispatch hold on this thread due to Mapper full,",
+    "PublicDescription": "Gct empty for this thread due to dispatch hold on this thread due to Mapper full.,"
+  },
+  {
+    "EventCode": "0x2e010",
+    "EventName": "PM_GCT_NOSLOT_DISP_HELD_OTHER",
+    "BriefDescription": "Gct empty for this thread due to dispatch hold on this thread due to sync,",
+    "PublicDescription": "Gct empty for this thread due to dispatch hold on this thread due to sync.,"
+  },
+  {
+    "EventCode": "0x2d01c",
+    "EventName": "PM_GCT_NOSLOT_DISP_HELD_SRQ",
+    "BriefDescription": "Gct empty for this thread due to dispatch hold on this thread due to SRQ full,",
+    "PublicDescription": "Gct empty for this thread due to dispatch hold on this thread due to SRQ full.,"
+  },
+  {
+    "EventCode": "0x4e010",
+    "EventName": "PM_GCT_NOSLOT_IC_L3MISS",
+    "BriefDescription": "Gct empty for this thread due to icach l3 miss,",
+    "PublicDescription": "Gct empty for this thread due to icach l3 miss.,"
+  },
+  {
+    "EventCode": "0x2d01a",
+    "EventName": "PM_GCT_NOSLOT_IC_MISS",
+    "BriefDescription": "Gct empty for this thread due to Icache Miss,",
+    "PublicDescription": "Gct empty for this thread due to Icache Miss.,"
+  },
+  {
+    "EventCode": "0x20a2",
+    "EventName": "PM_GCT_UTIL_11_14_ENTRIES",
+    "BriefDescription": "GCT Utilization 11-14 entries,",
+    "PublicDescription": "GCT Utilization 11-14 entries,"
+  },
+  {
+    "EventCode": "0x20a4",
+    "EventName": "PM_GCT_UTIL_15_17_ENTRIES",
+    "BriefDescription": "GCT Utilization 15-17 entries,",
+    "PublicDescription": "GCT Utilization 15-17 entries,"
+  },
+  {
+    "EventCode": "0x20a6",
+    "EventName": "PM_GCT_UTIL_18_ENTRIES",
+    "BriefDescription": "GCT Utilization 18+ entries,",
+    "PublicDescription": "GCT Utilization 18+ entries,"
+  },
+  {
+    "EventCode": "0x209c",
+    "EventName": "PM_GCT_UTIL_1_2_ENTRIES",
+    "BriefDescription": "GCT Utilization 1-2 entries,",
+    "PublicDescription": "GCT Utilization 1-2 entries,"
+  },
+  {
+    "EventCode": "0x209e",
+    "EventName": "PM_GCT_UTIL_3_6_ENTRIES",
+    "BriefDescription": "GCT Utilization 3-6 entries,",
+    "PublicDescription": "GCT Utilization 3-6 entries,"
+  },
+  {
+    "EventCode": "0x20a0",
+    "EventName": "PM_GCT_UTIL_7_10_ENTRIES",
+    "BriefDescription": "GCT Utilization 7-10 entries,",
+    "PublicDescription": "GCT Utilization 7-10 entries,"
+  },
+  {
+    "EventCode": "0x1000a",
+    "EventName": "PM_GRP_BR_MPRED_NONSPEC",
+    "BriefDescription": "Group experienced non-speculative branch redirect,",
+    "PublicDescription": "Group experienced Non-speculative br mispredicct.,"
+  },
+  {
+    "EventCode": "0x30004",
+    "EventName": "PM_GRP_CMPL",
+    "BriefDescription": "group completed,",
+    "PublicDescription": "group completed.,"
+  },
+  {
+    "EventCode": "0x3000a",
+    "EventName": "PM_GRP_DISP",
+    "BriefDescription": "group dispatch,",
+    "PublicDescription": "dispatch_success (Group Dispatched).,"
+  },
+  {
+    "EventCode": "0x1000c",
+    "EventName": "PM_GRP_IC_MISS_NONSPEC",
+    "BriefDescription": "Group experienced non-speculative I cache miss,",
+    "PublicDescription": "Group experi enced Non-specu lative I cache miss.,"
+  },
+  {
+    "EventCode": "0x10130",
+    "EventName": "PM_GRP_MRK",
+    "BriefDescription": "Instruction Marked,",
+    "PublicDescription": "Instruction marked in idu.,"
+  },
+  {
+    "EventCode": "0x509c",
+    "EventName": "PM_GRP_NON_FULL_GROUP",
+    "BriefDescription": "GROUPs where we did not have 6 non branch instructions in the group(ST mode), in SMT mode 3 non branches,",
+    "PublicDescription": "GROUPs where we did not have 6 non branch instructions in the group(ST mode), in SMT mode 3 non branches,"
+  },
+  {
+    "EventCode": "0x20050",
+    "EventName": "PM_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x20052",
+    "EventName": "PM_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x10052",
+    "EventName": "PM_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x50a4",
+    "EventName": "PM_GRP_TERM_2ND_BRANCH",
+    "BriefDescription": "There were enough instructions in the Ibuffer, but 2nd branch ends group,",
+    "PublicDescription": "There were enough instructions in the Ibuffer, but 2nd branch ends group,"
+  },
+  {
+    "EventCode": "0x50a6",
+    "EventName": "PM_GRP_TERM_FPU_AFTER_BR",
+    "BriefDescription": "There were enough instructions in the Ibuffer, but FPU OP IN same group after a branch terminates a group, cant do partial flushes,",
+    "PublicDescription": "There were enough instructions in the Ibuffer, but FPU OP IN same group after a branch terminates a group, cant do partial flushes,"
+  },
+  {
+    "EventCode": "0x509e",
+    "EventName": "PM_GRP_TERM_NOINST",
+    "BriefDescription": "Do not fill every slot in the group, Not enough instructions in the Ibuffer. This includes cases where the group started with enough instructions, but some got knocked out by a cache miss or branch redirect (which would also empty the Ibuffer).,",
+    "PublicDescription": "Do not fill every slot in the group, Not enough instructions in the Ibuffer. This includes cases where the group started with enough instructions, but some got knocked out by a cache miss or branch redirect (which would also empty the Ibuffer).,"
+  },
+  {
+    "EventCode": "0x50a0",
+    "EventName": "PM_GRP_TERM_OTHER",
+    "BriefDescription": "There were enough instructions in the Ibuffer, but the group terminated early for some other reason, most likely due to a First or Last.,",
+    "PublicDescription": "There were enough instructions in the Ibuffer, but the group terminated early for some other reason, most likely due to a First or Last.,"
+  },
+  {
+    "EventCode": "0x50a2",
+    "EventName": "PM_GRP_TERM_SLOT_LIMIT",
+    "BriefDescription": "There were enough instructions in the Ibuffer, but 3 src RA/RB/RC , 2 way crack caused a group termination,",
+    "PublicDescription": "There were enough instructions in the Ibuffer, but 3 src RA/RB/RC , 2 way crack caused a group termination,"
+  },
+  {
+    "EventCode": "0x2000a",
+    "EventName": "PM_HV_CYC",
+    "BriefDescription": "Cycles in which msr_hv is high. Note that this event does not take msr_pr into consideration,",
+    "PublicDescription": "cycles in hypervisor mode .,"
+  },
+  {
+    "EventCode": "0x4086",
+    "EventName": "PM_IBUF_FULL_CYC",
+    "BriefDescription": "Cycles No room in ibuff,",
+    "PublicDescription": "Cycles No room in ibufffully qualified tranfer (if5 valid).,"
+  },
+  {
+    "EventCode": "0x10018",
+    "EventName": "PM_IC_DEMAND_CYC",
+    "BriefDescription": "Cycles when a demand ifetch was pending,",
+    "PublicDescription": "Demand ifetch pending.,"
+  },
+  {
+    "EventCode": "0x4098",
+    "EventName": "PM_IC_DEMAND_L2_BHT_REDIRECT",
+    "BriefDescription": "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles),",
+    "PublicDescription": "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles),"
+  },
+  {
+    "EventCode": "0x409a",
+    "EventName": "PM_IC_DEMAND_L2_BR_REDIRECT",
+    "BriefDescription": "L2 I cache demand request due to branch Mispredict ( 15 cycle path),",
+    "PublicDescription": "L2 I cache demand request due to branch Mispredict ( 15 cycle path),"
+  },
+  {
+    "EventCode": "0x4088",
+    "EventName": "PM_IC_DEMAND_REQ",
+    "BriefDescription": "Demand Instruction fetch request,",
+    "PublicDescription": "Demand Instruction fetch request,"
+  },
+  {
+    "EventCode": "0x508a",
+    "EventName": "PM_IC_INVALIDATE",
+    "BriefDescription": "Ic line invalidated,",
+    "PublicDescription": "Ic line invalidated,"
+  },
+  {
+    "EventCode": "0x4092",
+    "EventName": "PM_IC_PREF_CANCEL_HIT",
+    "BriefDescription": "Prefetch Canceled due to icache hit,",
+    "PublicDescription": "Prefetch Canceled due to icache hit,"
+  },
+  {
+    "EventCode": "0x4094",
+    "EventName": "PM_IC_PREF_CANCEL_L2",
+    "BriefDescription": "L2 Squashed request,",
+    "PublicDescription": "L2 Squashed request,"
+  },
+  {
+    "EventCode": "0x4090",
+    "EventName": "PM_IC_PREF_CANCEL_PAGE",
+    "BriefDescription": "Prefetch Canceled due to page boundary,",
+    "PublicDescription": "Prefetch Canceled due to page boundary,"
+  },
+  {
+    "EventCode": "0x408a",
+    "EventName": "PM_IC_PREF_REQ",
+    "BriefDescription": "Instruction prefetch requests,",
+    "PublicDescription": "Instruction prefetch requests,"
+  },
+  {
+    "EventCode": "0x408e",
+    "EventName": "PM_IC_PREF_WRITE",
+    "BriefDescription": "Instruction prefetch written into IL1,",
+    "PublicDescription": "Instruction prefetch written into IL1,"
+  },
+  {
+    "EventCode": "0x4096",
+    "EventName": "PM_IC_RELOAD_PRIVATE",
+    "BriefDescription": "Reloading line was brought in private for a specific thread. Most lines are brought in shared for all eight thrreads. If RA does not match then invalidates and then brings it shared to other thread. In P7 line brought in private , then line was invalidat,",
+    "PublicDescription": "Reloading line was brought in private for a specific thread. Most lines are brought in shared for all eight thrreads. If RA does not match then invalidates and then brings it shared to other thread. In P7 line brought in private , then line was inv,"
+  },
+  {
+    "EventCode": "0x100f6",
+    "EventName": "PM_IERAT_RELOAD",
+    "BriefDescription": "Number of I-ERAT reloads,",
+    "PublicDescription": "IERAT Reloaded (Miss).,"
+  },
+  {
+    "EventCode": "0x4006a",
+    "EventName": "PM_IERAT_RELOAD_16M",
+    "BriefDescription": "IERAT Reloaded (Miss) for a 16M page,",
+    "PublicDescription": "IERAT Reloaded (Miss) for a 16M page.,"
+  },
+  {
+    "EventCode": "0x20064",
+    "EventName": "PM_IERAT_RELOAD_4K",
+    "BriefDescription": "IERAT Miss (Not implemented as DI on POWER6),",
+    "PublicDescription": "IERAT Reloaded (Miss) for a 4k page.,"
+  },
+  {
+    "EventCode": "0x3006a",
+    "EventName": "PM_IERAT_RELOAD_64K",
+    "BriefDescription": "IERAT Reloaded (Miss) for a 64k page,",
+    "PublicDescription": "IERAT Reloaded (Miss) for a 64k page.,"
+  },
+  {
+    "EventCode": "0x3405e",
+    "EventName": "PM_IFETCH_THROTTLE",
+    "BriefDescription": "Cycles in which Instruction fetch throttle was active,",
+    "PublicDescription": "Cycles instruction fecth was throttled in IFU.,"
+  },
+  {
+    "EventCode": "0x5088",
+    "EventName": "PM_IFU_L2_TOUCH",
+    "BriefDescription": "L2 touch to update MRU on a line,",
+    "PublicDescription": "L2 touch to update MRU on a line,"
+  },
+  {
+    "EventCode": "0x514050",
+    "EventName": "PM_INST_ALL_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for instruction fetches and prefetches,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for an instruction fetch,"
+  },
+  {
+    "EventCode": "0x544048",
+    "EventName": "PM_INST_ALL_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534048",
+    "EventName": "PM_INST_ALL_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x53404c",
+    "EventName": "PM_INST_ALL_FROM_DL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x54404c",
+    "EventName": "PM_INST_ALL_FROM_DMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514042",
+    "EventName": "PM_INST_ALL_FROM_L2",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x544046",
+    "EventName": "PM_INST_ALL_FROM_L21_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534046",
+    "EventName": "PM_INST_ALL_FROM_L21_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x51404e",
+    "EventName": "PM_INST_ALL_FROM_L2MISS",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534040",
+    "EventName": "PM_INST_ALL_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x544040",
+    "EventName": "PM_INST_ALL_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524040",
+    "EventName": "PM_INST_ALL_FROM_L2_MEPF",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514040",
+    "EventName": "PM_INST_ALL_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 without conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x544042",
+    "EventName": "PM_INST_ALL_FROM_L3",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x544044",
+    "EventName": "PM_INST_ALL_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534044",
+    "EventName": "PM_INST_ALL_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524044",
+    "EventName": "PM_INST_ALL_FROM_L31_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514046",
+    "EventName": "PM_INST_ALL_FROM_L31_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x54404e",
+    "EventName": "PM_INST_ALL_FROM_L3MISS_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to a instruction fetch,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534042",
+    "EventName": "PM_INST_ALL_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524042",
+    "EventName": "PM_INST_ALL_FROM_L3_MEPF",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514044",
+    "EventName": "PM_INST_ALL_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 without conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x51404c",
+    "EventName": "PM_INST_ALL_FROM_LL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from the local chip's L4 cache due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from the local chip's L4 cache due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524048",
+    "EventName": "PM_INST_ALL_FROM_LMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from the local chip's Memory due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from the local chip's Memory due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x52404c",
+    "EventName": "PM_INST_ALL_FROM_MEMORY",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x54404a",
+    "EventName": "PM_INST_ALL_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514048",
+    "EventName": "PM_INST_ALL_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524046",
+    "EventName": "PM_INST_ALL_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x51404a",
+    "EventName": "PM_INST_ALL_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x52404a",
+    "EventName": "PM_INST_ALL_FROM_RL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x53404a",
+    "EventName": "PM_INST_ALL_FROM_RMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524050",
+    "EventName": "PM_INST_ALL_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was group pump (prediction=correct) for instruction fetches and prefetches,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for an instruction fetch,"
+  },
+  {
+    "EventCode": "0x524052",
+    "EventName": "PM_INST_ALL_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for instruction fetches and prefetches,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x514052",
+    "EventName": "PM_INST_ALL_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for instruction fetches and prefetches,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor an instruction fetch,"
+  },
+  {
+    "EventCode": "0x514054",
+    "EventName": "PM_INST_ALL_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for instruction fetches and prefetches,",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumpsfor an instruction fetch,"
+  },
+  {
+    "EventCode": "0x544052",
+    "EventName": "PM_INST_ALL_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for instruction fetches and prefetches,",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor an instruction fetch,"
+  },
+  {
+    "EventCode": "0x534050",
+    "EventName": "PM_INST_ALL_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump (prediction=correct) for instruction fetches and prefetches,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for an instruction fetch,"
+  },
+  {
+    "EventCode": "0x534052",
+    "EventName": "PM_INST_ALL_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for instruction fetches and prefetches,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x544050",
+    "EventName": "PM_INST_ALL_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for instruction fetches and prefetches,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for an instruction fetch,"
+  },
+  {
+    "EventCode": "0x14050",
+    "EventName": "PM_INST_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for an instruction fetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x2",
+    "EventName": "PM_INST_CMPL",
+    "BriefDescription": "Number of PowerPC Instructions that completed.,",
+    "PublicDescription": "PPC Instructions Finished (completed).,"
+  },
+  {
+    "EventCode": "0x200f2",
+    "EventName": "PM_INST_DISP",
+    "BriefDescription": "PPC Dispatched,",
+    "PublicDescription": "PPC Dispatched.,"
+  },
+  {
+    "EventCode": "0x44048",
+    "EventName": "PM_INST_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34048",
+    "EventName": "PM_INST_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x3404c",
+    "EventName": "PM_INST_FROM_DL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x4404c",
+    "EventName": "PM_INST_FROM_DMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x4080",
+    "EventName": "PM_INST_FROM_L1",
+    "BriefDescription": "Instruction fetches from L1,",
+    "PublicDescription": "Instruction fetches from L1,"
+  },
+  {
+    "EventCode": "0x14042",
+    "EventName": "PM_INST_FROM_L2",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x44046",
+    "EventName": "PM_INST_FROM_L21_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34046",
+    "EventName": "PM_INST_FROM_L21_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x1404e",
+    "EventName": "PM_INST_FROM_L2MISS",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34040",
+    "EventName": "PM_INST_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x44040",
+    "EventName": "PM_INST_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24040",
+    "EventName": "PM_INST_FROM_L2_MEPF",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x14040",
+    "EventName": "PM_INST_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 without conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x44042",
+    "EventName": "PM_INST_FROM_L3",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x44044",
+    "EventName": "PM_INST_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34044",
+    "EventName": "PM_INST_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24044",
+    "EventName": "PM_INST_FROM_L31_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x14046",
+    "EventName": "PM_INST_FROM_L31_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x300fa",
+    "EventName": "PM_INST_FROM_L3MISS",
+    "BriefDescription": "Marked instruction was reloaded from a location beyond the local chiplet,",
+    "PublicDescription": "Inst from L3 miss.,"
+  },
+  {
+    "EventCode": "0x4404e",
+    "EventName": "PM_INST_FROM_L3MISS_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to a instruction fetch,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34042",
+    "EventName": "PM_INST_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24042",
+    "EventName": "PM_INST_FROM_L3_MEPF",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x14044",
+    "EventName": "PM_INST_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 without conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x1404c",
+    "EventName": "PM_INST_FROM_LL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from the local chip's L4 cache due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from the local chip's L4 cache due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24048",
+    "EventName": "PM_INST_FROM_LMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from the local chip's Memory due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from the local chip's Memory due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x2404c",
+    "EventName": "PM_INST_FROM_MEMORY",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x4404a",
+    "EventName": "PM_INST_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x14048",
+    "EventName": "PM_INST_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24046",
+    "EventName": "PM_INST_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x1404a",
+    "EventName": "PM_INST_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x2404a",
+    "EventName": "PM_INST_FROM_RL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x3404a",
+    "EventName": "PM_INST_FROM_RMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24050",
+    "EventName": "PM_INST_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was group pump (prediction=correct) for an instruction fetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x24052",
+    "EventName": "PM_INST_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for an instruction fetch,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x14052",
+    "EventName": "PM_INST_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for an instruction fetch,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x1003a",
+    "EventName": "PM_INST_IMC_MATCH_CMPL",
+    "BriefDescription": "IMC Match Count ( Not architected in P8),",
+    "PublicDescription": "IMC Match Count.,"
+  },
+  {
+    "EventCode": "0x30016",
+    "EventName": "PM_INST_IMC_MATCH_DISP",
+    "BriefDescription": "Matched Instructions Dispatched,",
+    "PublicDescription": "IMC Matches dispatched.,"
+  },
+  {
+    "EventCode": "0x14054",
+    "EventName": "PM_INST_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for an instruction fetch,",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumpsfor an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x44052",
+    "EventName": "PM_INST_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for an instruction fetch,",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x34050",
+    "EventName": "PM_INST_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump (prediction=correct) for an instruction fetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x34052",
+    "EventName": "PM_INST_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for an instruction fetch,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x44050",
+    "EventName": "PM_INST_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for an instruction fetch,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x10014",
+    "EventName": "PM_IOPS_CMPL",
+    "BriefDescription": "Internal Operations completed,",
+    "PublicDescription": "IOPS Completed.,"
+  },
+  {
+    "EventCode": "0x30014",
+    "EventName": "PM_IOPS_DISP",
+    "BriefDescription": "Internal Operations dispatched,",
+    "PublicDescription": "IOPS dispatched.,"
+  },
+  {
+    "EventCode": "0x45048",
+    "EventName": "PM_IPTEG_FROM_DL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35048",
+    "EventName": "PM_IPTEG_FROM_DL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x3504c",
+    "EventName": "PM_IPTEG_FROM_DL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x4504c",
+    "EventName": "PM_IPTEG_FROM_DMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15042",
+    "EventName": "PM_IPTEG_FROM_L2",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x45046",
+    "EventName": "PM_IPTEG_FROM_L21_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35046",
+    "EventName": "PM_IPTEG_FROM_L21_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x1504e",
+    "EventName": "PM_IPTEG_FROM_L2MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35040",
+    "EventName": "PM_IPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x45040",
+    "EventName": "PM_IPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25040",
+    "EventName": "PM_IPTEG_FROM_L2_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15040",
+    "EventName": "PM_IPTEG_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x45042",
+    "EventName": "PM_IPTEG_FROM_L3",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x45044",
+    "EventName": "PM_IPTEG_FROM_L31_ECO_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35044",
+    "EventName": "PM_IPTEG_FROM_L31_ECO_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25044",
+    "EventName": "PM_IPTEG_FROM_L31_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15046",
+    "EventName": "PM_IPTEG_FROM_L31_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x4504e",
+    "EventName": "PM_IPTEG_FROM_L3MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35042",
+    "EventName": "PM_IPTEG_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25042",
+    "EventName": "PM_IPTEG_FROM_L3_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15044",
+    "EventName": "PM_IPTEG_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x1504c",
+    "EventName": "PM_IPTEG_FROM_LL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25048",
+    "EventName": "PM_IPTEG_FROM_LMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x2504c",
+    "EventName": "PM_IPTEG_FROM_MEMORY",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x4504a",
+    "EventName": "PM_IPTEG_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15048",
+    "EventName": "PM_IPTEG_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25046",
+    "EventName": "PM_IPTEG_FROM_RL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x1504a",
+    "EventName": "PM_IPTEG_FROM_RL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x2504a",
+    "EventName": "PM_IPTEG_FROM_RL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x3504a",
+    "EventName": "PM_IPTEG_FROM_RMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x4608e",
+    "EventName": "PM_ISIDE_L2MEMACC",
+    "BriefDescription": "valid when first beat of data comes in for an i-side fetch where data came from mem(or L4),",
+    "PublicDescription": "valid when first beat of data comes in for an i-side fetch where data came from mem(or L4),"
+  },
+  {
+    "EventCode": "0xd096",
+    "EventName": "PM_ISLB_MISS",
+    "BriefDescription": "I SLB Miss.,",
+    "PublicDescription": "I SLB Miss.,"
+  },
+  {
+    "EventCode": "0x30ac",
+    "EventName": "PM_ISU_REF_FX0",
+    "BriefDescription": "FX0 ISU reject,",
+    "PublicDescription": "FX0 ISU reject,"
+  },
+  {
+    "EventCode": "0x30ae",
+    "EventName": "PM_ISU_REF_FX1",
+    "BriefDescription": "FX1 ISU reject,",
+    "PublicDescription": "FX1 ISU reject,"
+  },
+  {
+    "EventCode": "0x38ac",
+    "EventName": "PM_ISU_REF_FXU",
+    "BriefDescription": "FXU ISU reject from either pipe,",
+    "PublicDescription": "ISU,"
+  },
+  {
+    "EventCode": "0x30b0",
+    "EventName": "PM_ISU_REF_LS0",
+    "BriefDescription": "LS0 ISU reject,",
+    "PublicDescription": "LS0 ISU reject,"
+  },
+  {
+    "EventCode": "0x30b2",
+    "EventName": "PM_ISU_REF_LS1",
+    "BriefDescription": "LS1 ISU reject,",
+    "PublicDescription": "LS1 ISU reject,"
+  },
+  {
+    "EventCode": "0x30b4",
+    "EventName": "PM_ISU_REF_LS2",
+    "BriefDescription": "LS2 ISU reject,",
+    "PublicDescription": "LS2 ISU reject,"
+  },
+  {
+    "EventCode": "0x30b6",
+    "EventName": "PM_ISU_REF_LS3",
+    "BriefDescription": "LS3 ISU reject,",
+    "PublicDescription": "LS3 ISU reject,"
+  },
+  {
+    "EventCode": "0x309c",
+    "EventName": "PM_ISU_REJECTS_ALL",
+    "BriefDescription": "All isu rejects could be more than 1 per cycle,",
+    "PublicDescription": "All isu rejects could be more than 1 per cycle,"
+  },
+  {
+    "EventCode": "0x30a2",
+    "EventName": "PM_ISU_REJECT_RES_NA",
+    "BriefDescription": "ISU reject due to resource not available,",
+    "PublicDescription": "ISU reject due to resource not available,"
+  },
+  {
+    "EventCode": "0x309e",
+    "EventName": "PM_ISU_REJECT_SAR_BYPASS",
+    "BriefDescription": "Reject because of SAR bypass,",
+    "PublicDescription": "Reject because of SAR bypass,"
+  },
+  {
+    "EventCode": "0x30a0",
+    "EventName": "PM_ISU_REJECT_SRC_NA",
+    "BriefDescription": "ISU reject due to source not available,",
+    "PublicDescription": "ISU reject due to source not available,"
+  },
+  {
+    "EventCode": "0x30a8",
+    "EventName": "PM_ISU_REJ_VS0",
+    "BriefDescription": "VS0 ISU reject,",
+    "PublicDescription": "VS0 ISU reject,"
+  },
+  {
+    "EventCode": "0x30aa",
+    "EventName": "PM_ISU_REJ_VS1",
+    "BriefDescription": "VS1 ISU reject,",
+    "PublicDescription": "VS1 ISU reject,"
+  },
+  {
+    "EventCode": "0x38a8",
+    "EventName": "PM_ISU_REJ_VSU",
+    "BriefDescription": "VSU ISU reject from either pipe,",
+    "PublicDescription": "ISU,"
+  },
+  {
+    "EventCode": "0x30b8",
+    "EventName": "PM_ISYNC",
+    "BriefDescription": "Isync count per thread,",
+    "PublicDescription": "Isync count per thread,"
+  },
+  {
+    "EventCode": "0x400fc",
+    "EventName": "PM_ITLB_MISS",
+    "BriefDescription": "ITLB Reloaded (always zero on POWER6),",
+    "PublicDescription": "ITLB Reloaded.,"
+  },
+  {
+    "EventCode": "0x200301ea",
+    "EventName": "PM_L1MISS_LAT_EXC_1024",
+    "BriefDescription": "L1 misses that took longer than 1024 cyles to resolve (miss to reload),",
+    "PublicDescription": "Reload latency exceeded 1024 cyc,"
+  },
+  {
+    "EventCode": "0x200401ec",
+    "EventName": "PM_L1MISS_LAT_EXC_2048",
+    "BriefDescription": "L1 misses that took longer than 2048 cyles to resolve (miss to reload),",
+    "PublicDescription": "Reload latency exceeded 2048 cyc,"
+  },
+  {
+    "EventCode": "0x200101e8",
+    "EventName": "PM_L1MISS_LAT_EXC_256",
+    "BriefDescription": "L1 misses that took longer than 256 cyles to resolve (miss to reload),",
+    "PublicDescription": "Reload latency exceeded 256 cyc,"
+  },
+  {
+    "EventCode": "0x200201e6",
+    "EventName": "PM_L1MISS_LAT_EXC_32",
+    "BriefDescription": "L1 misses that took longer than 32 cyles to resolve (miss to reload),",
+    "PublicDescription": "Reload latency exceeded 32 cyc,"
+  },
+  {
+    "EventCode": "0x26086",
+    "EventName": "PM_L1PF_L2MEMACC",
+    "BriefDescription": "valid when first beat of data comes in for an L1pref where data came from mem(or L4),",
+    "PublicDescription": "valid when first beat of data comes in for an L1pref where data came from mem(or L4),"
+  },
+  {
+    "EventCode": "0x1002c",
+    "EventName": "PM_L1_DCACHE_RELOADED_ALL",
+    "BriefDescription": "L1 data cache reloaded for demand or prefetch,",
+    "PublicDescription": "L1 data cache reloaded for demand or prefetch .,"
+  },
+  {
+    "EventCode": "0x300f6",
+    "EventName": "PM_L1_DCACHE_RELOAD_VALID",
+    "BriefDescription": "DL1 reloaded due to Demand Load,",
+    "PublicDescription": "DL1 reloaded due to Demand Load .,"
+  },
+  {
+    "EventCode": "0x408c",
+    "EventName": "PM_L1_DEMAND_WRITE",
+    "BriefDescription": "Instruction Demand sectors wriittent into IL1,",
+    "PublicDescription": "Instruction Demand sectors wriittent into IL1,"
+  },
+  {
+    "EventCode": "0x200fd",
+    "EventName": "PM_L1_ICACHE_MISS",
+    "BriefDescription": "Demand iCache Miss,",
+    "PublicDescription": "Demand iCache Miss.,"
+  },
+  {
+    "EventCode": "0x40012",
+    "EventName": "PM_L1_ICACHE_RELOADED_ALL",
+    "BriefDescription": "Counts all Icache reloads includes demand, prefetchm prefetch turned into demand and demand turned into prefetch,",
+    "PublicDescription": "Counts all Icache reloads includes demand, prefetchm prefetch turned into demand and demand turned into prefetch.,"
+  },
+  {
+    "EventCode": "0x30068",
+    "EventName": "PM_L1_ICACHE_RELOADED_PREF",
+    "BriefDescription": "Counts all Icache prefetch reloads ( includes demand turned into prefetch),",
+    "PublicDescription": "Counts all Icache prefetch reloads ( includes demand turned into prefetch).,"
+  },
+  {
+    "EventCode": "0x27084",
+    "EventName": "PM_L2_CHIP_PUMP",
+    "BriefDescription": "RC requests that were local on chip pump attempts,",
+    "PublicDescription": "RC requests that were local on chip pump attempts,"
+  },
+  {
+    "EventCode": "0x27086",
+    "EventName": "PM_L2_GROUP_PUMP",
+    "BriefDescription": "RC requests that were on Node Pump attempts,",
+    "PublicDescription": "RC requests that were on Node Pump attempts,"
+  },
+  {
+    "EventCode": "0x3708a",
+    "EventName": "PM_L2_RTY_ST",
+    "BriefDescription": "RC retries on PB for any store from core,",
+    "PublicDescription": "RC retries on PB for any store from core,"
+  },
+  {
+    "EventCode": "0x17080",
+    "EventName": "PM_L2_ST",
+    "BriefDescription": "All successful D-side store dispatches for this thread,",
+    "PublicDescription": "All successful D-side store dispatches for this thread,"
+  },
+  {
+    "EventCode": "0x17082",
+    "EventName": "PM_L2_ST_MISS",
+    "BriefDescription": "All successful D-side store dispatches for this thread that were L2 Miss,",
+    "PublicDescription": "All successful D-side store dispatches for this thread that were L2 Miss,"
+  },
+  {
+    "EventCode": "0x1e05e",
+    "EventName": "PM_L2_TM_REQ_ABORT",
+    "BriefDescription": "TM abort,",
+    "PublicDescription": "TM abort.,"
+  },
+  {
+    "EventCode": "0x3e05c",
+    "EventName": "PM_L2_TM_ST_ABORT_SISTER",
+    "BriefDescription": "TM marked store abort,",
+    "PublicDescription": "TM marked store abort.,"
+  },
+  {
+    "EventCode": "0x819082",
+    "EventName": "PM_L3_CI_USAGE",
+    "BriefDescription": "rotating sample of 16 CI or CO actives,",
+    "PublicDescription": "rotating sample of 16 CI or CO actives,"
+  },
+  {
+    "EventCode": "0x83908b",
+    "EventName": "PM_L3_CO0_ALLOC",
+    "BriefDescription": "lifetime, sample of CO machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x83908a",
+    "EventName": "PM_L3_CO0_BUSY",
+    "BriefDescription": "lifetime, sample of CO machine 0 valid,",
+    "PublicDescription": "lifetime, sample of CO machine 0 valid,"
+  },
+  {
+    "EventCode": "0x28086",
+    "EventName": "PM_L3_CO_L31",
+    "BriefDescription": "L3 CO to L3.1 OR of port 0 and 1 ( lossy),",
+    "PublicDescription": "L3 CO to L3.1 OR of port 0 and 1 ( lossy),"
+  },
+  {
+    "EventCode": "0x28084",
+    "EventName": "PM_L3_CO_MEM",
+    "BriefDescription": "L3 CO to memory OR of port 0 and 1 ( lossy),",
+    "PublicDescription": "L3 CO to memory OR of port 0 and 1 ( lossy),"
+  },
+  {
+    "EventCode": "0x18082",
+    "EventName": "PM_L3_CO_MEPF",
+    "BriefDescription": "L3 CO of line in Mep state ( includes casthrough,",
+    "PublicDescription": "L3 CO of line in Mep state ( includes casthrough,"
+  },
+  {
+    "EventCode": "0x1e052",
+    "EventName": "PM_L3_LD_PREF",
+    "BriefDescription": "L3 Load Prefetches,",
+    "PublicDescription": "L3 Load Prefetches.,"
+  },
+  {
+    "EventCode": "0x84908d",
+    "EventName": "PM_L3_PF0_ALLOC",
+    "BriefDescription": "lifetime, sample of PF machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x84908c",
+    "EventName": "PM_L3_PF0_BUSY",
+    "BriefDescription": "lifetime, sample of PF machine 0 valid,",
+    "PublicDescription": "lifetime, sample of PF machine 0 valid,"
+  },
+  {
+    "EventCode": "0x18080",
+    "EventName": "PM_L3_PF_MISS_L3",
+    "BriefDescription": "L3 Prefetch missed in L3,",
+    "PublicDescription": "L3 Prefetch missed in L3,"
+  },
+  {
+    "EventCode": "0x3808a",
+    "EventName": "PM_L3_PF_OFF_CHIP_CACHE",
+    "BriefDescription": "L3 Prefetch from Off chip cache,",
+    "PublicDescription": "L3 Prefetch from Off chip cache,"
+  },
+  {
+    "EventCode": "0x4808e",
+    "EventName": "PM_L3_PF_OFF_CHIP_MEM",
+    "BriefDescription": "L3 Prefetch from Off chip memory,",
+    "PublicDescription": "L3 Prefetch from Off chip memory,"
+  },
+  {
+    "EventCode": "0x38088",
+    "EventName": "PM_L3_PF_ON_CHIP_CACHE",
+    "BriefDescription": "L3 Prefetch from On chip cache,",
+    "PublicDescription": "L3 Prefetch from On chip cache,"
+  },
+  {
+    "EventCode": "0x4808c",
+    "EventName": "PM_L3_PF_ON_CHIP_MEM",
+    "BriefDescription": "L3 Prefetch from On chip memory,",
+    "PublicDescription": "L3 Prefetch from On chip memory,"
+  },
+  {
+    "EventCode": "0x829084",
+    "EventName": "PM_L3_PF_USAGE",
+    "BriefDescription": "rotating sample of 32 PF actives,",
+    "PublicDescription": "rotating sample of 32 PF actives,"
+  },
+  {
+    "EventCode": "0x4e052",
+    "EventName": "PM_L3_PREF_ALL",
+    "BriefDescription": "Total HW L3 prefetches(Load+store),",
+    "PublicDescription": "Total HW L3 prefetches(Load+store).,"
+  },
+  {
+    "EventCode": "0x84908f",
+    "EventName": "PM_L3_RD0_ALLOC",
+    "BriefDescription": "lifetime, sample of RD machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x84908e",
+    "EventName": "PM_L3_RD0_BUSY",
+    "BriefDescription": "lifetime, sample of RD machine 0 valid,",
+    "PublicDescription": "lifetime, sample of RD machine 0 valid,"
+  },
+  {
+    "EventCode": "0x829086",
+    "EventName": "PM_L3_RD_USAGE",
+    "BriefDescription": "rotating sample of 16 RD actives,",
+    "PublicDescription": "rotating sample of 16 RD actives,"
+  },
+  {
+    "EventCode": "0x839089",
+    "EventName": "PM_L3_SN0_ALLOC",
+    "BriefDescription": "lifetime, sample of snooper machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x839088",
+    "EventName": "PM_L3_SN0_BUSY",
+    "BriefDescription": "lifetime, sample of snooper machine 0 valid,",
+    "PublicDescription": "lifetime, sample of snooper machine 0 valid,"
+  },
+  {
+    "EventCode": "0x819080",
+    "EventName": "PM_L3_SN_USAGE",
+    "BriefDescription": "rotating sample of 8 snoop valids,",
+    "PublicDescription": "rotating sample of 8 snoop valids,"
+  },
+  {
+    "EventCode": "0x2e052",
+    "EventName": "PM_L3_ST_PREF",
+    "BriefDescription": "L3 store Prefetches,",
+    "PublicDescription": "L3 store Prefetches.,"
+  },
+  {
+    "EventCode": "0x3e052",
+    "EventName": "PM_L3_SW_PREF",
+    "BriefDescription": "Data stream touchto L3,",
+    "PublicDescription": "Data stream touchto L3.,"
+  },
+  {
+    "EventCode": "0x18081",
+    "EventName": "PM_L3_WI0_ALLOC",
+    "BriefDescription": "lifetime, sample of Write Inject machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x3c058",
+    "EventName": "PM_LARX_FIN",
+    "BriefDescription": "Larx finished,",
+    "PublicDescription": "Larx finished .,"
+  },
+  {
+    "EventCode": "0x1002e",
+    "EventName": "PM_LD_CMPL",
+    "BriefDescription": "count of Loads completed,",
+    "PublicDescription": "count of Loads completed.,"
+  },
+  {
+    "EventCode": "0x10062",
+    "EventName": "PM_LD_L3MISS_PEND_CYC",
+    "BriefDescription": "Cycles L3 miss was pending for this thread,",
+    "PublicDescription": "Cycles L3 miss was pending for this thread.,"
+  },
+  {
+    "EventCode": "0x3e054",
+    "EventName": "PM_LD_MISS_L1",
+    "BriefDescription": "Load Missed L1,",
+    "PublicDescription": "Load Missed L1.,"
+  },
+  {
+    "EventCode": "0x100ee",
+    "EventName": "PM_LD_REF_L1",
+    "BriefDescription": "All L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "Load Ref count combined for all units.,"
+  },
+  {
+    "EventCode": "0xc080",
+    "EventName": "PM_LD_REF_L1_LSU0",
+    "BriefDescription": "LS0 L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "LS0 L1 D cache load references counted at finish, gated by rejectLSU0 L1 D cache load references,"
+  },
+  {
+    "EventCode": "0xc082",
+    "EventName": "PM_LD_REF_L1_LSU1",
+    "BriefDescription": "LS1 L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "LS1 L1 D cache load references counted at finish, gated by rejectLSU1 L1 D cache load references,"
+  },
+  {
+    "EventCode": "0xc094",
+    "EventName": "PM_LD_REF_L1_LSU2",
+    "BriefDescription": "LS2 L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "LS2 L1 D cache load references counted at finish, gated by reject42,"
+  },
+  {
+    "EventCode": "0xc096",
+    "EventName": "PM_LD_REF_L1_LSU3",
+    "BriefDescription": "LS3 L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "LS3 L1 D cache load references counted at finish, gated by reject42,"
+  },
+  {
+    "EventCode": "0x509a",
+    "EventName": "PM_LINK_STACK_INVALID_PTR",
+    "BriefDescription": "A flush were LS ptr is invalid, results in a pop , A lot of interrupts between push and pops,",
+    "PublicDescription": "A flush were LS ptr is invalid, results in a pop , A lot of interrupts between push and pops,"
+  },
+  {
+    "EventCode": "0x5098",
+    "EventName": "PM_LINK_STACK_WRONG_ADD_PRED",
+    "BriefDescription": "Link stack predicts wrong address, because of link stack design limitation.,",
+    "PublicDescription": "Link stack predicts wrong address, because of link stack design limitation.,"
+  },
+  {
+    "EventCode": "0xe080",
+    "EventName": "PM_LS0_ERAT_MISS_PREF",
+    "BriefDescription": "LS0 Erat miss due to prefetch,",
+    "PublicDescription": "LS0 Erat miss due to prefetch42,"
+  },
+  {
+    "EventCode": "0xd0b8",
+    "EventName": "PM_LS0_L1_PREF",
+    "BriefDescription": "LS0 L1 cache data prefetches,",
+    "PublicDescription": "LS0 L1 cache data prefetches42,"
+  },
+  {
+    "EventCode": "0xc098",
+    "EventName": "PM_LS0_L1_SW_PREF",
+    "BriefDescription": "Software L1 Prefetches, including SW Transient Prefetches,",
+    "PublicDescription": "Software L1 Prefetches, including SW Transient Prefetches42,"
+  },
+  {
+    "EventCode": "0xe082",
+    "EventName": "PM_LS1_ERAT_MISS_PREF",
+    "BriefDescription": "LS1 Erat miss due to prefetch,",
+    "PublicDescription": "LS1 Erat miss due to prefetch42,"
+  },
+  {
+    "EventCode": "0xd0ba",
+    "EventName": "PM_LS1_L1_PREF",
+    "BriefDescription": "LS1 L1 cache data prefetches,",
+    "PublicDescription": "LS1 L1 cache data prefetches42,"
+  },
+  {
+    "EventCode": "0xc09a",
+    "EventName": "PM_LS1_L1_SW_PREF",
+    "BriefDescription": "Software L1 Prefetches, including SW Transient Prefetches,",
+    "PublicDescription": "Software L1 Prefetches, including SW Transient Prefetches42,"
+  },
+  {
+    "EventCode": "0xc0b0",
+    "EventName": "PM_LSU0_FLUSH_LRQ",
+    "BriefDescription": "LS0 Flush: LRQ,",
+    "PublicDescription": "LS0 Flush: LRQLSU0 LRQ flushes,"
+  },
+  {
+    "EventCode": "0xc0b8",
+    "EventName": "PM_LSU0_FLUSH_SRQ",
+    "BriefDescription": "LS0 Flush: SRQ,",
+    "PublicDescription": "LS0 Flush: SRQLSU0 SRQ lhs flushes,"
+  },
+  {
+    "EventCode": "0xc0a4",
+    "EventName": "PM_LSU0_FLUSH_ULD",
+    "BriefDescription": "LS0 Flush: Unaligned Load,",
+    "PublicDescription": "LS0 Flush: Unaligned LoadLSU0 unaligned load flushes,"
+  },
+  {
+    "EventCode": "0xc0ac",
+    "EventName": "PM_LSU0_FLUSH_UST",
+    "BriefDescription": "LS0 Flush: Unaligned Store,",
+    "PublicDescription": "LS0 Flush: Unaligned StoreLSU0 unaligned store flushes,"
+  },
+  {
+    "EventCode": "0xf088",
+    "EventName": "PM_LSU0_L1_CAM_CANCEL",
+    "BriefDescription": "ls0 l1 tm cam cancel,",
+    "PublicDescription": "ls0 l1 tm cam cancel42,"
+  },
+  {
+    "EventCode": "0x1e056",
+    "EventName": "PM_LSU0_LARX_FIN",
+    "BriefDescription": "Larx finished in LSU pipe0,",
+    "PublicDescription": ".,"
+  },
+  {
+    "EventCode": "0xd08c",
+    "EventName": "PM_LSU0_LMQ_LHR_MERGE",
+    "BriefDescription": "LS0 Load Merged with another cacheline request,",
+    "PublicDescription": "LS0 Load Merged with another cacheline request42,"
+  },
+  {
+    "EventCode": "0xc08c",
+    "EventName": "PM_LSU0_NCLD",
+    "BriefDescription": "LS0 Non-cachable Loads counted at finish,",
+    "PublicDescription": "LS0 Non-cachable Loads counted at finishLSU0 non-cacheable loads,"
+  },
+  {
+    "EventCode": "0xe090",
+    "EventName": "PM_LSU0_PRIMARY_ERAT_HIT",
+    "BriefDescription": "Primary ERAT hit,",
+    "PublicDescription": "Primary ERAT hit42,"
+  },
+  {
+    "EventCode": "0x1e05a",
+    "EventName": "PM_LSU0_REJECT",
+    "BriefDescription": "LSU0 reject,",
+    "PublicDescription": "LSU0 reject .,"
+  },
+  {
+    "EventCode": "0xc09c",
+    "EventName": "PM_LSU0_SRQ_STFWD",
+    "BriefDescription": "LS0 SRQ forwarded data to a load,",
+    "PublicDescription": "LS0 SRQ forwarded data to a loadLSU0 SRQ store forwarded,"
+  },
+  {
+    "EventCode": "0xf084",
+    "EventName": "PM_LSU0_STORE_REJECT",
+    "BriefDescription": "ls0 store reject,",
+    "PublicDescription": "ls0 store reject42,"
+  },
+  {
+    "EventCode": "0xe0a8",
+    "EventName": "PM_LSU0_TMA_REQ_L2",
+    "BriefDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding,",
+    "PublicDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42,"
+  },
+  {
+    "EventCode": "0xe098",
+    "EventName": "PM_LSU0_TM_L1_HIT",
+    "BriefDescription": "Load tm hit in L1,",
+    "PublicDescription": "Load tm hit in L142,"
+  },
+  {
+    "EventCode": "0xe0a0",
+    "EventName": "PM_LSU0_TM_L1_MISS",
+    "BriefDescription": "Load tm L1 miss,",
+    "PublicDescription": "Load tm L1 miss42,"
+  },
+  {
+    "EventCode": "0xc0b2",
+    "EventName": "PM_LSU1_FLUSH_LRQ",
+    "BriefDescription": "LS1 Flush: LRQ,",
+    "PublicDescription": "LS1 Flush: LRQLSU1 LRQ flushes,"
+  },
+  {
+    "EventCode": "0xc0ba",
+    "EventName": "PM_LSU1_FLUSH_SRQ",
+    "BriefDescription": "LS1 Flush: SRQ,",
+    "PublicDescription": "LS1 Flush: SRQLSU1 SRQ lhs flushes,"
+  },
+  {
+    "EventCode": "0xc0a6",
+    "EventName": "PM_LSU1_FLUSH_ULD",
+    "BriefDescription": "LS 1 Flush: Unaligned Load,",
+    "PublicDescription": "LS 1 Flush: Unaligned LoadLSU1 unaligned load flushes,"
+  },
+  {
+    "EventCode": "0xc0ae",
+    "EventName": "PM_LSU1_FLUSH_UST",
+    "BriefDescription": "LS1 Flush: Unaligned Store,",
+    "PublicDescription": "LS1 Flush: Unaligned StoreLSU1 unaligned store flushes,"
+  },
+  {
+    "EventCode": "0xf08a",
+    "EventName": "PM_LSU1_L1_CAM_CANCEL",
+    "BriefDescription": "ls1 l1 tm cam cancel,",
+    "PublicDescription": "ls1 l1 tm cam cancel42,"
+  },
+  {
+    "EventCode": "0x2e056",
+    "EventName": "PM_LSU1_LARX_FIN",
+    "BriefDescription": "Larx finished in LSU pipe1,",
+    "PublicDescription": "Larx finished in LSU pipe1.,"
+  },
+  {
+    "EventCode": "0xd08e",
+    "EventName": "PM_LSU1_LMQ_LHR_MERGE",
+    "BriefDescription": "LS1 Load Merge with another cacheline request,",
+    "PublicDescription": "LS1 Load Merge with another cacheline request42,"
+  },
+  {
+    "EventCode": "0xc08e",
+    "EventName": "PM_LSU1_NCLD",
+    "BriefDescription": "LS1 Non-cachable Loads counted at finish,",
+    "PublicDescription": "LS1 Non-cachable Loads counted at finishLSU1 non-cacheable loads,"
+  },
+  {
+    "EventCode": "0xe092",
+    "EventName": "PM_LSU1_PRIMARY_ERAT_HIT",
+    "BriefDescription": "Primary ERAT hit,",
+    "PublicDescription": "Primary ERAT hit42,"
+  },
+  {
+    "EventCode": "0x2e05a",
+    "EventName": "PM_LSU1_REJECT",
+    "BriefDescription": "LSU1 reject,",
+    "PublicDescription": "LSU1 reject .,"
+  },
+  {
+    "EventCode": "0xc09e",
+    "EventName": "PM_LSU1_SRQ_STFWD",
+    "BriefDescription": "LS1 SRQ forwarded data to a load,",
+    "PublicDescription": "LS1 SRQ forwarded data to a loadLSU1 SRQ store forwarded,"
+  },
+  {
+    "EventCode": "0xf086",
+    "EventName": "PM_LSU1_STORE_REJECT",
+    "BriefDescription": "ls1 store reject,",
+    "PublicDescription": "ls1 store reject42,"
+  },
+  {
+    "EventCode": "0xe0aa",
+    "EventName": "PM_LSU1_TMA_REQ_L2",
+    "BriefDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding,",
+    "PublicDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42,"
+  },
+  {
+    "EventCode": "0xe09a",
+    "EventName": "PM_LSU1_TM_L1_HIT",
+    "BriefDescription": "Load tm hit in L1,",
+    "PublicDescription": "Load tm hit in L142,"
+  },
+  {
+    "EventCode": "0xe0a2",
+    "EventName": "PM_LSU1_TM_L1_MISS",
+    "BriefDescription": "Load tm L1 miss,",
+    "PublicDescription": "Load tm L1 miss42,"
+  },
+  {
+    "EventCode": "0xc0b4",
+    "EventName": "PM_LSU2_FLUSH_LRQ",
+    "BriefDescription": "LS02Flush: LRQ,",
+    "PublicDescription": "LS02Flush: LRQ42,"
+  },
+  {
+    "EventCode": "0xc0bc",
+    "EventName": "PM_LSU2_FLUSH_SRQ",
+    "BriefDescription": "LS2 Flush: SRQ,",
+    "PublicDescription": "LS2 Flush: SRQ42,"
+  },
+  {
+    "EventCode": "0xc0a8",
+    "EventName": "PM_LSU2_FLUSH_ULD",
+    "BriefDescription": "LS3 Flush: Unaligned Load,",
+    "PublicDescription": "LS3 Flush: Unaligned Load42,"
+  },
+  {
+    "EventCode": "0xf08c",
+    "EventName": "PM_LSU2_L1_CAM_CANCEL",
+    "BriefDescription": "ls2 l1 tm cam cancel,",
+    "PublicDescription": "ls2 l1 tm cam cancel42,"
+  },
+  {
+    "EventCode": "0x3e056",
+    "EventName": "PM_LSU2_LARX_FIN",
+    "BriefDescription": "Larx finished in LSU pipe2,",
+    "PublicDescription": "Larx finished in LSU pipe2.,"
+  },
+  {
+    "EventCode": "0xc084",
+    "EventName": "PM_LSU2_LDF",
+    "BriefDescription": "LS2 Scalar Loads,",
+    "PublicDescription": "LS2 Scalar Loads42,"
+  },
+  {
+    "EventCode": "0xc088",
+    "EventName": "PM_LSU2_LDX",
+    "BriefDescription": "LS0 Vector Loads,",
+    "PublicDescription": "LS0 Vector Loads42,"
+  },
+  {
+    "EventCode": "0xd090",
+    "EventName": "PM_LSU2_LMQ_LHR_MERGE",
+    "BriefDescription": "LS0 Load Merged with another cacheline request,",
+    "PublicDescription": "LS0 Load Merged with another cacheline request42,"
+  },
+  {
+    "EventCode": "0xe094",
+    "EventName": "PM_LSU2_PRIMARY_ERAT_HIT",
+    "BriefDescription": "Primary ERAT hit,",
+    "PublicDescription": "Primary ERAT hit42,"
+  },
+  {
+    "EventCode": "0x3e05a",
+    "EventName": "PM_LSU2_REJECT",
+    "BriefDescription": "LSU2 reject,",
+    "PublicDescription": "LSU2 reject .,"
+  },
+  {
+    "EventCode": "0xc0a0",
+    "EventName": "PM_LSU2_SRQ_STFWD",
+    "BriefDescription": "LS2 SRQ forwarded data to a load,",
+    "PublicDescription": "LS2 SRQ forwarded data to a load42,"
+  },
+  {
+    "EventCode": "0xe0ac",
+    "EventName": "PM_LSU2_TMA_REQ_L2",
+    "BriefDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding,",
+    "PublicDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42,"
+  },
+  {
+    "EventCode": "0xe09c",
+    "EventName": "PM_LSU2_TM_L1_HIT",
+    "BriefDescription": "Load tm hit in L1,",
+    "PublicDescription": "Load tm hit in L142,"
+  },
+  {
+    "EventCode": "0xe0a4",
+    "EventName": "PM_LSU2_TM_L1_MISS",
+    "BriefDescription": "Load tm L1 miss,",
+    "PublicDescription": "Load tm L1 miss42,"
+  },
+  {
+    "EventCode": "0xc0b6",
+    "EventName": "PM_LSU3_FLUSH_LRQ",
+    "BriefDescription": "LS3 Flush: LRQ,",
+    "PublicDescription": "LS3 Flush: LRQ42,"
+  },
+  {
+    "EventCode": "0xc0be",
+    "EventName": "PM_LSU3_FLUSH_SRQ",
+    "BriefDescription": "LS13 Flush: SRQ,",
+    "PublicDescription": "LS13 Flush: SRQ42,"
+  },
+  {
+    "EventCode": "0xc0aa",
+    "EventName": "PM_LSU3_FLUSH_ULD",
+    "BriefDescription": "LS 14Flush: Unaligned Load,",
+    "PublicDescription": "LS 14Flush: Unaligned Load42,"
+  },
+  {
+    "EventCode": "0xf08e",
+    "EventName": "PM_LSU3_L1_CAM_CANCEL",
+    "BriefDescription": "ls3 l1 tm cam cancel,",
+    "PublicDescription": "ls3 l1 tm cam cancel42,"
+  },
+  {
+    "EventCode": "0x4e056",
+    "EventName": "PM_LSU3_LARX_FIN",
+    "BriefDescription": "Larx finished in LSU pipe3,",
+    "PublicDescription": "Larx finished in LSU pipe3.,"
+  },
+  {
+    "EventCode": "0xc086",
+    "EventName": "PM_LSU3_LDF",
+    "BriefDescription": "LS3 Scalar Loads,",
+    "PublicDescription": "LS3 Scalar Loads 42,"
+  },
+  {
+    "EventCode": "0xc08a",
+    "EventName": "PM_LSU3_LDX",
+    "BriefDescription": "LS1 Vector Loads,",
+    "PublicDescription": "LS1 Vector Loads42,"
+  },
+  {
+    "EventCode": "0xd092",
+    "EventName": "PM_LSU3_LMQ_LHR_MERGE",
+    "BriefDescription": "LS1 Load Merge with another cacheline request,",
+    "PublicDescription": "LS1 Load Merge with another cacheline request42,"
+  },
+  {
+    "EventCode": "0xe096",
+    "EventName": "PM_LSU3_PRIMARY_ERAT_HIT",
+    "BriefDescription": "Primary ERAT hit,",
+    "PublicDescription": "Primary ERAT hit42,"
+  },
+  {
+    "EventCode": "0x4e05a",
+    "EventName": "PM_LSU3_REJECT",
+    "BriefDescription": "LSU3 reject,",
+    "PublicDescription": "LSU3 reject .,"
+  },
+  {
+    "EventCode": "0xc0a2",
+    "EventName": "PM_LSU3_SRQ_STFWD",
+    "BriefDescription": "LS3 SRQ forwarded data to a load,",
+    "PublicDescription": "LS3 SRQ forwarded data to a load42,"
+  },
+  {
+    "EventCode": "0xe0ae",
+    "EventName": "PM_LSU3_TMA_REQ_L2",
+    "BriefDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding,",
+    "PublicDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42,"
+  },
+  {
+    "EventCode": "0xe09e",
+    "EventName": "PM_LSU3_TM_L1_HIT",
+    "BriefDescription": "Load tm hit in L1,",
+    "PublicDescription": "Load tm hit in L142,"
+  },
+  {
+    "EventCode": "0xe0a6",
+    "EventName": "PM_LSU3_TM_L1_MISS",
+    "BriefDescription": "Load tm L1 miss,",
+    "PublicDescription": "Load tm L1 miss42,"
+  },
+  {
+    "EventCode": "0x200f6",
+    "EventName": "PM_LSU_DERAT_MISS",
+    "BriefDescription": "DERAT Reloaded due to a DERAT miss,",
+    "PublicDescription": "DERAT Reloaded (Miss).,"
+  },
+  {
+    "EventCode": "0xe880",
+    "EventName": "PM_LSU_ERAT_MISS_PREF",
+    "BriefDescription": "Erat miss due to prefetch, on either pipe,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0x30066",
+    "EventName": "PM_LSU_FIN",
+    "BriefDescription": "LSU Finished an instruction (up to 2 per cycle),",
+    "PublicDescription": "LSU Finished an instruction (up to 2 per cycle).,"
+  },
+  {
+    "EventCode": "0xc8ac",
+    "EventName": "PM_LSU_FLUSH_UST",
+    "BriefDescription": "Unaligned Store Flush on either pipe,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xd0a4",
+    "EventName": "PM_LSU_FOUR_TABLEWALK_CYC",
+    "BriefDescription": "Cycles when four tablewalks pending on this thread,",
+    "PublicDescription": "Cycles when four tablewalks pending on this thread42,"
+  },
+  {
+    "EventCode": "0x10066",
+    "EventName": "PM_LSU_FX_FIN",
+    "BriefDescription": "LSU Finished a FX operation (up to 2 per cycle,",
+    "PublicDescription": "LSU Finished a FX operation (up to 2 per cycle.,"
+  },
+  {
+    "EventCode": "0xd8b8",
+    "EventName": "PM_LSU_L1_PREF",
+    "BriefDescription": "hw initiated , include sw streaming forms as well , include sw streams as a separate event,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xc898",
+    "EventName": "PM_LSU_L1_SW_PREF",
+    "BriefDescription": "Software L1 Prefetches, including SW Transient Prefetches, on both pipes,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xc884",
+    "EventName": "PM_LSU_LDF",
+    "BriefDescription": "FPU loads only on LS2/LS3 ie LU0/LU1,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xc888",
+    "EventName": "PM_LSU_LDX",
+    "BriefDescription": "Vector loads can issue only on LS2/LS3,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xd0a2",
+    "EventName": "PM_LSU_LMQ_FULL_CYC",
+    "BriefDescription": "LMQ full,",
+    "PublicDescription": "LMQ fullCycles LMQ full,,"
+  },
+  {
+    "EventCode": "0xd0a1",
+    "EventName": "PM_LSU_LMQ_S0_ALLOC",
+    "BriefDescription": "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xd0a0",
+    "EventName": "PM_LSU_LMQ_S0_VALID",
+    "BriefDescription": "Slot 0 of LMQ valid,",
+    "PublicDescription": "Slot 0 of LMQ validLMQ slot 0 valid,"
+  },
+  {
+    "EventCode": "0x3001c",
+    "EventName": "PM_LSU_LMQ_SRQ_EMPTY_ALL_CYC",
+    "BriefDescription": "ALL threads lsu empty (lmq and srq empty),",
+    "PublicDescription": "ALL threads lsu empty (lmq and srq empty). Issue HW016541,"
+  },
+  {
+    "EventCode": "0x2003e",
+    "EventName": "PM_LSU_LMQ_SRQ_EMPTY_CYC",
+    "BriefDescription": "LSU empty (lmq and srq empty),",
+    "PublicDescription": "LSU empty (lmq and srq empty).,"
+  },
+  {
+    "EventCode": "0xd09f",
+    "EventName": "PM_LSU_LRQ_S0_ALLOC",
+    "BriefDescription": "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xd09e",
+    "EventName": "PM_LSU_LRQ_S0_VALID",
+    "BriefDescription": "Slot 0 of LRQ valid,",
+    "PublicDescription": "Slot 0 of LRQ validLRQ slot 0 valid,"
+  },
+  {
+    "EventCode": "0xf091",
+    "EventName": "PM_LSU_LRQ_S43_ALLOC",
+    "BriefDescription": "LRQ slot 43 was released,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xf090",
+    "EventName": "PM_LSU_LRQ_S43_VALID",
+    "BriefDescription": "LRQ slot 43 was busy,",
+    "PublicDescription": "LRQ slot 43 was busy42,"
+  },
+  {
+    "EventCode": "0x30162",
+    "EventName": "PM_LSU_MRK_DERAT_MISS",
+    "BriefDescription": "DERAT Reloaded (Miss),",
+    "PublicDescription": "DERAT Reloaded (Miss).,"
+  },
+  {
+    "EventCode": "0xc88c",
+    "EventName": "PM_LSU_NCLD",
+    "BriefDescription": "count at finish so can return only on ls0 or ls1,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xc092",
+    "EventName": "PM_LSU_NCST",
+    "BriefDescription": "Non-cachable Stores sent to nest,",
+    "PublicDescription": "Non-cachable Stores sent to nest42,"
+  },
+  {
+    "EventCode": "0x10064",
+    "EventName": "PM_LSU_REJECT",
+    "BriefDescription": "LSU Reject (up to 4 per cycle),",
+    "PublicDescription": "LSU Reject (up to 4 per cycle).,"
+  },
+  {
+    "EventCode": "0x2e05c",
+    "EventName": "PM_LSU_REJECT_ERAT_MISS",
+    "BriefDescription": "LSU Reject due to ERAT (up to 4 per cycles),",
+    "PublicDescription": "LSU Reject due to ERAT (up to 4 per cycles).,"
+  },
+  {
+    "EventCode": "0x4e05c",
+    "EventName": "PM_LSU_REJECT_LHS",
+    "BriefDescription": "LSU Reject due to LHS (up to 4 per cycle),",
+    "PublicDescription": "LSU Reject due to LHS (up to 4 per cycle).,"
+  },
+  {
+    "EventCode": "0x1e05c",
+    "EventName": "PM_LSU_REJECT_LMQ_FULL",
+    "BriefDescription": "LSU reject due to LMQ full ( 4 per cycle),",
+    "PublicDescription": "LSU reject due to LMQ full ( 4 per cycle).,"
+  },
+  {
+    "EventCode": "0xd082",
+    "EventName": "PM_LSU_SET_MPRED",
+    "BriefDescription": "Line already in cache at reload time,",
+    "PublicDescription": "Line already in cache at reload time42,"
+  },
+  {
+    "EventCode": "0x40008",
+    "EventName": "PM_LSU_SRQ_EMPTY_CYC",
+    "BriefDescription": "ALL threads srq empty,",
+    "PublicDescription": "All threads srq empty.,"
+  },
+  {
+    "EventCode": "0x1001a",
+    "EventName": "PM_LSU_SRQ_FULL_CYC",
+    "BriefDescription": "Storage Queue is full and is blocking dispatch,",
+    "PublicDescription": "SRQ is Full.,"
+  },
+  {
+    "EventCode": "0xd09d",
+    "EventName": "PM_LSU_SRQ_S0_ALLOC",
+    "BriefDescription": "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xd09c",
+    "EventName": "PM_LSU_SRQ_S0_VALID",
+    "BriefDescription": "Slot 0 of SRQ valid,",
+    "PublicDescription": "Slot 0 of SRQ validSRQ slot 0 valid,"
+  },
+  {
+    "EventCode": "0xf093",
+    "EventName": "PM_LSU_SRQ_S39_ALLOC",
+    "BriefDescription": "SRQ slot 39 was released,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xf092",
+    "EventName": "PM_LSU_SRQ_S39_VALID",
+    "BriefDescription": "SRQ slot 39 was busy,",
+    "PublicDescription": "SRQ slot 39 was busy42,"
+  },
+  {
+    "EventCode": "0xd09b",
+    "EventName": "PM_LSU_SRQ_SYNC",
+    "BriefDescription": "A sync in the SRQ ended,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xd09a",
+    "EventName": "PM_LSU_SRQ_SYNC_CYC",
+    "BriefDescription": "A sync is in the SRQ (edge detect to count),",
+    "PublicDescription": "A sync is in the SRQ (edge detect to count)SRQ sync duration,"
+  },
+  {
+    "EventCode": "0xf084",
+    "EventName": "PM_LSU_STORE_REJECT",
+    "BriefDescription": "Store reject on either pipe,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xd0a6",
+    "EventName": "PM_LSU_TWO_TABLEWALK_CYC",
+    "BriefDescription": "Cycles when two tablewalks pending on this thread,",
+    "PublicDescription": "Cycles when two tablewalks pending on this thread42,"
+  },
+  {
+    "EventCode": "0x5094",
+    "EventName": "PM_LWSYNC",
+    "BriefDescription": "threaded version, IC Misses where we got EA dir hit but no sector valids were on. ICBI took line out,",
+    "PublicDescription": "threaded version, IC Misses where we got EA dir hit but no sector valids were on. ICBI took line out,"
+  },
+  {
+    "EventCode": "0x209a",
+    "EventName": "PM_LWSYNC_HELD",
+    "BriefDescription": "LWSYNC held at dispatch,",
+    "PublicDescription": "LWSYNC held at dispatch,"
+  },
+  {
+    "EventCode": "0x4c058",
+    "EventName": "PM_MEM_CO",
+    "BriefDescription": "Memory castouts from this lpar,",
+    "PublicDescription": "Memory castouts from this lpar.,"
+  },
+  {
+    "EventCode": "0x10058",
+    "EventName": "PM_MEM_LOC_THRESH_IFU",
+    "BriefDescription": "Local Memory above threshold for IFU speculation control,",
+    "PublicDescription": "Local Memory above threshold for IFU speculation control.,"
+  },
+  {
+    "EventCode": "0x40056",
+    "EventName": "PM_MEM_LOC_THRESH_LSU_HIGH",
+    "BriefDescription": "Local memory above threshold for LSU medium,",
+    "PublicDescription": "Local memory above threshold for LSU medium.,"
+  },
+  {
+    "EventCode": "0x1c05e",
+    "EventName": "PM_MEM_LOC_THRESH_LSU_MED",
+    "BriefDescription": "Local memory above theshold for data prefetch,",
+    "PublicDescription": "Local memory above theshold for data prefetch.,"
+  },
+  {
+    "EventCode": "0x2c058",
+    "EventName": "PM_MEM_PREF",
+    "BriefDescription": "Memory prefetch for this lpar. Includes L4,",
+    "PublicDescription": "Memory prefetch for this lpar.,"
+  },
+  {
+    "EventCode": "0x10056",
+    "EventName": "PM_MEM_READ",
+    "BriefDescription": "Reads from Memory from this lpar (includes data/inst/xlate/l1prefetch/inst prefetch). Includes L4,",
+    "PublicDescription": "Reads from Memory from this lpar (includes data/inst/xlate/l1prefetch/inst prefetch).,"
+  },
+  {
+    "EventCode": "0x3c05e",
+    "EventName": "PM_MEM_RWITM",
+    "BriefDescription": "Memory rwitm for this lpar,",
+    "PublicDescription": "Memory rwitm for this lpar.,"
+  },
+  {
+    "EventCode": "0x3515e",
+    "EventName": "PM_MRK_BACK_BR_CMPL",
+    "BriefDescription": "Marked branch instruction completed with a target address less than current instruction address,",
+    "PublicDescription": "Marked branch instruction completed with a target address less than current instruction address.,"
+  },
+  {
+    "EventCode": "0x2013a",
+    "EventName": "PM_MRK_BRU_FIN",
+    "BriefDescription": "bru marked instr finish,",
+    "PublicDescription": "bru marked instr finish.,"
+  },
+  {
+    "EventCode": "0x1016e",
+    "EventName": "PM_MRK_BR_CMPL",
+    "BriefDescription": "Branch Instruction completed,",
+    "PublicDescription": "Branch Instruction completed.,"
+  },
+  {
+    "EventCode": "0x301e4",
+    "EventName": "PM_MRK_BR_MPRED_CMPL",
+    "BriefDescription": "Marked Branch Mispredicted,",
+    "PublicDescription": "Marked Branch Mispredicted.,"
+  },
+  {
+    "EventCode": "0x101e2",
+    "EventName": "PM_MRK_BR_TAKEN_CMPL",
+    "BriefDescription": "Marked Branch Taken completed,",
+    "PublicDescription": "Marked Branch Taken.,"
+  },
+  {
+    "EventCode": "0x3013a",
+    "EventName": "PM_MRK_CRU_FIN",
+    "BriefDescription": "IFU non-branch finished,",
+    "PublicDescription": "IFU non-branch marked instruction finished.,"
+  },
+  {
+    "EventCode": "0x4d148",
+    "EventName": "PM_MRK_DATA_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d128",
+    "EventName": "PM_MRK_DATA_FROM_DL2L3_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d148",
+    "EventName": "PM_MRK_DATA_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c128",
+    "EventName": "PM_MRK_DATA_FROM_DL2L3_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d14c",
+    "EventName": "PM_MRK_DATA_FROM_DL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c12c",
+    "EventName": "PM_MRK_DATA_FROM_DL4_CYC",
+    "BriefDescription": "Duration in cycles to reload from another chip's L4 on a different Node or Group (Distant) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from another chip's L4 on a different Node or Group (Distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d14c",
+    "EventName": "PM_MRK_DATA_FROM_DMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d12c",
+    "EventName": "PM_MRK_DATA_FROM_DMEM_CYC",
+    "BriefDescription": "Duration in cycles to reload from another chip's memory on the same Node or Group (Distant) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from another chip's memory on the same Node or Group (Distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d142",
+    "EventName": "PM_MRK_DATA_FROM_L2",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d146",
+    "EventName": "PM_MRK_DATA_FROM_L21_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d126",
+    "EventName": "PM_MRK_DATA_FROM_L21_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another core's L2 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another core's L2 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d146",
+    "EventName": "PM_MRK_DATA_FROM_L21_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c126",
+    "EventName": "PM_MRK_DATA_FROM_L21_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another core's L2 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another core's L2 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d14e",
+    "EventName": "PM_MRK_DATA_FROM_L2MISS",
+    "BriefDescription": "Data cache reload L2 miss,",
+    "PublicDescription": "Data cache reload L2 miss.,"
+  },
+  {
+    "EventCode": "0x4c12e",
+    "EventName": "PM_MRK_DATA_FROM_L2MISS_CYC",
+    "BriefDescription": "Duration in cycles to reload from a localtion other than the local core's L2 due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from a localtion other than the local core's L2 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c122",
+    "EventName": "PM_MRK_DATA_FROM_L2_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d140",
+    "EventName": "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c120",
+    "EventName": "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_LDHITST_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 with load hit store conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 with load hit store conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d140",
+    "EventName": "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d120",
+    "EventName": "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_OTHER_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 with dispatch conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 with dispatch conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d140",
+    "EventName": "PM_MRK_DATA_FROM_L2_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d120",
+    "EventName": "PM_MRK_DATA_FROM_L2_MEPF_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d140",
+    "EventName": "PM_MRK_DATA_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c120",
+    "EventName": "PM_MRK_DATA_FROM_L2_NO_CONFLICT_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 without conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 without conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d142",
+    "EventName": "PM_MRK_DATA_FROM_L3",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d144",
+    "EventName": "PM_MRK_DATA_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d124",
+    "EventName": "PM_MRK_DATA_FROM_L31_ECO_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another core's ECO L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another core's ECO L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d144",
+    "EventName": "PM_MRK_DATA_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c124",
+    "EventName": "PM_MRK_DATA_FROM_L31_ECO_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another core's ECO L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another core's ECO L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d144",
+    "EventName": "PM_MRK_DATA_FROM_L31_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d124",
+    "EventName": "PM_MRK_DATA_FROM_L31_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another core's L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another core's L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d146",
+    "EventName": "PM_MRK_DATA_FROM_L31_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c126",
+    "EventName": "PM_MRK_DATA_FROM_L31_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another core's L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another core's L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x201e4",
+    "EventName": "PM_MRK_DATA_FROM_L3MISS",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d12e",
+    "EventName": "PM_MRK_DATA_FROM_L3MISS_CYC",
+    "BriefDescription": "Duration in cycles to reload from a localtion other than the local core's L3 due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from a localtion other than the local core's L3 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d122",
+    "EventName": "PM_MRK_DATA_FROM_L3_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L3 due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L3 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d142",
+    "EventName": "PM_MRK_DATA_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c122",
+    "EventName": "PM_MRK_DATA_FROM_L3_DISP_CONFLICT_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L3 with dispatch conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L3 with dispatch conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d142",
+    "EventName": "PM_MRK_DATA_FROM_L3_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d122",
+    "EventName": "PM_MRK_DATA_FROM_L3_MEPF_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d144",
+    "EventName": "PM_MRK_DATA_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c124",
+    "EventName": "PM_MRK_DATA_FROM_L3_NO_CONFLICT_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L3 without conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L3 without conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d14c",
+    "EventName": "PM_MRK_DATA_FROM_LL4",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c12c",
+    "EventName": "PM_MRK_DATA_FROM_LL4_CYC",
+    "BriefDescription": "Duration in cycles to reload from the local chip's L4 cache due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from the local chip's L4 cache due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d148",
+    "EventName": "PM_MRK_DATA_FROM_LMEM",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's Memory due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's Memory due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d128",
+    "EventName": "PM_MRK_DATA_FROM_LMEM_CYC",
+    "BriefDescription": "Duration in cycles to reload from the local chip's Memory due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from the local chip's Memory due to a marked load.,"
+  },
+  {
+    "EventCode": "0x201e0",
+    "EventName": "PM_MRK_DATA_FROM_MEM",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d14c",
+    "EventName": "PM_MRK_DATA_FROM_MEMORY",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d12c",
+    "EventName": "PM_MRK_DATA_FROM_MEMORY_CYC",
+    "BriefDescription": "Duration in cycles to reload from a memory location including L4 from local remote or distant due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from a memory location including L4 from local remote or distant due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d14a",
+    "EventName": "PM_MRK_DATA_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d12a",
+    "EventName": "PM_MRK_DATA_FROM_OFF_CHIP_CACHE_CYC",
+    "BriefDescription": "Duration in cycles to reload either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d148",
+    "EventName": "PM_MRK_DATA_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c128",
+    "EventName": "PM_MRK_DATA_FROM_ON_CHIP_CACHE_CYC",
+    "BriefDescription": "Duration in cycles to reload either shared or modified data from another core's L2/L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload either shared or modified data from another core's L2/L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d146",
+    "EventName": "PM_MRK_DATA_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d126",
+    "EventName": "PM_MRK_DATA_FROM_RL2L3_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d14a",
+    "EventName": "PM_MRK_DATA_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c12a",
+    "EventName": "PM_MRK_DATA_FROM_RL2L3_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d14a",
+    "EventName": "PM_MRK_DATA_FROM_RL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d12a",
+    "EventName": "PM_MRK_DATA_FROM_RL4_CYC",
+    "BriefDescription": "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d14a",
+    "EventName": "PM_MRK_DATA_FROM_RMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c12a",
+    "EventName": "PM_MRK_DATA_FROM_RMEM_CYC",
+    "BriefDescription": "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x40118",
+    "EventName": "PM_MRK_DCACHE_RELOAD_INTV",
+    "BriefDescription": "Combined Intervention event,",
+    "PublicDescription": "Combined Intervention event.,"
+  },
+  {
+    "EventCode": "0x301e6",
+    "EventName": "PM_MRK_DERAT_MISS",
+    "BriefDescription": "Erat Miss (TLB Access) All page sizes,",
+    "PublicDescription": "Erat Miss (TLB Access) All page sizes.,"
+  },
+  {
+    "EventCode": "0x4d154",
+    "EventName": "PM_MRK_DERAT_MISS_16G",
+    "BriefDescription": "Marked Data ERAT Miss (Data TLB Access) page size 16G,",
+    "PublicDescription": "Marked Data ERAT Miss (Data TLB Access) page size 16G.,"
+  },
+  {
+    "EventCode": "0x3d154",
+    "EventName": "PM_MRK_DERAT_MISS_16M",
+    "BriefDescription": "Marked Data ERAT Miss (Data TLB Access) page size 16M,",
+    "PublicDescription": "Marked Data ERAT Miss (Data TLB Access) page size 16M.,"
+  },
+  {
+    "EventCode": "0x1d156",
+    "EventName": "PM_MRK_DERAT_MISS_4K",
+    "BriefDescription": "Marked Data ERAT Miss (Data TLB Access) page size 4K,",
+    "PublicDescription": "Marked Data ERAT Miss (Data TLB Access) page size 4K.,"
+  },
+  {
+    "EventCode": "0x2d154",
+    "EventName": "PM_MRK_DERAT_MISS_64K",
+    "BriefDescription": "Marked Data ERAT Miss (Data TLB Access) page size 64K,",
+    "PublicDescription": "Marked Data ERAT Miss (Data TLB Access) page size 64K.,"
+  },
+  {
+    "EventCode": "0x20132",
+    "EventName": "PM_MRK_DFU_FIN",
+    "BriefDescription": "Decimal Unit marked Instruction Finish,",
+    "PublicDescription": "Decimal Unit marked Instruction Finish.,"
+  },
+  {
+    "EventCode": "0x4f148",
+    "EventName": "PM_MRK_DPTEG_FROM_DL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f148",
+    "EventName": "PM_MRK_DPTEG_FROM_DL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f14c",
+    "EventName": "PM_MRK_DPTEG_FROM_DL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f14c",
+    "EventName": "PM_MRK_DPTEG_FROM_DMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f142",
+    "EventName": "PM_MRK_DPTEG_FROM_L2",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f146",
+    "EventName": "PM_MRK_DPTEG_FROM_L21_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f146",
+    "EventName": "PM_MRK_DPTEG_FROM_L21_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f14e",
+    "EventName": "PM_MRK_DPTEG_FROM_L2MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f140",
+    "EventName": "PM_MRK_DPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f140",
+    "EventName": "PM_MRK_DPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f140",
+    "EventName": "PM_MRK_DPTEG_FROM_L2_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f140",
+    "EventName": "PM_MRK_DPTEG_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f142",
+    "EventName": "PM_MRK_DPTEG_FROM_L3",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f144",
+    "EventName": "PM_MRK_DPTEG_FROM_L31_ECO_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f144",
+    "EventName": "PM_MRK_DPTEG_FROM_L31_ECO_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f144",
+    "EventName": "PM_MRK_DPTEG_FROM_L31_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f146",
+    "EventName": "PM_MRK_DPTEG_FROM_L31_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f14e",
+    "EventName": "PM_MRK_DPTEG_FROM_L3MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f142",
+    "EventName": "PM_MRK_DPTEG_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f142",
+    "EventName": "PM_MRK_DPTEG_FROM_L3_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f144",
+    "EventName": "PM_MRK_DPTEG_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f14c",
+    "EventName": "PM_MRK_DPTEG_FROM_LL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f148",
+    "EventName": "PM_MRK_DPTEG_FROM_LMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f14c",
+    "EventName": "PM_MRK_DPTEG_FROM_MEMORY",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f14a",
+    "EventName": "PM_MRK_DPTEG_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f148",
+    "EventName": "PM_MRK_DPTEG_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f146",
+    "EventName": "PM_MRK_DPTEG_FROM_RL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f14a",
+    "EventName": "PM_MRK_DPTEG_FROM_RL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f14a",
+    "EventName": "PM_MRK_DPTEG_FROM_RL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f14a",
+    "EventName": "PM_MRK_DPTEG_FROM_RMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x401e4",
+    "EventName": "PM_MRK_DTLB_MISS",
+    "BriefDescription": "Marked dtlb miss,",
+    "PublicDescription": "Marked dtlb miss.,"
+  },
+  {
+    "EventCode": "0x1d158",
+    "EventName": "PM_MRK_DTLB_MISS_16G",
+    "BriefDescription": "Marked Data TLB Miss page size 16G,",
+    "PublicDescription": "Marked Data TLB Miss page size 16G.,"
+  },
+  {
+    "EventCode": "0x4d156",
+    "EventName": "PM_MRK_DTLB_MISS_16M",
+    "BriefDescription": "Marked Data TLB Miss page size 16M,",
+    "PublicDescription": "Marked Data TLB Miss page size 16M.,"
+  },
+  {
+    "EventCode": "0x2d156",
+    "EventName": "PM_MRK_DTLB_MISS_4K",
+    "BriefDescription": "Marked Data TLB Miss page size 4k,",
+    "PublicDescription": "Marked Data TLB Miss page size 4k.,"
+  },
+  {
+    "EventCode": "0x3d156",
+    "EventName": "PM_MRK_DTLB_MISS_64K",
+    "BriefDescription": "Marked Data TLB Miss page size 64K,",
+    "PublicDescription": "Marked Data TLB Miss page size 64K.,"
+  },
+  {
+    "EventCode": "0x40154",
+    "EventName": "PM_MRK_FAB_RSP_BKILL",
+    "BriefDescription": "Marked store had to do a bkill,",
+    "PublicDescription": "Marked store had to do a bkill.,"
+  },
+  {
+    "EventCode": "0x2f150",
+    "EventName": "PM_MRK_FAB_RSP_BKILL_CYC",
+    "BriefDescription": "cycles L2 RC took for a bkill,",
+    "PublicDescription": "cycles L2 RC took for a bkill.,"
+  },
+  {
+    "EventCode": "0x3015e",
+    "EventName": "PM_MRK_FAB_RSP_CLAIM_RTY",
+    "BriefDescription": "Sampled store did a rwitm and got a rty,",
+    "PublicDescription": "Sampled store did a rwitm and got a rty.,"
+  },
+  {
+    "EventCode": "0x30154",
+    "EventName": "PM_MRK_FAB_RSP_DCLAIM",
+    "BriefDescription": "Marked store had to do a dclaim,",
+    "PublicDescription": "Marked store had to do a dclaim.,"
+  },
+  {
+    "EventCode": "0x2f152",
+    "EventName": "PM_MRK_FAB_RSP_DCLAIM_CYC",
+    "BriefDescription": "cycles L2 RC took for a dclaim,",
+    "PublicDescription": "cycles L2 RC took for a dclaim.,"
+  },
+  {
+    "EventCode": "0x30156",
+    "EventName": "PM_MRK_FAB_RSP_MATCH",
+    "BriefDescription": "ttype and cresp matched as specified in MMCR1,",
+    "PublicDescription": "ttype and cresp matched as specified in MMCR1.,"
+  },
+  {
+    "EventCode": "0x4f152",
+    "EventName": "PM_MRK_FAB_RSP_MATCH_CYC",
+    "BriefDescription": "cresp/ttype match cycles,",
+    "PublicDescription": "cresp/ttype match cycles.,"
+  },
+  {
+    "EventCode": "0x4015e",
+    "EventName": "PM_MRK_FAB_RSP_RD_RTY",
+    "BriefDescription": "Sampled L2 reads retry count,",
+    "PublicDescription": "Sampled L2 reads retry count.,"
+  },
+  {
+    "EventCode": "0x1015e",
+    "EventName": "PM_MRK_FAB_RSP_RD_T_INTV",
+    "BriefDescription": "Sampled Read got a T intervention,",
+    "PublicDescription": "Sampled Read got a T intervention.,"
+  },
+  {
+    "EventCode": "0x4f150",
+    "EventName": "PM_MRK_FAB_RSP_RWITM_CYC",
+    "BriefDescription": "cycles L2 RC took for a rwitm,",
+    "PublicDescription": "cycles L2 RC took for a rwitm.,"
+  },
+  {
+    "EventCode": "0x2015e",
+    "EventName": "PM_MRK_FAB_RSP_RWITM_RTY",
+    "BriefDescription": "Sampled store did a rwitm and got a rty,",
+    "PublicDescription": "Sampled store did a rwitm and got a rty.,"
+  },
+  {
+    "EventCode": "0x2013c",
+    "EventName": "PM_MRK_FILT_MATCH",
+    "BriefDescription": "Marked filter Match,",
+    "PublicDescription": "Marked filter Match.,"
+  },
+  {
+    "EventCode": "0x1013c",
+    "EventName": "PM_MRK_FIN_STALL_CYC",
+    "BriefDescription": "Marked instruction Finish Stall cycles (marked finish after NTC) (use edge detect to count ),",
+    "PublicDescription": "Marked instruction Finish Stall cycles (marked finish after NTC) (use edge detect to count #).,"
+  },
+  {
+    "EventCode": "0x20134",
+    "EventName": "PM_MRK_FXU_FIN",
+    "BriefDescription": "fxu marked instr finish,",
+    "PublicDescription": "fxu marked instr finish.,"
+  },
+  {
+    "EventCode": "0x40130",
+    "EventName": "PM_MRK_GRP_CMPL",
+    "BriefDescription": "marked instruction finished (completed),",
+    "PublicDescription": "marked instruction finished (completed).,"
+  },
+  {
+    "EventCode": "0x4013a",
+    "EventName": "PM_MRK_GRP_IC_MISS",
+    "BriefDescription": "Marked Group experienced I cache miss,",
+    "PublicDescription": "Marked Group experienced I cache miss.,"
+  },
+  {
+    "EventCode": "0x3013c",
+    "EventName": "PM_MRK_GRP_NTC",
+    "BriefDescription": "Marked group ntc cycles.,",
+    "PublicDescription": "Marked group ntc cycles.,"
+  },
+  {
+    "EventCode": "0x401e0",
+    "EventName": "PM_MRK_INST_CMPL",
+    "BriefDescription": "marked instruction completed,",
+    "PublicDescription": "marked instruction completed.,"
+  },
+  {
+    "EventCode": "0x20130",
+    "EventName": "PM_MRK_INST_DECODED",
+    "BriefDescription": "marked instruction decoded,",
+    "PublicDescription": "marked instruction decoded. Name from ISU?,"
+  },
+  {
+    "EventCode": "0x101e0",
+    "EventName": "PM_MRK_INST_DISP",
+    "BriefDescription": "The thread has dispatched a randomly sampled marked instruction,",
+    "PublicDescription": "Marked Instruction dispatched.,"
+  },
+  {
+    "EventCode": "0x30130",
+    "EventName": "PM_MRK_INST_FIN",
+    "BriefDescription": "marked instruction finished,",
+    "PublicDescription": "marked instr finish any unit .,"
+  },
+  {
+    "EventCode": "0x401e6",
+    "EventName": "PM_MRK_INST_FROM_L3MISS",
+    "BriefDescription": "Marked instruction was reloaded from a location beyond the local chiplet,",
+    "PublicDescription": "n/a,"
+  },
+  {
+    "EventCode": "0x10132",
+    "EventName": "PM_MRK_INST_ISSUED",
+    "BriefDescription": "Marked instruction issued,",
+    "PublicDescription": "Marked instruction issued.,"
+  },
+  {
+    "EventCode": "0x40134",
+    "EventName": "PM_MRK_INST_TIMEO",
+    "BriefDescription": "marked Instruction finish timeout (instruction lost),",
+    "PublicDescription": "marked Instruction finish timeout (instruction lost).,"
+  },
+  {
+    "EventCode": "0x101e4",
+    "EventName": "PM_MRK_L1_ICACHE_MISS",
+    "BriefDescription": "sampled Instruction suffered an icache Miss,",
+    "PublicDescription": "Marked L1 Icache Miss.,"
+  },
+  {
+    "EventCode": "0x101ea",
+    "EventName": "PM_MRK_L1_RELOAD_VALID",
+    "BriefDescription": "Marked demand reload,",
+    "PublicDescription": "Marked demand reload.,"
+  },
+  {
+    "EventCode": "0x20114",
+    "EventName": "PM_MRK_L2_RC_DISP",
+    "BriefDescription": "Marked Instruction RC dispatched in L2,",
+    "PublicDescription": "Marked Instruction RC dispatched in L2.,"
+  },
+  {
+    "EventCode": "0x3012a",
+    "EventName": "PM_MRK_L2_RC_DONE",
+    "BriefDescription": "Marked RC done,",
+    "PublicDescription": "Marked RC done.,"
+  },
+  {
+    "EventCode": "0x40116",
+    "EventName": "PM_MRK_LARX_FIN",
+    "BriefDescription": "Larx finished,",
+    "PublicDescription": "Larx finished .,"
+  },
+  {
+    "EventCode": "0x1013f",
+    "EventName": "PM_MRK_LD_MISS_EXPOSED",
+    "BriefDescription": "Marked Load exposed Miss (exposed period ended),",
+    "PublicDescription": "Marked Load exposed Miss (use edge detect to count #),"
+  },
+  {
+    "EventCode": "0x1013e",
+    "EventName": "PM_MRK_LD_MISS_EXPOSED_CYC",
+    "BriefDescription": "Marked Load exposed Miss cycles,",
+    "PublicDescription": "Marked Load exposed Miss (use edge detect to count #).,"
+  },
+  {
+    "EventCode": "0x201e2",
+    "EventName": "PM_MRK_LD_MISS_L1",
+    "BriefDescription": "Marked DL1 Demand Miss counted at exec time,",
+    "PublicDescription": "Marked DL1 Demand Miss counted at exec time.,"
+  },
+  {
+    "EventCode": "0x4013e",
+    "EventName": "PM_MRK_LD_MISS_L1_CYC",
+    "BriefDescription": "Marked ld latency,",
+    "PublicDescription": "Marked ld latency.,"
+  },
+  {
+    "EventCode": "0x40132",
+    "EventName": "PM_MRK_LSU_FIN",
+    "BriefDescription": "lsu marked instr finish,",
+    "PublicDescription": "lsu marked instr finish.,"
+  },
+  {
+    "EventCode": "0xd180",
+    "EventName": "PM_MRK_LSU_FLUSH",
+    "BriefDescription": "Flush: (marked) : All Cases,",
+    "PublicDescription": "Flush: (marked) : All Cases42,"
+  },
+  {
+    "EventCode": "0xd188",
+    "EventName": "PM_MRK_LSU_FLUSH_LRQ",
+    "BriefDescription": "Flush: (marked) LRQ,",
+    "PublicDescription": "Flush: (marked) LRQMarked LRQ flushes,"
+  },
+  {
+    "EventCode": "0xd18a",
+    "EventName": "PM_MRK_LSU_FLUSH_SRQ",
+    "BriefDescription": "Flush: (marked) SRQ,",
+    "PublicDescription": "Flush: (marked) SRQMarked SRQ lhs flushes,"
+  },
+  {
+    "EventCode": "0xd184",
+    "EventName": "PM_MRK_LSU_FLUSH_ULD",
+    "BriefDescription": "Flush: (marked) Unaligned Load,",
+    "PublicDescription": "Flush: (marked) Unaligned LoadMarked unaligned load flushes,"
+  },
+  {
+    "EventCode": "0xd186",
+    "EventName": "PM_MRK_LSU_FLUSH_UST",
+    "BriefDescription": "Flush: (marked) Unaligned Store,",
+    "PublicDescription": "Flush: (marked) Unaligned StoreMarked unaligned store flushes,"
+  },
+  {
+    "EventCode": "0x40164",
+    "EventName": "PM_MRK_LSU_REJECT",
+    "BriefDescription": "LSU marked reject (up to 2 per cycle),",
+    "PublicDescription": "LSU marked reject (up to 2 per cycle).,"
+  },
+  {
+    "EventCode": "0x30164",
+    "EventName": "PM_MRK_LSU_REJECT_ERAT_MISS",
+    "BriefDescription": "LSU marked reject due to ERAT (up to 2 per cycle),",
+    "PublicDescription": "LSU marked reject due to ERAT (up to 2 per cycle).,"
+  },
+  {
+    "EventCode": "0x20112",
+    "EventName": "PM_MRK_NTF_FIN",
+    "BriefDescription": "Marked next to finish instruction finished,",
+    "PublicDescription": "Marked next to finish instruction finished.,"
+  },
+  {
+    "EventCode": "0x1d15e",
+    "EventName": "PM_MRK_RUN_CYC",
+    "BriefDescription": "Marked run cycles,",
+    "PublicDescription": "Marked run cycles.,"
+  },
+  {
+    "EventCode": "0x1d15a",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_EFF",
+    "BriefDescription": "Marked src pref track was effective,",
+    "PublicDescription": "Marked src pref track was effective.,"
+  },
+  {
+    "EventCode": "0x3d15a",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_INEFF",
+    "BriefDescription": "Prefetch tracked was ineffective for marked src,",
+    "PublicDescription": "Prefetch tracked was ineffective for marked src.,"
+  },
+  {
+    "EventCode": "0x4d15c",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_MOD",
+    "BriefDescription": "Prefetch tracked was moderate for marked src,",
+    "PublicDescription": "Prefetch tracked was moderate for marked src.,"
+  },
+  {
+    "EventCode": "0x1d15c",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_MOD_L2",
+    "BriefDescription": "Marked src Prefetch Tracked was moderate (source L2),",
+    "PublicDescription": "Marked src Prefetch Tracked was moderate (source L2).,"
+  },
+  {
+    "EventCode": "0x3d15c",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_MOD_L3",
+    "BriefDescription": "Prefetch tracked was moderate (L3 hit) for marked src,",
+    "PublicDescription": "Prefetch tracked was moderate (L3 hit) for marked src.,"
+  },
+  {
+    "EventCode": "0x3013e",
+    "EventName": "PM_MRK_STALL_CMPLU_CYC",
+    "BriefDescription": "Marked Group completion Stall,",
+    "PublicDescription": "Marked Group Completion Stall cycles (use edge detect to count #).,"
+  },
+  {
+    "EventCode": "0x3e158",
+    "EventName": "PM_MRK_STCX_FAIL",
+    "BriefDescription": "marked stcx failed,",
+    "PublicDescription": "marked stcx failed.,"
+  },
+  {
+    "EventCode": "0x10134",
+    "EventName": "PM_MRK_ST_CMPL",
+    "BriefDescription": "marked store completed and sent to nest,",
+    "PublicDescription": "Marked store completed.,"
+  },
+  {
+    "EventCode": "0x30134",
+    "EventName": "PM_MRK_ST_CMPL_INT",
+    "BriefDescription": "marked store finished with intervention,",
+    "PublicDescription": "marked store complete (data home) with intervention.,"
+  },
+  {
+    "EventCode": "0x3f150",
+    "EventName": "PM_MRK_ST_DRAIN_TO_L2DISP_CYC",
+    "BriefDescription": "cycles to drain st from core to L2,",
+    "PublicDescription": "cycles to drain st from core to L2.,"
+  },
+  {
+    "EventCode": "0x3012c",
+    "EventName": "PM_MRK_ST_FWD",
+    "BriefDescription": "Marked st forwards,",
+    "PublicDescription": "Marked st forwards.,"
+  },
+  {
+    "EventCode": "0x1f150",
+    "EventName": "PM_MRK_ST_L2DISP_TO_CMPL_CYC",
+    "BriefDescription": "cycles from L2 rc disp to l2 rc completion,",
+    "PublicDescription": "cycles from L2 rc disp to l2 rc completion.,"
+  },
+  {
+    "EventCode": "0x20138",
+    "EventName": "PM_MRK_ST_NEST",
+    "BriefDescription": "Marked store sent to nest,",
+    "PublicDescription": "Marked store sent to nest.,"
+  },
+  {
+    "EventCode": "0x1c15a",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_EFF",
+    "BriefDescription": "Marked target pref track was effective,",
+    "PublicDescription": "Marked target pref track was effective.,"
+  },
+  {
+    "EventCode": "0x3c15a",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_INEFF",
+    "BriefDescription": "Prefetch tracked was ineffective for marked target,",
+    "PublicDescription": "Prefetch tracked was ineffective for marked target.,"
+  },
+  {
+    "EventCode": "0x4c15c",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_MOD",
+    "BriefDescription": "Prefetch tracked was moderate for marked target,",
+    "PublicDescription": "Prefetch tracked was moderate for marked target.,"
+  },
+  {
+    "EventCode": "0x1c15c",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_MOD_L2",
+    "BriefDescription": "Marked target Prefetch Tracked was moderate (source L2),",
+    "PublicDescription": "Marked target Prefetch Tracked was moderate (source L2).,"
+  },
+  {
+    "EventCode": "0x3c15c",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_MOD_L3",
+    "BriefDescription": "Prefetch tracked was moderate (L3 hit) for marked target,",
+    "PublicDescription": "Prefetch tracked was moderate (L3 hit) for marked target.,"
+  },
+  {
+    "EventCode": "0x30132",
+    "EventName": "PM_MRK_VSU_FIN",
+    "BriefDescription": "VSU marked instr finish,",
+    "PublicDescription": "vsu (fpu) marked instr finish.,"
+  },
+  {
+    "EventCode": "0x3d15e",
+    "EventName": "PM_MULT_MRK",
+    "BriefDescription": "mult marked instr,",
+    "PublicDescription": "mult marked instr.,"
+  },
+  {
+    "EventCode": "0x20b0",
+    "EventName": "PM_NESTED_TEND",
+    "BriefDescription": "Completion time nested tend,",
+    "PublicDescription": "Completion time nested tend,"
+  },
+  {
+    "EventCode": "0x3006e",
+    "EventName": "PM_NEST_REF_CLK",
+    "BriefDescription": "Multiply by 4 to obtain the number of PB cycles,",
+    "PublicDescription": "Nest reference clocks.,"
+  },
+  {
+    "EventCode": "0x20b6",
+    "EventName": "PM_NON_FAV_TBEGIN",
+    "BriefDescription": "Dispatch time non favored tbegin,",
+    "PublicDescription": "Dispatch time non favored tbegin,"
+  },
+  {
+    "EventCode": "0x2001a",
+    "EventName": "PM_NTCG_ALL_FIN",
+    "BriefDescription": "Cycles after all instructions have finished to group completed,",
+    "PublicDescription": "Ccycles after all instructions have finished to group completed.,"
+  },
+  {
+    "EventCode": "0x20ac",
+    "EventName": "PM_OUTER_TBEGIN",
+    "BriefDescription": "Completion time outer tbegin,",
+    "PublicDescription": "Completion time outer tbegin,"
+  },
+  {
+    "EventCode": "0x20ae",
+    "EventName": "PM_OUTER_TEND",
+    "BriefDescription": "Completion time outer tend,",
+    "PublicDescription": "Completion time outer tend,"
+  },
+  {
+    "EventCode": "0x20010",
+    "EventName": "PM_PMC1_OVERFLOW",
+    "BriefDescription": "Overflow from counter 1,",
+    "PublicDescription": "Overflow from counter 1.,"
+  },
+  {
+    "EventCode": "0x30010",
+    "EventName": "PM_PMC2_OVERFLOW",
+    "BriefDescription": "Overflow from counter 2,",
+    "PublicDescription": "Overflow from counter 2.,"
+  },
+  {
+    "EventCode": "0x30020",
+    "EventName": "PM_PMC2_REWIND",
+    "BriefDescription": "PMC2 Rewind Event (did not match condition),",
+    "PublicDescription": "PMC2 Rewind Event (did not match condition).,"
+  },
+  {
+    "EventCode": "0x10022",
+    "EventName": "PM_PMC2_SAVED",
+    "BriefDescription": "PMC2 Rewind Value saved,",
+    "PublicDescription": "PMC2 Rewind Value saved (matched condition).,"
+  },
+  {
+    "EventCode": "0x40010",
+    "EventName": "PM_PMC3_OVERFLOW",
+    "BriefDescription": "Overflow from counter 3,",
+    "PublicDescription": "Overflow from counter 3.,"
+  },
+  {
+    "EventCode": "0x10010",
+    "EventName": "PM_PMC4_OVERFLOW",
+    "BriefDescription": "Overflow from counter 4,",
+    "PublicDescription": "Overflow from counter 4.,"
+  },
+  {
+    "EventCode": "0x10020",
+    "EventName": "PM_PMC4_REWIND",
+    "BriefDescription": "PMC4 Rewind Event,",
+    "PublicDescription": "PMC4 Rewind Event (did not match condition).,"
+  },
+  {
+    "EventCode": "0x30022",
+    "EventName": "PM_PMC4_SAVED",
+    "BriefDescription": "PMC4 Rewind Value saved (matched condition),",
+    "PublicDescription": "PMC4 Rewind Value saved (matched condition).,"
+  },
+  {
+    "EventCode": "0x10024",
+    "EventName": "PM_PMC5_OVERFLOW",
+    "BriefDescription": "Overflow from counter 5,",
+    "PublicDescription": "Overflow from counter 5.,"
+  },
+  {
+    "EventCode": "0x30024",
+    "EventName": "PM_PMC6_OVERFLOW",
+    "BriefDescription": "Overflow from counter 6,",
+    "PublicDescription": "Overflow from counter 6.,"
+  },
+  {
+    "EventCode": "0x2005a",
+    "EventName": "PM_PREF_TRACKED",
+    "BriefDescription": "Total number of Prefetch Operations that were tracked,",
+    "PublicDescription": "Total number of Prefetch Operations that were tracked.,"
+  },
+  {
+    "EventCode": "0x1005a",
+    "EventName": "PM_PREF_TRACK_EFF",
+    "BriefDescription": "Prefetch Tracked was effective,",
+    "PublicDescription": "Prefetch Tracked was effective.,"
+  },
+  {
+    "EventCode": "0x3005a",
+    "EventName": "PM_PREF_TRACK_INEFF",
+    "BriefDescription": "Prefetch tracked was ineffective,",
+    "PublicDescription": "Prefetch tracked was ineffective.,"
+  },
+  {
+    "EventCode": "0x4005a",
+    "EventName": "PM_PREF_TRACK_MOD",
+    "BriefDescription": "Prefetch tracked was moderate,",
+    "PublicDescription": "Prefetch tracked was moderate.,"
+  },
+  {
+    "EventCode": "0x1005c",
+    "EventName": "PM_PREF_TRACK_MOD_L2",
+    "BriefDescription": "Prefetch Tracked was moderate (source L2),",
+    "PublicDescription": "Prefetch Tracked was moderate (source L2).,"
+  },
+  {
+    "EventCode": "0x3005c",
+    "EventName": "PM_PREF_TRACK_MOD_L3",
+    "BriefDescription": "Prefetch tracked was moderate (L3),",
+    "PublicDescription": "Prefetch tracked was moderate (L3).,"
+  },
+  {
+    "EventCode": "0x40014",
+    "EventName": "PM_PROBE_NOP_DISP",
+    "BriefDescription": "ProbeNops dispatched,",
+    "PublicDescription": "ProbeNops dispatched.,"
+  },
+  {
+    "EventCode": "0xe084",
+    "EventName": "PM_PTE_PREFETCH",
+    "BriefDescription": "PTE prefetches,",
+    "PublicDescription": "PTE prefetches42,"
+  },
+  {
+    "EventCode": "0x10054",
+    "EventName": "PM_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x40052",
+    "EventName": "PM_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x16081",
+    "EventName": "PM_RC0_ALLOC",
+    "BriefDescription": "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x16080",
+    "EventName": "PM_RC0_BUSY",
+    "BriefDescription": "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),"
+  },
+  {
+    "EventCode": "0x200301ea",
+    "EventName": "PM_RC_LIFETIME_EXC_1024",
+    "BriefDescription": "Number of times the RC machine for a sampled instruction was active for more than 1024 cycles,",
+    "PublicDescription": "Reload latency exceeded 1024 cyc,"
+  },
+  {
+    "EventCode": "0x200401ec",
+    "EventName": "PM_RC_LIFETIME_EXC_2048",
+    "BriefDescription": "Number of times the RC machine for a sampled instruction was active for more than 2048 cycles,",
+    "PublicDescription": "Threshold counter exceeded a value of 2048,"
+  },
+  {
+    "EventCode": "0x200101e8",
+    "EventName": "PM_RC_LIFETIME_EXC_256",
+    "BriefDescription": "Number of times the RC machine for a sampled instruction was active for more than 256 cycles,",
+    "PublicDescription": "Threshold counter exceed a count of 256,"
+  },
+  {
+    "EventCode": "0x200201e6",
+    "EventName": "PM_RC_LIFETIME_EXC_32",
+    "BriefDescription": "Number of times the RC machine for a sampled instruction was active for more than 32 cycles,",
+    "PublicDescription": "Reload latency exceeded 32 cyc,"
+  },
+  {
+    "EventCode": "0x36088",
+    "EventName": "PM_RC_USAGE",
+    "BriefDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 RC machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,",
+    "PublicDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 RC machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,"
+  },
+  {
+    "EventCode": "0x20004",
+    "EventName": "PM_REAL_SRQ_FULL",
+    "BriefDescription": "Out of real srq entries,",
+    "PublicDescription": "Out of real srq entries.,"
+  },
+  {
+    "EventCode": "0x600f4",
+    "EventName": "PM_RUN_CYC",
+    "BriefDescription": "Run_cycles,",
+    "PublicDescription": "Run_cycles.,"
+  },
+  {
+    "EventCode": "0x3006c",
+    "EventName": "PM_RUN_CYC_SMT2_MODE",
+    "BriefDescription": "Cycles run latch is set and core is in SMT2 mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT2 mode.,"
+  },
+  {
+    "EventCode": "0x2006a",
+    "EventName": "PM_RUN_CYC_SMT2_SHRD_MODE",
+    "BriefDescription": "cycles this threads run latch is set and the core is in SMT2 shared mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT2-shared mode.,"
+  },
+  {
+    "EventCode": "0x1006a",
+    "EventName": "PM_RUN_CYC_SMT2_SPLIT_MODE",
+    "BriefDescription": "Cycles run latch is set and core is in SMT2-split mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT2-split mode.,"
+  },
+  {
+    "EventCode": "0x2006c",
+    "EventName": "PM_RUN_CYC_SMT4_MODE",
+    "BriefDescription": "cycles this threads run latch is set and the core is in SMT4 mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT4 mode.,"
+  },
+  {
+    "EventCode": "0x4006c",
+    "EventName": "PM_RUN_CYC_SMT8_MODE",
+    "BriefDescription": "Cycles run latch is set and core is in SMT8 mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT8 mode.,"
+  },
+  {
+    "EventCode": "0x1006c",
+    "EventName": "PM_RUN_CYC_ST_MODE",
+    "BriefDescription": "Cycles run latch is set and core is in ST mode,",
+    "PublicDescription": "Cycles run latch is set and core is in ST mode.,"
+  },
+  {
+    "EventCode": "0x500fa",
+    "EventName": "PM_RUN_INST_CMPL",
+    "BriefDescription": "Run_Instructions,",
+    "PublicDescription": "Run_Instructions.,"
+  },
+  {
+    "EventCode": "0x400f4",
+    "EventName": "PM_RUN_PURR",
+    "BriefDescription": "Run_PURR,",
+    "PublicDescription": "Run_PURR.,"
+  },
+  {
+    "EventCode": "0x10008",
+    "EventName": "PM_RUN_SPURR",
+    "BriefDescription": "Run SPURR,",
+    "PublicDescription": "Run SPURR.,"
+  },
+  {
+    "EventCode": "0xf082",
+    "EventName": "PM_SEC_ERAT_HIT",
+    "BriefDescription": "secondary ERAT Hit,",
+    "PublicDescription": "secondary ERAT Hit42,"
+  },
+  {
+    "EventCode": "0x508c",
+    "EventName": "PM_SHL_CREATED",
+    "BriefDescription": "Store-Hit-Load Table Entry Created,",
+    "PublicDescription": "Store-Hit-Load Table Entry Created,"
+  },
+  {
+    "EventCode": "0x508e",
+    "EventName": "PM_SHL_ST_CONVERT",
+    "BriefDescription": "Store-Hit-Load Table Read Hit with entry Enabled,",
+    "PublicDescription": "Store-Hit-Load Table Read Hit with entry Enabled,"
+  },
+  {
+    "EventCode": "0x5090",
+    "EventName": "PM_SHL_ST_DISABLE",
+    "BriefDescription": "Store-Hit-Load Table Read Hit with entry Disabled (entry was disabled due to the entry shown to not prevent the flush),",
+    "PublicDescription": "Store-Hit-Load Table Read Hit with entry Disabled (entry was disabled due to the entry shown to not prevent the flush),"
+  },
+  {
+    "EventCode": "0x26085",
+    "EventName": "PM_SN0_ALLOC",
+    "BriefDescription": "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x26084",
+    "EventName": "PM_SN0_BUSY",
+    "BriefDescription": "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),"
+  },
+  {
+    "EventCode": "0xd0b2",
+    "EventName": "PM_SNOOP_TLBIE",
+    "BriefDescription": "TLBIE snoop,",
+    "PublicDescription": "TLBIE snoopSnoop TLBIE,"
+  },
+  {
+    "EventCode": "0x4608c",
+    "EventName": "PM_SN_USAGE",
+    "BriefDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 SN machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,",
+    "PublicDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 SN machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,"
+  },
+  {
+    "EventCode": "0x10028",
+    "EventName": "PM_STALL_END_GCT_EMPTY",
+    "BriefDescription": "Count ended because GCT went empty,",
+    "PublicDescription": "Count ended because GCT went empty.,"
+  },
+  {
+    "EventCode": "0x1e058",
+    "EventName": "PM_STCX_FAIL",
+    "BriefDescription": "stcx failed,",
+    "PublicDescription": "stcx failed .,"
+  },
+  {
+    "EventCode": "0xc090",
+    "EventName": "PM_STCX_LSU",
+    "BriefDescription": "STCX executed reported at sent to nest,",
+    "PublicDescription": "STCX executed reported at sent to nest42,"
+  },
+  {
+    "EventCode": "0x20016",
+    "EventName": "PM_ST_CMPL",
+    "BriefDescription": "Store completion count,",
+    "PublicDescription": "Store completion count.,"
+  },
+  {
+    "EventCode": "0x200f0",
+    "EventName": "PM_ST_FIN",
+    "BriefDescription": "Store Instructions Finished,",
+    "PublicDescription": "Store Instructions Finished (store sent to nest).,"
+  },
+  {
+    "EventCode": "0x20018",
+    "EventName": "PM_ST_FWD",
+    "BriefDescription": "Store forwards that finished,",
+    "PublicDescription": "Store forwards that finished.,"
+  },
+  {
+    "EventCode": "0x300f0",
+    "EventName": "PM_ST_MISS_L1",
+    "BriefDescription": "Store Missed L1,",
+    "PublicDescription": "Store Missed L1.,"
+  },
+  {
+    "EventCode": "0x0",
+    "EventName": "PM_SUSPENDED",
+    "BriefDescription": "Counter OFF,",
+    "PublicDescription": "Counter OFF.,"
+  },
+  {
+    "EventCode": "0x3090",
+    "EventName": "PM_SWAP_CANCEL",
+    "BriefDescription": "SWAP cancel , rtag not available,",
+    "PublicDescription": "SWAP cancel , rtag not available,"
+  },
+  {
+    "EventCode": "0x3092",
+    "EventName": "PM_SWAP_CANCEL_GPR",
+    "BriefDescription": "SWAP cancel , rtag not available for gpr,",
+    "PublicDescription": "SWAP cancel , rtag not available for gpr,"
+  },
+  {
+    "EventCode": "0x308c",
+    "EventName": "PM_SWAP_COMPLETE",
+    "BriefDescription": "swap cast in completed,",
+    "PublicDescription": "swap cast in completed,"
+  },
+  {
+    "EventCode": "0x308e",
+    "EventName": "PM_SWAP_COMPLETE_GPR",
+    "BriefDescription": "swap cast in completed fpr gpr,",
+    "PublicDescription": "swap cast in completed fpr gpr,"
+  },
+  {
+    "EventCode": "0x15152",
+    "EventName": "PM_SYNC_MRK_BR_LINK",
+    "BriefDescription": "Marked Branch and link branch that can cause a synchronous interrupt,",
+    "PublicDescription": "Marked Branch and link branch that can cause a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x1515c",
+    "EventName": "PM_SYNC_MRK_BR_MPRED",
+    "BriefDescription": "Marked Branch mispredict that can cause a synchronous interrupt,",
+    "PublicDescription": "Marked Branch mispredict that can cause a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x15156",
+    "EventName": "PM_SYNC_MRK_FX_DIVIDE",
+    "BriefDescription": "Marked fixed point divide that can cause a synchronous interrupt,",
+    "PublicDescription": "Marked fixed point divide that can cause a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x15158",
+    "EventName": "PM_SYNC_MRK_L2HIT",
+    "BriefDescription": "Marked L2 Hits that can throw a synchronous interrupt,",
+    "PublicDescription": "Marked L2 Hits that can throw a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x1515a",
+    "EventName": "PM_SYNC_MRK_L2MISS",
+    "BriefDescription": "Marked L2 Miss that can throw a synchronous interrupt,",
+    "PublicDescription": "Marked L2 Miss that can throw a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x15154",
+    "EventName": "PM_SYNC_MRK_L3MISS",
+    "BriefDescription": "Marked L3 misses that can throw a synchronous interrupt,",
+    "PublicDescription": "Marked L3 misses that can throw a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x15150",
+    "EventName": "PM_SYNC_MRK_PROBE_NOP",
+    "BriefDescription": "Marked probeNops which can cause synchronous interrupts,",
+    "PublicDescription": "Marked probeNops which can cause synchronous interrupts.,"
+  },
+  {
+    "EventCode": "0x30050",
+    "EventName": "PM_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x30052",
+    "EventName": "PM_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x40050",
+    "EventName": "PM_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x10026",
+    "EventName": "PM_TABLEWALK_CYC",
+    "BriefDescription": "Cycles when a tablewalk (I or D) is active,",
+    "PublicDescription": "Tablewalk Active.,"
+  },
+  {
+    "EventCode": "0xe086",
+    "EventName": "PM_TABLEWALK_CYC_PREF",
+    "BriefDescription": "tablewalk qualified for pte prefetches,",
+    "PublicDescription": "tablewalk qualified for pte prefetches42,"
+  },
+  {
+    "EventCode": "0x20b2",
+    "EventName": "PM_TABORT_TRECLAIM",
+    "BriefDescription": "Completion time tabortnoncd, tabortcd, treclaim,",
+    "PublicDescription": "Completion time tabortnoncd, tabortcd, treclaim,"
+  },
+  {
+    "EventCode": "0x300f8",
+    "EventName": "PM_TB_BIT_TRANS",
+    "BriefDescription": "timebase event,",
+    "PublicDescription": "timebase event.,"
+  },
+  {
+    "EventCode": "0xe0ba",
+    "EventName": "PM_TEND_PEND_CYC",
+    "BriefDescription": "TEND latency per thread,",
+    "PublicDescription": "TEND latency per thread42,"
+  },
+  {
+    "EventCode": "0x2000c",
+    "EventName": "PM_THRD_ALL_RUN_CYC",
+    "BriefDescription": "All Threads in Run_cycles (was both threads in run_cycles),",
+    "PublicDescription": "All Threads in Run_cycles (was both threads in run_cycles).,"
+  },
+  {
+    "EventCode": "0x300f4",
+    "EventName": "PM_THRD_CONC_RUN_INST",
+    "BriefDescription": "PPC Instructions Finished when both threads in run_cycles,",
+    "PublicDescription": "Concurrent Run Instructions.,"
+  },
+  {
+    "EventCode": "0x10012",
+    "EventName": "PM_THRD_GRP_CMPL_BOTH_CYC",
+    "BriefDescription": "Cycles group completed on both completion slots by any thread,",
+    "PublicDescription": "Two threads finished same cycle (gated by run latch).,"
+  },
+  {
+    "EventCode": "0x40bc",
+    "EventName": "PM_THRD_PRIO_0_1_CYC",
+    "BriefDescription": "Cycles thread running at priority level 0 or 1,",
+    "PublicDescription": "Cycles thread running at priority level 0 or 1,"
+  },
+  {
+    "EventCode": "0x40be",
+    "EventName": "PM_THRD_PRIO_2_3_CYC",
+    "BriefDescription": "Cycles thread running at priority level 2 or 3,",
+    "PublicDescription": "Cycles thread running at priority level 2 or 3,"
+  },
+  {
+    "EventCode": "0x5080",
+    "EventName": "PM_THRD_PRIO_4_5_CYC",
+    "BriefDescription": "Cycles thread running at priority level 4 or 5,",
+    "PublicDescription": "Cycles thread running at priority level 4 or 5,"
+  },
+  {
+    "EventCode": "0x5082",
+    "EventName": "PM_THRD_PRIO_6_7_CYC",
+    "BriefDescription": "Cycles thread running at priority level 6 or 7,",
+    "PublicDescription": "Cycles thread running at priority level 6 or 7,"
+  },
+  {
+    "EventCode": "0x3098",
+    "EventName": "PM_THRD_REBAL_CYC",
+    "BriefDescription": "cycles rebalance was active,",
+    "PublicDescription": "cycles rebalance was active,"
+  },
+  {
+    "EventCode": "0x301ea",
+    "EventName": "PM_THRESH_EXC_1024",
+    "BriefDescription": "Threshold counter exceeded a value of 1024,",
+    "PublicDescription": "Threshold counter exceeded a value of 1024.,"
+  },
+  {
+    "EventCode": "0x401ea",
+    "EventName": "PM_THRESH_EXC_128",
+    "BriefDescription": "Threshold counter exceeded a value of 128,",
+    "PublicDescription": "Threshold counter exceeded a value of 128.,"
+  },
+  {
+    "EventCode": "0x401ec",
+    "EventName": "PM_THRESH_EXC_2048",
+    "BriefDescription": "Threshold counter exceeded a value of 2048,",
+    "PublicDescription": "Threshold counter exceeded a value of 2048.,"
+  },
+  {
+    "EventCode": "0x101e8",
+    "EventName": "PM_THRESH_EXC_256",
+    "BriefDescription": "Threshold counter exceed a count of 256,",
+    "PublicDescription": "Threshold counter exceed a count of 256.,"
+  },
+  {
+    "EventCode": "0x201e6",
+    "EventName": "PM_THRESH_EXC_32",
+    "BriefDescription": "Threshold counter exceeded a value of 32,",
+    "PublicDescription": "Threshold counter exceeded a value of 32.,"
+  },
+  {
+    "EventCode": "0x101e6",
+    "EventName": "PM_THRESH_EXC_4096",
+    "BriefDescription": "Threshold counter exceed a count of 4096,",
+    "PublicDescription": "Threshold counter exceed a count of 4096.,"
+  },
+  {
+    "EventCode": "0x201e8",
+    "EventName": "PM_THRESH_EXC_512",
+    "BriefDescription": "Threshold counter exceeded a value of 512,",
+    "PublicDescription": "Threshold counter exceeded a value of 512.,"
+  },
+  {
+    "EventCode": "0x301e8",
+    "EventName": "PM_THRESH_EXC_64",
+    "BriefDescription": "IFU non-branch finished,",
+    "PublicDescription": "Threshold counter exceeded a value of 64.,"
+  },
+  {
+    "EventCode": "0x101ec",
+    "EventName": "PM_THRESH_MET",
+    "BriefDescription": "threshold exceeded,",
+    "PublicDescription": "threshold exceeded.,"
+  },
+  {
+    "EventCode": "0x4016e",
+    "EventName": "PM_THRESH_NOT_MET",
+    "BriefDescription": "Threshold counter did not meet threshold,",
+    "PublicDescription": "Threshold counter did not meet threshold.,"
+  },
+  {
+    "EventCode": "0x30058",
+    "EventName": "PM_TLBIE_FIN",
+    "BriefDescription": "tlbie finished,",
+    "PublicDescription": "tlbie finished.,"
+  },
+  {
+    "EventCode": "0x20066",
+    "EventName": "PM_TLB_MISS",
+    "BriefDescription": "TLB Miss (I + D),",
+    "PublicDescription": "TLB Miss (I + D).,"
+  },
+  {
+    "EventCode": "0x20b8",
+    "EventName": "PM_TM_BEGIN_ALL",
+    "BriefDescription": "Tm any tbegin,",
+    "PublicDescription": "Tm any tbegin,"
+  },
+  {
+    "EventCode": "0x20ba",
+    "EventName": "PM_TM_END_ALL",
+    "BriefDescription": "Tm any tend,",
+    "PublicDescription": "Tm any tend,"
+  },
+  {
+    "EventCode": "0x3086",
+    "EventName": "PM_TM_FAIL_CONF_NON_TM",
+    "BriefDescription": "TEXAS fail reason @ completion,",
+    "PublicDescription": "TEXAS fail reason @ completion,"
+  },
+  {
+    "EventCode": "0x3088",
+    "EventName": "PM_TM_FAIL_CON_TM",
+    "BriefDescription": "TEXAS fail reason @ completion,",
+    "PublicDescription": "TEXAS fail reason @ completion,"
+  },
+  {
+    "EventCode": "0xe0b2",
+    "EventName": "PM_TM_FAIL_DISALLOW",
+    "BriefDescription": "TM fail disallow,",
+    "PublicDescription": "TM fail disallow42,"
+  },
+  {
+    "EventCode": "0x3084",
+    "EventName": "PM_TM_FAIL_FOOTPRINT_OVERFLOW",
+    "BriefDescription": "TEXAS fail reason @ completion,",
+    "PublicDescription": "TEXAS fail reason @ completion,"
+  },
+  {
+    "EventCode": "0xe0b8",
+    "EventName": "PM_TM_FAIL_NON_TX_CONFLICT",
+    "BriefDescription": "Non transactional conflict from LSU whtver gets repoted to texas,",
+    "PublicDescription": "Non transactional conflict from LSU whtver gets repoted to texas42,"
+  },
+  {
+    "EventCode": "0x308a",
+    "EventName": "PM_TM_FAIL_SELF",
+    "BriefDescription": "TEXAS fail reason @ completion,",
+    "PublicDescription": "TEXAS fail reason @ completion,"
+  },
+  {
+    "EventCode": "0xe0b4",
+    "EventName": "PM_TM_FAIL_TLBIE",
+    "BriefDescription": "TLBIE hit bloom filter,",
+    "PublicDescription": "TLBIE hit bloom filter42,"
+  },
+  {
+    "EventCode": "0xe0b6",
+    "EventName": "PM_TM_FAIL_TX_CONFLICT",
+    "BriefDescription": "Transactional conflict from LSU, whatever gets reported to texas,",
+    "PublicDescription": "Transactional conflict from LSU, whatever gets reported to texas 42,"
+  },
+  {
+    "EventCode": "0x20bc",
+    "EventName": "PM_TM_TBEGIN",
+    "BriefDescription": "Tm nested tbegin,",
+    "PublicDescription": "Tm nested tbegin,"
+  },
+  {
+    "EventCode": "0x10060",
+    "EventName": "PM_TM_TRANS_RUN_CYC",
+    "BriefDescription": "run cycles in transactional state,",
+    "PublicDescription": "run cycles in transactional state.,"
+  },
+  {
+    "EventCode": "0x30060",
+    "EventName": "PM_TM_TRANS_RUN_INST",
+    "BriefDescription": "Instructions completed in transactional state,",
+    "PublicDescription": "Instructions completed in transactional state.,"
+  },
+  {
+    "EventCode": "0x3080",
+    "EventName": "PM_TM_TRESUME",
+    "BriefDescription": "Tm resume,",
+    "PublicDescription": "Tm resume,"
+  },
+  {
+    "EventCode": "0x20be",
+    "EventName": "PM_TM_TSUSPEND",
+    "BriefDescription": "Tm suspend,",
+    "PublicDescription": "Tm suspend,"
+  },
+  {
+    "EventCode": "0x2e012",
+    "EventName": "PM_TM_TX_PASS_RUN_CYC",
+    "BriefDescription": "cycles spent in successful transactions,",
+    "PublicDescription": "run cycles spent in successful transactions.,"
+  },
+  {
+    "EventCode": "0x4e014",
+    "EventName": "PM_TM_TX_PASS_RUN_INST",
+    "BriefDescription": "run instructions spent in successful transactions.,",
+    "PublicDescription": "run instructions spent in successful transactions.,"
+  },
+  {
+    "EventCode": "0xe08c",
+    "EventName": "PM_UP_PREF_L3",
+    "BriefDescription": "Micropartition prefetch,",
+    "PublicDescription": "Micropartition prefetch42,"
+  },
+  {
+    "EventCode": "0xe08e",
+    "EventName": "PM_UP_PREF_POINTER",
+    "BriefDescription": "Micrpartition pointer prefetches,",
+    "PublicDescription": "Micrpartition pointer prefetches42,"
+  },
+  {
+    "EventCode": "0xa0a4",
+    "EventName": "PM_VSU0_16FLOP",
+    "BriefDescription": "Sixteen flops operation (SP vector versions of fdiv,fsqrt),",
+    "PublicDescription": "Sixteen flops operation (SP vector versions of fdiv,fsqrt),"
+  },
+  {
+    "EventCode": "0xa080",
+    "EventName": "PM_VSU0_1FLOP",
+    "BriefDescription": "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished,",
+    "PublicDescription": "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finishedDecode into 1,2,4 FLOP according to instr IOP, multiplied by #vector elements according to route( eg x1, x2, x4) Only if instr sends finish to ISU,"
+  },
+  {
+    "EventCode": "0xa098",
+    "EventName": "PM_VSU0_2FLOP",
+    "BriefDescription": "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions),",
+    "PublicDescription": "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions),"
+  },
+  {
+    "EventCode": "0xa09c",
+    "EventName": "PM_VSU0_4FLOP",
+    "BriefDescription": "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions),",
+    "PublicDescription": "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions),"
+  },
+  {
+    "EventCode": "0xa0a0",
+    "EventName": "PM_VSU0_8FLOP",
+    "BriefDescription": "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub),",
+    "PublicDescription": "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub),"
+  },
+  {
+    "EventCode": "0xb0a4",
+    "EventName": "PM_VSU0_COMPLEX_ISSUED",
+    "BriefDescription": "Complex VMX instruction issued,",
+    "PublicDescription": "Complex VMX instruction issued,"
+  },
+  {
+    "EventCode": "0xb0b4",
+    "EventName": "PM_VSU0_CY_ISSUED",
+    "BriefDescription": "Cryptographic instruction RFC02196 Issued,",
+    "PublicDescription": "Cryptographic instruction RFC02196 Issued,"
+  },
+  {
+    "EventCode": "0xb0a8",
+    "EventName": "PM_VSU0_DD_ISSUED",
+    "BriefDescription": "64BIT Decimal Issued,",
+    "PublicDescription": "64BIT Decimal Issued,"
+  },
+  {
+    "EventCode": "0xa08c",
+    "EventName": "PM_VSU0_DP_2FLOP",
+    "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg,",
+    "PublicDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg,"
+  },
+  {
+    "EventCode": "0xa090",
+    "EventName": "PM_VSU0_DP_FMA",
+    "BriefDescription": "DP vector version of fmadd,fnmadd,fmsub,fnmsub,",
+    "PublicDescription": "DP vector version of fmadd,fnmadd,fmsub,fnmsub,"
+  },
+  {
+    "EventCode": "0xa094",
+    "EventName": "PM_VSU0_DP_FSQRT_FDIV",
+    "BriefDescription": "DP vector versions of fdiv,fsqrt,",
+    "PublicDescription": "DP vector versions of fdiv,fsqrt,"
+  },
+  {
+    "EventCode": "0xb0ac",
+    "EventName": "PM_VSU0_DQ_ISSUED",
+    "BriefDescription": "128BIT Decimal Issued,",
+    "PublicDescription": "128BIT Decimal Issued,"
+  },
+  {
+    "EventCode": "0xb0b0",
+    "EventName": "PM_VSU0_EX_ISSUED",
+    "BriefDescription": "Direct move 32/64b VRFtoGPR RFC02206 Issued,",
+    "PublicDescription": "Direct move 32/64b VRFtoGPR RFC02206 Issued,"
+  },
+  {
+    "EventCode": "0xa0bc",
+    "EventName": "PM_VSU0_FIN",
+    "BriefDescription": "VSU0 Finished an instruction,",
+    "PublicDescription": "VSU0 Finished an instruction,"
+  },
+  {
+    "EventCode": "0xa084",
+    "EventName": "PM_VSU0_FMA",
+    "BriefDescription": "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!,",
+    "PublicDescription": "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!,"
+  },
+  {
+    "EventCode": "0xb098",
+    "EventName": "PM_VSU0_FPSCR",
+    "BriefDescription": "Move to/from FPSCR type instruction issued on Pipe 0,",
+    "PublicDescription": "Move to/from FPSCR type instruction issued on Pipe 0,"
+  },
+  {
+    "EventCode": "0xa088",
+    "EventName": "PM_VSU0_FSQRT_FDIV",
+    "BriefDescription": "four flops operation (fdiv,fsqrt) Scalar Instructions only!,",
+    "PublicDescription": "four flops operation (fdiv,fsqrt) Scalar Instructions only!,"
+  },
+  {
+    "EventCode": "0xb090",
+    "EventName": "PM_VSU0_PERMUTE_ISSUED",
+    "BriefDescription": "Permute VMX Instruction Issued,",
+    "PublicDescription": "Permute VMX Instruction Issued,"
+  },
+  {
+    "EventCode": "0xb088",
+    "EventName": "PM_VSU0_SCALAR_DP_ISSUED",
+    "BriefDescription": "Double Precision scalar instruction issued on Pipe0,",
+    "PublicDescription": "Double Precision scalar instruction issued on Pipe0,"
+  },
+  {
+    "EventCode": "0xb094",
+    "EventName": "PM_VSU0_SIMPLE_ISSUED",
+    "BriefDescription": "Simple VMX instruction issued,",
+    "PublicDescription": "Simple VMX instruction issued,"
+  },
+  {
+    "EventCode": "0xa0a8",
+    "EventName": "PM_VSU0_SINGLE",
+    "BriefDescription": "FPU single precision,",
+    "PublicDescription": "FPU single precision,"
+  },
+  {
+    "EventCode": "0xb09c",
+    "EventName": "PM_VSU0_SQ",
+    "BriefDescription": "Store Vector Issued,",
+    "PublicDescription": "Store Vector Issued,"
+  },
+  {
+    "EventCode": "0xb08c",
+    "EventName": "PM_VSU0_STF",
+    "BriefDescription": "FPU store (SP or DP) issued on Pipe0,",
+    "PublicDescription": "FPU store (SP or DP) issued on Pipe0,"
+  },
+  {
+    "EventCode": "0xb080",
+    "EventName": "PM_VSU0_VECTOR_DP_ISSUED",
+    "BriefDescription": "Double Precision vector instruction issued on Pipe0,",
+    "PublicDescription": "Double Precision vector instruction issued on Pipe0,"
+  },
+  {
+    "EventCode": "0xb084",
+    "EventName": "PM_VSU0_VECTOR_SP_ISSUED",
+    "BriefDescription": "Single Precision vector instruction issued (executed),",
+    "PublicDescription": "Single Precision vector instruction issued (executed),"
+  },
+  {
+    "EventCode": "0xa0a6",
+    "EventName": "PM_VSU1_16FLOP",
+    "BriefDescription": "Sixteen flops operation (SP vector versions of fdiv,fsqrt),",
+    "PublicDescription": "Sixteen flops operation (SP vector versions of fdiv,fsqrt),"
+  },
+  {
+    "EventCode": "0xa082",
+    "EventName": "PM_VSU1_1FLOP",
+    "BriefDescription": "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished,",
+    "PublicDescription": "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished,"
+  },
+  {
+    "EventCode": "0xa09a",
+    "EventName": "PM_VSU1_2FLOP",
+    "BriefDescription": "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions),",
+    "PublicDescription": "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions),"
+  },
+  {
+    "EventCode": "0xa09e",
+    "EventName": "PM_VSU1_4FLOP",
+    "BriefDescription": "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions),",
+    "PublicDescription": "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions),"
+  },
+  {
+    "EventCode": "0xa0a2",
+    "EventName": "PM_VSU1_8FLOP",
+    "BriefDescription": "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub),",
+    "PublicDescription": "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub),"
+  },
+  {
+    "EventCode": "0xb0a6",
+    "EventName": "PM_VSU1_COMPLEX_ISSUED",
+    "BriefDescription": "Complex VMX instruction issued,",
+    "PublicDescription": "Complex VMX instruction issued,"
+  },
+  {
+    "EventCode": "0xb0b6",
+    "EventName": "PM_VSU1_CY_ISSUED",
+    "BriefDescription": "Cryptographic instruction RFC02196 Issued,",
+    "PublicDescription": "Cryptographic instruction RFC02196 Issued,"
+  },
+  {
+    "EventCode": "0xb0aa",
+    "EventName": "PM_VSU1_DD_ISSUED",
+    "BriefDescription": "64BIT Decimal Issued,",
+    "PublicDescription": "64BIT Decimal Issued,"
+  },
+  {
+    "EventCode": "0xa08e",
+    "EventName": "PM_VSU1_DP_2FLOP",
+    "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg,",
+    "PublicDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg,"
+  },
+  {
+    "EventCode": "0xa092",
+    "EventName": "PM_VSU1_DP_FMA",
+    "BriefDescription": "DP vector version of fmadd,fnmadd,fmsub,fnmsub,",
+    "PublicDescription": "DP vector version of fmadd,fnmadd,fmsub,fnmsub,"
+  },
+  {
+    "EventCode": "0xa096",
+    "EventName": "PM_VSU1_DP_FSQRT_FDIV",
+    "BriefDescription": "DP vector versions of fdiv,fsqrt,",
+    "PublicDescription": "DP vector versions of fdiv,fsqrt,"
+  },
+  {
+    "EventCode": "0xb0ae",
+    "EventName": "PM_VSU1_DQ_ISSUED",
+    "BriefDescription": "128BIT Decimal Issued,",
+    "PublicDescription": "128BIT Decimal Issued,"
+  },
+  {
+    "EventCode": "0xb0b2",
+    "EventName": "PM_VSU1_EX_ISSUED",
+    "BriefDescription": "Direct move 32/64b VRFtoGPR RFC02206 Issued,",
+    "PublicDescription": "Direct move 32/64b VRFtoGPR RFC02206 Issued,"
+  },
+  {
+    "EventCode": "0xa0be",
+    "EventName": "PM_VSU1_FIN",
+    "BriefDescription": "VSU1 Finished an instruction,",
+    "PublicDescription": "VSU1 Finished an instruction,"
+  },
+  {
+    "EventCode": "0xa086",
+    "EventName": "PM_VSU1_FMA",
+    "BriefDescription": "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!,",
+    "PublicDescription": "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!,"
+  },
+  {
+    "EventCode": "0xb09a",
+    "EventName": "PM_VSU1_FPSCR",
+    "BriefDescription": "Move to/from FPSCR type instruction issued on Pipe 0,",
+    "PublicDescription": "Move to/from FPSCR type instruction issued on Pipe 0,"
+  },
+  {
+    "EventCode": "0xa08a",
+    "EventName": "PM_VSU1_FSQRT_FDIV",
+    "BriefDescription": "four flops operation (fdiv,fsqrt) Scalar Instructions only!,",
+    "PublicDescription": "four flops operation (fdiv,fsqrt) Scalar Instructions only!,"
+  },
+  {
+    "EventCode": "0xb092",
+    "EventName": "PM_VSU1_PERMUTE_ISSUED",
+    "BriefDescription": "Permute VMX Instruction Issued,",
+    "PublicDescription": "Permute VMX Instruction Issued,"
+  },
+  {
+    "EventCode": "0xb08a",
+    "EventName": "PM_VSU1_SCALAR_DP_ISSUED",
+    "BriefDescription": "Double Precision scalar instruction issued on Pipe1,",
+    "PublicDescription": "Double Precision scalar instruction issued on Pipe1,"
+  },
+  {
+    "EventCode": "0xb096",
+    "EventName": "PM_VSU1_SIMPLE_ISSUED",
+    "BriefDescription": "Simple VMX instruction issued,",
+    "PublicDescription": "Simple VMX instruction issued,"
+  },
+  {
+    "EventCode": "0xa0aa",
+    "EventName": "PM_VSU1_SINGLE",
+    "BriefDescription": "FPU single precision,",
+    "PublicDescription": "FPU single precision,"
+  },
+  {
+    "EventCode": "0xb09e",
+    "EventName": "PM_VSU1_SQ",
+    "BriefDescription": "Store Vector Issued,",
+    "PublicDescription": "Store Vector Issued,"
+  },
+  {
+    "EventCode": "0xb08e",
+    "EventName": "PM_VSU1_STF",
+    "BriefDescription": "FPU store (SP or DP) issued on Pipe1,",
+    "PublicDescription": "FPU store (SP or DP) issued on Pipe1,"
+  },
+  {
+    "EventCode": "0xb082",
+    "EventName": "PM_VSU1_VECTOR_DP_ISSUED",
+    "BriefDescription": "Double Precision vector instruction issued on Pipe1,",
+    "PublicDescription": "Double Precision vector instruction issued on Pipe1,"
+  },
+  {
+    "EventCode": "0xb086",
+    "EventName": "PM_VSU1_VECTOR_SP_ISSUED",
+    "BriefDescription": "Single Precision vector instruction issued (executed),",
+    "PublicDescription": "Single Precision vector instruction issued (executed),"
+  }
+]
diff --git a/tools/perf/pmu-events/arch/powerpc/mapfile.csv b/tools/perf/pmu-events/arch/powerpc/mapfile.csv
new file mode 100644
index 0000000..579c622
--- /dev/null
+++ b/tools/perf/pmu-events/arch/powerpc/mapfile.csv
@@ -0,0 +1 @@
+IBM-Power8-9188,004d0100,004d0100-core.json,core
diff --git a/tools/perf/pmu-events/arch/powerpc/power8.json b/tools/perf/pmu-events/arch/powerpc/power8.json
new file mode 100644
index 0000000..1511138
--- /dev/null
+++ b/tools/perf/pmu-events/arch/powerpc/power8.json
@@ -0,0 +1,5766 @@
+[
+  {
+    "EventCode": "0x1f05e",
+    "EventName": "PM_1LPAR_CYC",
+    "PEBS" : "1",
+    "Umask": "0x01",
+    "MSRIndex": "0",
+    "MSRValue": "1",
+    "BriefDescription": "Number of cycles in single lpar mode. All threads in the core are assigned to the same lpar (Precise Event),",
+    "PublicDescription": "Number of cycles in single lpar mode. (Precise Event),"
+  },
+  {
+    "EventCode": "0x100f2",
+    "EventName": "PM_1PLUS_PPC_CMPL",
+    "BriefDescription": "1 or more ppc insts finished,",
+    "PublicDescription": "1 or more ppc insts finished (completed).,"
+  },
+  {
+    "EventCode": "0x400f2",
+    "EventName": "PM_1PLUS_PPC_DISP",
+    "BriefDescription": "Cycles at least one Instr Dispatched,",
+    "PublicDescription": "Cycles at least one Instr Dispatched. Could be a group with only microcode. Issue HW016521,"
+  },
+  {
+    "EventCode": "0x2006e",
+    "EventName": "PM_2LPAR_CYC",
+    "BriefDescription": "Cycles in 2-lpar mode. Threads 0-3 belong to Lpar0 and threads 4-7 belong to Lpar1,",
+    "PublicDescription": "Number of cycles in 2 lpar mode.,"
+  },
+  {
+    "EventCode": "0x4e05e",
+    "EventName": "PM_4LPAR_CYC",
+    "BriefDescription": "Number of cycles in 4 LPAR mode. Threads 0-1 belong to lpar0, threads 2-3 belong to lpar1, threads 4-5 belong to lpar2, and threads 6-7 belong to lpar3,",
+    "PublicDescription": "Number of cycles in 4 LPAR mode.,"
+  },
+  {
+    "EventCode": "0x610050",
+    "EventName": "PM_ALL_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types ( demand load,data,inst prefetch,inst fetch,xlate (I or d),"
+  },
+  {
+    "EventCode": "0x520050",
+    "EventName": "PM_ALL_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x620052",
+    "EventName": "PM_ALL_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x610052",
+    "EventName": "PM_ALL_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x610054",
+    "EventName": "PM_ALL_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x640052",
+    "EventName": "PM_ALL_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x630050",
+    "EventName": "PM_ALL_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x630052",
+    "EventName": "PM_ALL_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x640050",
+    "EventName": "PM_ALL_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for all data types (demand load,data prefetch,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),"
+  },
+  {
+    "EventCode": "0x100fa",
+    "EventName": "PM_ANY_THRD_RUN_CYC",
+    "BriefDescription": "One of threads in run_cycles,",
+    "PublicDescription": "Any thread in run_cycles (was one thread in run_cycles).,"
+  },
+  {
+    "EventCode": "0x2505e",
+    "EventName": "PM_BACK_BR_CMPL",
+    "BriefDescription": "Branch instruction completed with a target address less than current instruction address,",
+    "PublicDescription": "Branch instruction completed with a target address less than current instruction address.,"
+  },
+  {
+    "EventCode": "0x4082",
+    "EventName": "PM_BANK_CONFLICT",
+    "BriefDescription": "Read blocked due to interleave conflict. The ifar logic will detect an interleave conflict and kill the data that was read that cycle.,",
+    "PublicDescription": "Read blocked due to interleave conflict. The ifar logic will detect an interleave conflict and kill the data that was read that cycle.,"
+  },
+  {
+    "EventCode": "0x10068",
+    "EventName": "PM_BRU_FIN",
+    "BriefDescription": "Branch Instruction Finished,",
+    "PublicDescription": "Branch Instruction Finished .,"
+  },
+  {
+    "EventCode": "0x20036",
+    "EventName": "PM_BR_2PATH",
+    "BriefDescription": "two path branch,",
+    "PublicDescription": "two path branch.,"
+  },
+  {
+    "EventCode": "0x5086",
+    "EventName": "PM_BR_BC_8",
+    "BriefDescription": "Pairable BC+8 branch that has not been converted to a Resolve Finished in the BRU pipeline,",
+    "PublicDescription": "Pairable BC+8 branch that has not been converted to a Resolve Finished in the BRU pipeline,"
+  },
+  {
+    "EventCode": "0x5084",
+    "EventName": "PM_BR_BC_8_CONV",
+    "BriefDescription": "Pairable BC+8 branch that was converted to a Resolve Finished in the BRU pipeline.,",
+    "PublicDescription": "Pairable BC+8 branch that was converted to a Resolve Finished in the BRU pipeline.,"
+  },
+  {
+    "EventCode": "0x40060",
+    "EventName": "PM_BR_CMPL",
+    "BriefDescription": "Branch Instruction completed,",
+    "PublicDescription": "Branch Instruction completed.,"
+  },
+  {
+    "EventCode": "0x40ac",
+    "EventName": "PM_BR_MPRED_CCACHE",
+    "BriefDescription": "Conditional Branch Completed that was Mispredicted due to the Count Cache Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed that was Mispredicted due to the Count Cache Target Prediction,"
+  },
+  {
+    "EventCode": "0x400f6",
+    "EventName": "PM_BR_MPRED_CMPL",
+    "BriefDescription": "Number of Branch Mispredicts,",
+    "PublicDescription": "Number of Branch Mispredicts.,"
+  },
+  {
+    "EventCode": "0x40b8",
+    "EventName": "PM_BR_MPRED_CR",
+    "BriefDescription": "Conditional Branch Completed that was Mispredicted due to the BHT Direction Prediction (taken/not taken).,",
+    "PublicDescription": "Conditional Branch Completed that was Mispredicted due to the BHT Direction Prediction (taken/not taken).,"
+  },
+  {
+    "EventCode": "0x40ae",
+    "EventName": "PM_BR_MPRED_LSTACK",
+    "BriefDescription": "Conditional Branch Completed that was Mispredicted due to the Link Stack Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed that was Mispredicted due to the Link Stack Target Prediction,"
+  },
+  {
+    "EventCode": "0x40ba",
+    "EventName": "PM_BR_MPRED_TA",
+    "BriefDescription": "Conditional Branch Completed that was Mispredicted due to the Target Address Prediction from the Count Cache or Link Stack. Only XL-form branches that resolved Taken set this event.,",
+    "PublicDescription": "Conditional Branch Completed that was Mispredicted due to the Target Address Prediction from the Count Cache or Link Stack. Only XL-form branches that resolved Taken set this event.,"
+  },
+  {
+    "EventCode": "0x10138",
+    "EventName": "PM_BR_MRK_2PATH",
+    "BriefDescription": "marked two path branch,",
+    "PublicDescription": "marked two path branch.,"
+  },
+  {
+    "EventCode": "0x409c",
+    "EventName": "PM_BR_PRED_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 (1st branch in group) in which the HW predicted the Direction or Target,",
+    "PublicDescription": "Conditional Branch Completed on BR0 (1st branch in group) in which the HW predicted the Direction or Target,"
+  },
+  {
+    "EventCode": "0x409e",
+    "EventName": "PM_BR_PRED_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 (2nd branch in group) in which the HW predicted the Direction or Target. Note: BR1 can only be used in Single Thread Mode. In all of the SMT modes, only one branch can complete, thus BR1 is unused.,",
+    "PublicDescription": "Conditional Branch Completed on BR1 (2nd branch in group) in which the HW predicted the Direction or Target. Note: BR1 can only be used in Single Thread Mode. In all of the SMT modes, only one branch can complete, thus BR1 is unused.,"
+  },
+  {
+    "EventCode": "0x489c",
+    "EventName": "PM_BR_PRED_BR_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) OR if_pc_br0_br_pred(1).,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x40a4",
+    "EventName": "PM_BR_PRED_CCACHE_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 that used the Count Cache for Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed on BR0 that used the Count Cache for Target Prediction,"
+  },
+  {
+    "EventCode": "0x40a6",
+    "EventName": "PM_BR_PRED_CCACHE_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 that used the Count Cache for Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed on BR1 that used the Count Cache for Target Prediction,"
+  },
+  {
+    "EventCode": "0x48a4",
+    "EventName": "PM_BR_PRED_CCACHE_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) AND if_pc_br0_pred_type.,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x40b0",
+    "EventName": "PM_BR_PRED_CR_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and branches,",
+    "PublicDescription": "Conditional Branch Completed on BR0 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and bra,"
+  },
+  {
+    "EventCode": "0x40b2",
+    "EventName": "PM_BR_PRED_CR_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and branches,",
+    "PublicDescription": "Conditional Branch Completed on BR1 that had its direction predicted. I-form branches do not set this event. In addition, B-form branches which do not use the BHT do not set this event - these are branches with BO-field set to 'always taken' and bra,"
+  },
+  {
+    "EventCode": "0x48b0",
+    "EventName": "PM_BR_PRED_CR_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(1)='1'.,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x40a8",
+    "EventName": "PM_BR_PRED_LSTACK_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 that used the Link Stack for Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed on BR0 that used the Link Stack for Target Prediction,"
+  },
+  {
+    "EventCode": "0x40aa",
+    "EventName": "PM_BR_PRED_LSTACK_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 that used the Link Stack for Target Prediction,",
+    "PublicDescription": "Conditional Branch Completed on BR1 that used the Link Stack for Target Prediction,"
+  },
+  {
+    "EventCode": "0x48a8",
+    "EventName": "PM_BR_PRED_LSTACK_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0) AND (not if_pc_br0_pred_type).,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x40b4",
+    "EventName": "PM_BR_PRED_TA_BR0",
+    "BriefDescription": "Conditional Branch Completed on BR0 that had its target address predicted. Only XL-form branches set this event.,",
+    "PublicDescription": "Conditional Branch Completed on BR0 that had its target address predicted. Only XL-form branches set this event.,"
+  },
+  {
+    "EventCode": "0x40b6",
+    "EventName": "PM_BR_PRED_TA_BR1",
+    "BriefDescription": "Conditional Branch Completed on BR1 that had its target address predicted. Only XL-form branches set this event.,",
+    "PublicDescription": "Conditional Branch Completed on BR1 that had its target address predicted. Only XL-form branches set this event.,"
+  },
+  {
+    "EventCode": "0x48b4",
+    "EventName": "PM_BR_PRED_TA_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred(0)='1'.,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x200fa",
+    "EventName": "PM_BR_TAKEN_CMPL",
+    "BriefDescription": "New event for Branch Taken,",
+    "PublicDescription": "Branch Taken.,"
+  },
+  {
+    "EventCode": "0x40a0",
+    "EventName": "PM_BR_UNCOND_BR0",
+    "BriefDescription": "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.,",
+    "PublicDescription": "Unconditional Branch Completed on BR0. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.,"
+  },
+  {
+    "EventCode": "0x40a2",
+    "EventName": "PM_BR_UNCOND_BR1",
+    "BriefDescription": "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.,",
+    "PublicDescription": "Unconditional Branch Completed on BR1. HW branch prediction was not used for this branch. This can be an I-form branch, a B-form branch with BO-field set to branch always, or a B-form branch which was coverted to a Resolve.,"
+  },
+  {
+    "EventCode": "0x48a0",
+    "EventName": "PM_BR_UNCOND_CMPL",
+    "BriefDescription": "Completion Time Event. This event can also be calculated from the direct bus as follows: if_pc_br0_br_pred=00 AND if_pc_br0_completed.,",
+    "PublicDescription": "IFU,"
+  },
+  {
+    "EventCode": "0x3094",
+    "EventName": "PM_CASTOUT_ISSUED",
+    "BriefDescription": "Castouts issued,",
+    "PublicDescription": "Castouts issued,"
+  },
+  {
+    "EventCode": "0x3096",
+    "EventName": "PM_CASTOUT_ISSUED_GPR",
+    "BriefDescription": "Castouts issued GPR,",
+    "PublicDescription": "Castouts issued GPR,"
+  },
+  {
+    "EventCode": "0x10050",
+    "EventName": "PM_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for all data types ( demand load,data,inst prefetch,inst fetch,xlate (I or d).,"
+  },
+  {
+    "EventCode": "0x2090",
+    "EventName": "PM_CLB_HELD",
+    "BriefDescription": "CLB Hold: Any Reason,",
+    "PublicDescription": "CLB Hold: Any Reason,"
+  },
+  {
+    "EventCode": "0x4000a",
+    "EventName": "PM_CMPLU_STALL",
+    "BriefDescription": "Completion stall,",
+    "PublicDescription": "Completion stall.,"
+  },
+  {
+    "EventCode": "0x4d018",
+    "EventName": "PM_CMPLU_STALL_BRU",
+    "BriefDescription": "Completion stall due to a Branch Unit,",
+    "PublicDescription": "Completion stall due to a Branch Unit.,"
+  },
+  {
+    "EventCode": "0x2d018",
+    "EventName": "PM_CMPLU_STALL_BRU_CRU",
+    "BriefDescription": "Completion stall due to IFU,",
+    "PublicDescription": "Completion stall due to IFU.,"
+  },
+  {
+    "EventCode": "0x30026",
+    "EventName": "PM_CMPLU_STALL_COQ_FULL",
+    "BriefDescription": "Completion stall due to CO q full,",
+    "PublicDescription": "Completion stall due to CO q full.,"
+  },
+  {
+    "EventCode": "0x2c012",
+    "EventName": "PM_CMPLU_STALL_DCACHE_MISS",
+    "BriefDescription": "Completion stall by Dcache miss,",
+    "PublicDescription": "Completion stall by Dcache miss.,"
+  },
+  {
+    "EventCode": "0x2c018",
+    "EventName": "PM_CMPLU_STALL_DMISS_L21_L31",
+    "BriefDescription": "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3),",
+    "PublicDescription": "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3).,"
+  },
+  {
+    "EventCode": "0x2c016",
+    "EventName": "PM_CMPLU_STALL_DMISS_L2L3",
+    "BriefDescription": "Completion stall by Dcache miss which resolved in L2/L3,",
+    "PublicDescription": "Completion stall by Dcache miss which resolved in L2/L3.,"
+  },
+  {
+    "EventCode": "0x4c016",
+    "EventName": "PM_CMPLU_STALL_DMISS_L2L3_CONFLICT",
+    "BriefDescription": "Completion stall due to cache miss that resolves in the L2 or L3 with a conflict,",
+    "PublicDescription": "Completion stall due to cache miss resolving in core's L2/L3 with a conflict.,"
+  },
+  {
+    "EventCode": "0x4c01a",
+    "EventName": "PM_CMPLU_STALL_DMISS_L3MISS",
+    "BriefDescription": "Completion stall due to cache miss resolving missed the L3,",
+    "PublicDescription": "Completion stall due to cache miss resolving missed the L3.,"
+  },
+  {
+    "EventCode": "0x4c018",
+    "EventName": "PM_CMPLU_STALL_DMISS_LMEM",
+    "BriefDescription": "Completion stall due to cache miss that resolves in local memory,",
+    "PublicDescription": "Completion stall due to cache miss resolving in core's Local Memory.,"
+  },
+  {
+    "EventCode": "0x2c01c",
+    "EventName": "PM_CMPLU_STALL_DMISS_REMOTE",
+    "BriefDescription": "Completion stall by Dcache miss which resolved from remote chip (cache or memory),",
+    "PublicDescription": "Completion stall by Dcache miss which resolved on chip ( excluding local L2/L3).,"
+  },
+  {
+    "EventCode": "0x4c012",
+    "EventName": "PM_CMPLU_STALL_ERAT_MISS",
+    "BriefDescription": "Completion stall due to LSU reject ERAT miss,",
+    "PublicDescription": "Completion stall due to LSU reject ERAT miss.,"
+  },
+  {
+    "EventCode": "0x30038",
+    "EventName": "PM_CMPLU_STALL_FLUSH",
+    "BriefDescription": "completion stall due to flush by own thread,",
+    "PublicDescription": "completion stall due to flush by own thread.,"
+  },
+  {
+    "EventCode": "0x4d016",
+    "EventName": "PM_CMPLU_STALL_FXLONG",
+    "BriefDescription": "Completion stall due to a long latency fixed point instruction,",
+    "PublicDescription": "Completion stall due to a long latency fixed point instruction.,"
+  },
+  {
+    "EventCode": "0x2d016",
+    "EventName": "PM_CMPLU_STALL_FXU",
+    "BriefDescription": "Completion stall due to FXU,",
+    "PublicDescription": "Completion stall due to FXU.,"
+  },
+  {
+    "EventCode": "0x30036",
+    "EventName": "PM_CMPLU_STALL_HWSYNC",
+    "BriefDescription": "completion stall due to hwsync,",
+    "PublicDescription": "completion stall due to hwsync.,"
+  },
+  {
+    "EventCode": "0x4d014",
+    "EventName": "PM_CMPLU_STALL_LOAD_FINISH",
+    "BriefDescription": "Completion stall due to a Load finish,",
+    "PublicDescription": "Completion stall due to a Load finish.,"
+  },
+  {
+    "EventCode": "0x2c010",
+    "EventName": "PM_CMPLU_STALL_LSU",
+    "BriefDescription": "Completion stall by LSU instruction,",
+    "PublicDescription": "Completion stall by LSU instruction.,"
+  },
+  {
+    "EventCode": "0x10036",
+    "EventName": "PM_CMPLU_STALL_LWSYNC",
+    "BriefDescription": "completion stall due to isync/lwsync,",
+    "PublicDescription": "completion stall due to isync/lwsync.,"
+  },
+  {
+    "EventCode": "0x30028",
+    "EventName": "PM_CMPLU_STALL_MEM_ECC_DELAY",
+    "BriefDescription": "Completion stall due to mem ECC delay,",
+    "PublicDescription": "Completion stall due to mem ECC delay.,"
+  },
+  {
+    "EventCode": "0x2e01c",
+    "EventName": "PM_CMPLU_STALL_NO_NTF",
+    "BriefDescription": "Completion stall due to nop,",
+    "PublicDescription": "Completion stall due to nop.,"
+  },
+  {
+    "EventCode": "0x2e01e",
+    "EventName": "PM_CMPLU_STALL_NTCG_FLUSH",
+    "BriefDescription": "Completion stall due to ntcg flush,",
+    "PublicDescription": "Completion stall due to reject (load hit store).,"
+  },
+  {
+    "EventCode": "0x30006",
+    "EventName": "PM_CMPLU_STALL_OTHER_CMPL",
+    "BriefDescription": "Instructions core completed while this tread was stalled,",
+    "PublicDescription": "Instructions core completed while this thread was stalled.,"
+  },
+  {
+    "EventCode": "0x4c010",
+    "EventName": "PM_CMPLU_STALL_REJECT",
+    "BriefDescription": "Completion stall due to LSU reject,",
+    "PublicDescription": "Completion stall due to LSU reject.,"
+  },
+  {
+    "EventCode": "0x2c01a",
+    "EventName": "PM_CMPLU_STALL_REJECT_LHS",
+    "BriefDescription": "Completion stall due to reject (load hit store),",
+    "PublicDescription": "Completion stall due to reject (load hit store).,"
+  },
+  {
+    "EventCode": "0x4c014",
+    "EventName": "PM_CMPLU_STALL_REJ_LMQ_FULL",
+    "BriefDescription": "Completion stall due to LSU reject LMQ full,",
+    "PublicDescription": "Completion stall due to LSU reject LMQ full.,"
+  },
+  {
+    "EventCode": "0x4d010",
+    "EventName": "PM_CMPLU_STALL_SCALAR",
+    "BriefDescription": "Completion stall due to VSU scalar instruction,",
+    "PublicDescription": "Completion stall due to VSU scalar instruction.,"
+  },
+  {
+    "EventCode": "0x2d010",
+    "EventName": "PM_CMPLU_STALL_SCALAR_LONG",
+    "BriefDescription": "Completion stall due to VSU scalar long latency instruction,",
+    "PublicDescription": "Completion stall due to VSU scalar long latency instruction.,"
+  },
+  {
+    "EventCode": "0x2c014",
+    "EventName": "PM_CMPLU_STALL_STORE",
+    "BriefDescription": "Completion stall by stores this includes store agen finishes in pipe LS0/LS1 and store data finishes in LS2/LS3,",
+    "PublicDescription": "Completion stall by stores.,"
+  },
+  {
+    "EventCode": "0x4c01c",
+    "EventName": "PM_CMPLU_STALL_ST_FWD",
+    "BriefDescription": "Completion stall due to store forward,",
+    "PublicDescription": "Completion stall due to store forward.,"
+  },
+  {
+    "EventCode": "0x1001c",
+    "EventName": "PM_CMPLU_STALL_THRD",
+    "BriefDescription": "Completion Stalled due to thread conflict. Group ready to complete but it was another thread's turn,",
+    "PublicDescription": "Completion stall due to thread conflict.,"
+  },
+  {
+    "EventCode": "0x2d014",
+    "EventName": "PM_CMPLU_STALL_VECTOR",
+    "BriefDescription": "Completion stall due to VSU vector instruction,",
+    "PublicDescription": "Completion stall due to VSU vector instruction.,"
+  },
+  {
+    "EventCode": "0x4d012",
+    "EventName": "PM_CMPLU_STALL_VECTOR_LONG",
+    "BriefDescription": "Completion stall due to VSU vector long instruction,",
+    "PublicDescription": "Completion stall due to VSU vector long instruction.,"
+  },
+  {
+    "EventCode": "0x2d012",
+    "EventName": "PM_CMPLU_STALL_VSU",
+    "BriefDescription": "Completion stall due to VSU instruction,",
+    "PublicDescription": "Completion stall due to VSU instruction.,"
+  },
+  {
+    "EventCode": "0x16083",
+    "EventName": "PM_CO0_ALLOC",
+    "BriefDescription": "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x16082",
+    "EventName": "PM_CO0_BUSY",
+    "BriefDescription": "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "CO mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),"
+  },
+  {
+    "EventCode": "0x3608a",
+    "EventName": "PM_CO_USAGE",
+    "BriefDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 CO machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,",
+    "PublicDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 CO machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,"
+  },
+  {
+    "EventCode": "0x40066",
+    "EventName": "PM_CRU_FIN",
+    "BriefDescription": "IFU Finished a (non-branch) instruction,",
+    "PublicDescription": "IFU Finished a (non-branch) instruction.,"
+  },
+  {
+    "EventCode": "0x1e",
+    "EventName": "PM_CYC",
+    "BriefDescription": "Cycles,",
+    "PublicDescription": "Cycles .,"
+  },
+  {
+    "EventCode": "0x61c050",
+    "EventName": "PM_DATA_ALL_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for either demand loads or data prefetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for a demand load,"
+  },
+  {
+    "EventCode": "0x64c048",
+    "EventName": "PM_DATA_ALL_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c048",
+    "EventName": "PM_DATA_ALL_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c04c",
+    "EventName": "PM_DATA_ALL_FROM_DL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c04c",
+    "EventName": "PM_DATA_ALL_FROM_DMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c042",
+    "EventName": "PM_DATA_ALL_FROM_L2",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c046",
+    "EventName": "PM_DATA_ALL_FROM_L21_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c046",
+    "EventName": "PM_DATA_ALL_FROM_L21_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c04e",
+    "EventName": "PM_DATA_ALL_FROM_L2MISS_MOD",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c040",
+    "EventName": "PM_DATA_ALL_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c040",
+    "EventName": "PM_DATA_ALL_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c040",
+    "EventName": "PM_DATA_ALL_FROM_L2_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c040",
+    "EventName": "PM_DATA_ALL_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c042",
+    "EventName": "PM_DATA_ALL_FROM_L3",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c044",
+    "EventName": "PM_DATA_ALL_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c044",
+    "EventName": "PM_DATA_ALL_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c044",
+    "EventName": "PM_DATA_ALL_FROM_L31_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c046",
+    "EventName": "PM_DATA_ALL_FROM_L31_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c04e",
+    "EventName": "PM_DATA_ALL_FROM_L3MISS_MOD",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c042",
+    "EventName": "PM_DATA_ALL_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c042",
+    "EventName": "PM_DATA_ALL_FROM_L3_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c044",
+    "EventName": "PM_DATA_ALL_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c04c",
+    "EventName": "PM_DATA_ALL_FROM_LL4",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c048",
+    "EventName": "PM_DATA_ALL_FROM_LMEM",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's Memory due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's Memory due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c04c",
+    "EventName": "PM_DATA_ALL_FROM_MEMORY",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x64c04a",
+    "EventName": "PM_DATA_ALL_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c048",
+    "EventName": "PM_DATA_ALL_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c046",
+    "EventName": "PM_DATA_ALL_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x61c04a",
+    "EventName": "PM_DATA_ALL_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c04a",
+    "EventName": "PM_DATA_ALL_FROM_RL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x63c04a",
+    "EventName": "PM_DATA_ALL_FROM_RMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either demand loads or data prefetch,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1,"
+  },
+  {
+    "EventCode": "0x62c050",
+    "EventName": "PM_DATA_ALL_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was group pump (prediction=correct) for either demand loads or data prefetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for a demand load,"
+  },
+  {
+    "EventCode": "0x62c052",
+    "EventName": "PM_DATA_ALL_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for either demand loads or data prefetch,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x61c052",
+    "EventName": "PM_DATA_ALL_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for either demand loads or data prefetch,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor a demand load,"
+  },
+  {
+    "EventCode": "0x61c054",
+    "EventName": "PM_DATA_ALL_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for either demand loads or data prefetch,",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumps for a demand load,"
+  },
+  {
+    "EventCode": "0x64c052",
+    "EventName": "PM_DATA_ALL_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for either demand loads or data prefetch,",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor a demand load,"
+  },
+  {
+    "EventCode": "0x63c050",
+    "EventName": "PM_DATA_ALL_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump (prediction=correct) for either demand loads or data prefetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for a demand load,"
+  },
+  {
+    "EventCode": "0x63c052",
+    "EventName": "PM_DATA_ALL_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for either demand loads or data prefetch,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x64c050",
+    "EventName": "PM_DATA_ALL_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for either demand loads or data prefetch,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for a demand load,"
+  },
+  {
+    "EventCode": "0x1c050",
+    "EventName": "PM_DATA_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for a demand load,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for a demand load.,"
+  },
+  {
+    "EventCode": "0x4c048",
+    "EventName": "PM_DATA_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c048",
+    "EventName": "PM_DATA_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c04c",
+    "EventName": "PM_DATA_FROM_DL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c04c",
+    "EventName": "PM_DATA_FROM_DMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c042",
+    "EventName": "PM_DATA_FROM_L2",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c046",
+    "EventName": "PM_DATA_FROM_L21_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c046",
+    "EventName": "PM_DATA_FROM_L21_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x200fe",
+    "EventName": "PM_DATA_FROM_L2MISS",
+    "BriefDescription": "Demand LD - L2 Miss (not L2 hit),",
+    "PublicDescription": "Demand LD - L2 Miss (not L2 hit).,"
+  },
+  {
+    "EventCode": "0x1c04e",
+    "EventName": "PM_DATA_FROM_L2MISS_MOD",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L2 due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L2 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c040",
+    "EventName": "PM_DATA_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c040",
+    "EventName": "PM_DATA_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c040",
+    "EventName": "PM_DATA_FROM_L2_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c040",
+    "EventName": "PM_DATA_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1 .,"
+  },
+  {
+    "EventCode": "0x4c042",
+    "EventName": "PM_DATA_FROM_L3",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c044",
+    "EventName": "PM_DATA_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c044",
+    "EventName": "PM_DATA_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c044",
+    "EventName": "PM_DATA_FROM_L31_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c046",
+    "EventName": "PM_DATA_FROM_L31_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x300fe",
+    "EventName": "PM_DATA_FROM_L3MISS",
+    "BriefDescription": "Demand LD - L3 Miss (not L2 hit and not L3 hit),",
+    "PublicDescription": "Demand LD - L3 Miss (not L2 hit and not L3 hit).,"
+  },
+  {
+    "EventCode": "0x4c04e",
+    "EventName": "PM_DATA_FROM_L3MISS_MOD",
+    "BriefDescription": "The processor's data cache was reloaded from a location other than the local core's L3 due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from a location other than the local core's L3 due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c042",
+    "EventName": "PM_DATA_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c042",
+    "EventName": "PM_DATA_FROM_L3_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c044",
+    "EventName": "PM_DATA_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c04c",
+    "EventName": "PM_DATA_FROM_LL4",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c048",
+    "EventName": "PM_DATA_FROM_LMEM",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's Memory due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's Memory due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x400fe",
+    "EventName": "PM_DATA_FROM_MEM",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a demand load,",
+    "PublicDescription": "Data cache reload from memory (including L4).,"
+  },
+  {
+    "EventCode": "0x2c04c",
+    "EventName": "PM_DATA_FROM_MEMORY",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x4c04a",
+    "EventName": "PM_DATA_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c048",
+    "EventName": "PM_DATA_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c046",
+    "EventName": "PM_DATA_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x1c04a",
+    "EventName": "PM_DATA_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c04a",
+    "EventName": "PM_DATA_FROM_RL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x3c04a",
+    "EventName": "PM_DATA_FROM_RMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a demand load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either only demand loads or demand loads plus prefetches if MMCR1[16] is 1.,"
+  },
+  {
+    "EventCode": "0x2c050",
+    "EventName": "PM_DATA_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was group pump (prediction=correct) for a demand load,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for a demand load.,"
+  },
+  {
+    "EventCode": "0x2c052",
+    "EventName": "PM_DATA_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for a demand load,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x1c052",
+    "EventName": "PM_DATA_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for a demand load,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pump for a demand load.,"
+  },
+  {
+    "EventCode": "0x1c054",
+    "EventName": "PM_DATA_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for a demand load,",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumps for a demand load.,"
+  },
+  {
+    "EventCode": "0x4c052",
+    "EventName": "PM_DATA_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for a demand load,",
+    "PublicDescription": "Pump misprediction. Counts across all types of pumps for a demand load.,"
+  },
+  {
+    "EventCode": "0x3c050",
+    "EventName": "PM_DATA_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump (prediction=correct) for a demand load,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for a demand load.,"
+  },
+  {
+    "EventCode": "0x3c052",
+    "EventName": "PM_DATA_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for a demand load,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x4c050",
+    "EventName": "PM_DATA_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for a demand load,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for a demand load.,"
+  },
+  {
+    "EventCode": "0x3001a",
+    "EventName": "PM_DATA_TABLEWALK_CYC",
+    "BriefDescription": "Tablewalk Cycles (could be 1 or 2 active),",
+    "PublicDescription": "Data Tablewalk Active.,"
+  },
+  {
+    "EventCode": "0xe0bc",
+    "EventName": "PM_DC_COLLISIONS",
+    "BriefDescription": "DATA Cache collisions,",
+    "PublicDescription": "DATA Cache collisions,"
+  },
+  {
+    "EventCode": "0x1e050",
+    "EventName": "PM_DC_PREF_STREAM_ALLOC",
+    "BriefDescription": "Stream marked valid. The stream could have been allocated through the hardware prefetch mechanism or through software. This is combined ls0 and ls1,",
+    "PublicDescription": "Stream marked valid. The stream could have been allocated through the hardware prefetch mechanism or through software. This is combined ls0 and ls1.,"
+  },
+  {
+    "EventCode": "0x2e050",
+    "EventName": "PM_DC_PREF_STREAM_CONF",
+    "BriefDescription": "A demand load referenced a line in an active prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software. Combine up + down,",
+    "PublicDescription": "A demand load referenced a line in an active prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software. Combine up + down.,"
+  },
+  {
+    "EventCode": "0x4e050",
+    "EventName": "PM_DC_PREF_STREAM_FUZZY_CONF",
+    "BriefDescription": "A demand load referenced a line in an active fuzzy prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software. Fuzzy stream confirm (out of order effects, or pf can't keep up),",
+    "PublicDescription": "A demand load referenced a line in an active fuzzy prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software. Fuzzy stream confirm (out of order effects, or pf can't keep up).,"
+  },
+  {
+    "EventCode": "0x3e050",
+    "EventName": "PM_DC_PREF_STREAM_STRIDED_CONF",
+    "BriefDescription": "A demand load referenced a line in an active strided prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software.,",
+    "PublicDescription": "A demand load referenced a line in an active strided prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software.,"
+  },
+  {
+    "EventCode": "0x4c054",
+    "EventName": "PM_DERAT_MISS_16G",
+    "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 16G,",
+    "PublicDescription": "Data ERAT Miss (Data TLB Access) page size 16G.,"
+  },
+  {
+    "EventCode": "0x3c054",
+    "EventName": "PM_DERAT_MISS_16M",
+    "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 16M,",
+    "PublicDescription": "Data ERAT Miss (Data TLB Access) page size 16M.,"
+  },
+  {
+    "EventCode": "0x1c056",
+    "EventName": "PM_DERAT_MISS_4K",
+    "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 4K,",
+    "PublicDescription": "Data ERAT Miss (Data TLB Access) page size 4K.,"
+  },
+  {
+    "EventCode": "0x2c054",
+    "EventName": "PM_DERAT_MISS_64K",
+    "BriefDescription": "Data ERAT Miss (Data TLB Access) page size 64K,",
+    "PublicDescription": "Data ERAT Miss (Data TLB Access) page size 64K.,"
+  },
+  {
+    "EventCode": "0xb0ba",
+    "EventName": "PM_DFU",
+    "BriefDescription": "Finish DFU (all finish),",
+    "PublicDescription": "Finish DFU (all finish),"
+  },
+  {
+    "EventCode": "0xb0be",
+    "EventName": "PM_DFU_DCFFIX",
+    "BriefDescription": "Convert from fixed opcode finish (dcffix,dcffixq),",
+    "PublicDescription": "Convert from fixed opcode finish (dcffix,dcffixq),"
+  },
+  {
+    "EventCode": "0xb0bc",
+    "EventName": "PM_DFU_DENBCD",
+    "BriefDescription": "BCD->DPD opcode finish (denbcd, denbcdq),",
+    "PublicDescription": "BCD->DPD opcode finish (denbcd, denbcdq),"
+  },
+  {
+    "EventCode": "0xb0b8",
+    "EventName": "PM_DFU_MC",
+    "BriefDescription": "Finish DFU multicycle,",
+    "PublicDescription": "Finish DFU multicycle,"
+  },
+  {
+    "EventCode": "0x2092",
+    "EventName": "PM_DISP_CLB_HELD_BAL",
+    "BriefDescription": "Dispatch/CLB Hold: Balance,",
+    "PublicDescription": "Dispatch/CLB Hold: Balance,"
+  },
+  {
+    "EventCode": "0x2094",
+    "EventName": "PM_DISP_CLB_HELD_RES",
+    "BriefDescription": "Dispatch/CLB Hold: Resource,",
+    "PublicDescription": "Dispatch/CLB Hold: Resource,"
+  },
+  {
+    "EventCode": "0x20a8",
+    "EventName": "PM_DISP_CLB_HELD_SB",
+    "BriefDescription": "Dispatch/CLB Hold: Scoreboard,",
+    "PublicDescription": "Dispatch/CLB Hold: Scoreboard,"
+  },
+  {
+    "EventCode": "0x2098",
+    "EventName": "PM_DISP_CLB_HELD_SYNC",
+    "BriefDescription": "Dispatch/CLB Hold: Sync type instruction,",
+    "PublicDescription": "Dispatch/CLB Hold: Sync type instruction,"
+  },
+  {
+    "EventCode": "0x2096",
+    "EventName": "PM_DISP_CLB_HELD_TLBIE",
+    "BriefDescription": "Dispatch Hold: Due to TLBIE,",
+    "PublicDescription": "Dispatch Hold: Due to TLBIE,"
+  },
+  {
+    "EventCode": "0x10006",
+    "EventName": "PM_DISP_HELD",
+    "BriefDescription": "Dispatch Held,",
+    "PublicDescription": "Dispatch Held.,"
+  },
+  {
+    "EventCode": "0x20006",
+    "EventName": "PM_DISP_HELD_IQ_FULL",
+    "BriefDescription": "Dispatch held due to Issue q full,",
+    "PublicDescription": "Dispatch held due to Issue q full.,"
+  },
+  {
+    "EventCode": "0x1002a",
+    "EventName": "PM_DISP_HELD_MAP_FULL",
+    "BriefDescription": "Dispatch for this thread was held because the Mappers were full,",
+    "PublicDescription": "Dispatch held due to Mapper full.,"
+  },
+  {
+    "EventCode": "0x30018",
+    "EventName": "PM_DISP_HELD_SRQ_FULL",
+    "BriefDescription": "Dispatch held due to SRQ no room,",
+    "PublicDescription": "Dispatch held due to SRQ no room.,"
+  },
+  {
+    "EventCode": "0x4003c",
+    "EventName": "PM_DISP_HELD_SYNC_HOLD",
+    "BriefDescription": "Dispatch held due to SYNC hold,",
+    "PublicDescription": "Dispatch held due to SYNC hold.,"
+  },
+  {
+    "EventCode": "0x30a6",
+    "EventName": "PM_DISP_HOLD_GCT_FULL",
+    "BriefDescription": "Dispatch Hold Due to no space in the GCT,",
+    "PublicDescription": "Dispatch Hold Due to no space in the GCT,"
+  },
+  {
+    "EventCode": "0x30008",
+    "EventName": "PM_DISP_WT",
+    "BriefDescription": "Dispatched Starved,",
+    "PublicDescription": "Dispatched Starved (not held, nothing to dispatch).,"
+  },
+  {
+    "EventCode": "0x4e048",
+    "EventName": "PM_DPTEG_FROM_DL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e048",
+    "EventName": "PM_DPTEG_FROM_DL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e04c",
+    "EventName": "PM_DPTEG_FROM_DL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e04c",
+    "EventName": "PM_DPTEG_FROM_DMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e042",
+    "EventName": "PM_DPTEG_FROM_L2",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e046",
+    "EventName": "PM_DPTEG_FROM_L21_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e046",
+    "EventName": "PM_DPTEG_FROM_L21_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e04e",
+    "EventName": "PM_DPTEG_FROM_L2MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a location other than the local core's L2 due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a location other than the local core's L2 due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e040",
+    "EventName": "PM_DPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e040",
+    "EventName": "PM_DPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e040",
+    "EventName": "PM_DPTEG_FROM_L2_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e040",
+    "EventName": "PM_DPTEG_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e042",
+    "EventName": "PM_DPTEG_FROM_L3",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e044",
+    "EventName": "PM_DPTEG_FROM_L31_ECO_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e044",
+    "EventName": "PM_DPTEG_FROM_L31_ECO_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e044",
+    "EventName": "PM_DPTEG_FROM_L31_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e046",
+    "EventName": "PM_DPTEG_FROM_L31_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e04e",
+    "EventName": "PM_DPTEG_FROM_L3MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a location other than the local core's L3 due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a location other than the local core's L3 due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e042",
+    "EventName": "PM_DPTEG_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e042",
+    "EventName": "PM_DPTEG_FROM_L3_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e044",
+    "EventName": "PM_DPTEG_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e04c",
+    "EventName": "PM_DPTEG_FROM_LL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e048",
+    "EventName": "PM_DPTEG_FROM_LMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e04c",
+    "EventName": "PM_DPTEG_FROM_MEMORY",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a data side request.,"
+  },
+  {
+    "EventCode": "0x4e04a",
+    "EventName": "PM_DPTEG_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e048",
+    "EventName": "PM_DPTEG_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e046",
+    "EventName": "PM_DPTEG_FROM_RL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x1e04a",
+    "EventName": "PM_DPTEG_FROM_RL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a data side request.,"
+  },
+  {
+    "EventCode": "0x2e04a",
+    "EventName": "PM_DPTEG_FROM_RL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a data side request.,"
+  },
+  {
+    "EventCode": "0x3e04a",
+    "EventName": "PM_DPTEG_FROM_RMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a data side request.,"
+  },
+  {
+    "EventCode": "0xd094",
+    "EventName": "PM_DSLB_MISS",
+    "BriefDescription": "Data SLB Miss - Total of all segment sizes,",
+    "PublicDescription": "Data SLB Miss - Total of all segment sizes. Data SLB misses,"
+  },
+  {
+    "EventCode": "0x300fc",
+    "EventName": "PM_DTLB_MISS",
+    "BriefDescription": "Data PTEG reload,",
+    "PublicDescription": "Data PTEG Reloaded (DTLB Miss).,"
+  },
+  {
+    "EventCode": "0x1c058",
+    "EventName": "PM_DTLB_MISS_16G",
+    "BriefDescription": "Data TLB Miss page size 16G,",
+    "PublicDescription": "Data TLB Miss page size 16G.,"
+  },
+  {
+    "EventCode": "0x4c056",
+    "EventName": "PM_DTLB_MISS_16M",
+    "BriefDescription": "Data TLB Miss page size 16M,",
+    "PublicDescription": "Data TLB Miss page size 16M.,"
+  },
+  {
+    "EventCode": "0x2c056",
+    "EventName": "PM_DTLB_MISS_4K",
+    "BriefDescription": "Data TLB Miss page size 4k,",
+    "PublicDescription": "Data TLB Miss page size 4k.,"
+  },
+  {
+    "EventCode": "0x3c056",
+    "EventName": "PM_DTLB_MISS_64K",
+    "BriefDescription": "Data TLB Miss page size 64K,",
+    "PublicDescription": "Data TLB Miss page size 64K.,"
+  },
+  {
+    "EventCode": "0x50a8",
+    "EventName": "PM_EAT_FORCE_MISPRED",
+    "BriefDescription": "XL-form branch was mispredicted due to the predicted target address missing from EAT. The EAT forces a mispredict in this case since there is no predicted target to validate. This is a rare case that may occur when the EAT is full and a branch is issue,",
+    "PublicDescription": "XL-form branch was mispredicted due to the predicted target address missing from EAT. The EAT forces a mispredict in this case since there is no predicted target to validate. This is a rare case that may occur when the EAT is full and a branch is,"
+  },
+  {
+    "EventCode": "0x4084",
+    "EventName": "PM_EAT_FULL_CYC",
+    "BriefDescription": "Cycles No room in EAT,",
+    "PublicDescription": "Cycles No room in EAT. Set on bank conflict and case where no ibuffers available.,"
+  },
+  {
+    "EventCode": "0x2080",
+    "EventName": "PM_EE_OFF_EXT_INT",
+    "BriefDescription": "Ee off and external interrupt,",
+    "PublicDescription": "Ee off and external interrupt,"
+  },
+  {
+    "EventCode": "0x200f8",
+    "EventName": "PM_EXT_INT",
+    "BriefDescription": "external interrupt,",
+    "PublicDescription": "external interrupt.,"
+  },
+  {
+    "EventCode": "0x20b4",
+    "EventName": "PM_FAV_TBEGIN",
+    "BriefDescription": "Dispatch time Favored tbegin,",
+    "PublicDescription": "Dispatch time Favored tbegin,"
+  },
+  {
+    "EventCode": "0x100f4",
+    "EventName": "PM_FLOP",
+    "BriefDescription": "Floating Point Operation Finished,",
+    "PublicDescription": "Floating Point Operations Finished.,"
+  },
+  {
+    "EventCode": "0xa0ae",
+    "EventName": "PM_FLOP_SUM_SCALAR",
+    "BriefDescription": "flops summary scalar instructions,",
+    "PublicDescription": "flops summary scalar instructions,"
+  },
+  {
+    "EventCode": "0xa0ac",
+    "EventName": "PM_FLOP_SUM_VEC",
+    "BriefDescription": "flops summary vector instructions,",
+    "PublicDescription": "flops summary vector instructions,"
+  },
+  {
+    "EventCode": "0x400f8",
+    "EventName": "PM_FLUSH",
+    "BriefDescription": "Flush (any type),",
+    "PublicDescription": "Flush (any type).,"
+  },
+  {
+    "EventCode": "0x2084",
+    "EventName": "PM_FLUSH_BR_MPRED",
+    "BriefDescription": "Flush caused by branch mispredict,",
+    "PublicDescription": "Flush caused by branch mispredict,"
+  },
+  {
+    "EventCode": "0x30012",
+    "EventName": "PM_FLUSH_COMPLETION",
+    "BriefDescription": "Completion Flush,",
+    "PublicDescription": "Completion Flush.,"
+  },
+  {
+    "EventCode": "0x2082",
+    "EventName": "PM_FLUSH_DISP",
+    "BriefDescription": "Dispatch flush,",
+    "PublicDescription": "Dispatch flush,"
+  },
+  {
+    "EventCode": "0x208c",
+    "EventName": "PM_FLUSH_DISP_SB",
+    "BriefDescription": "Dispatch Flush: Scoreboard,",
+    "PublicDescription": "Dispatch Flush: Scoreboard,"
+  },
+  {
+    "EventCode": "0x2088",
+    "EventName": "PM_FLUSH_DISP_SYNC",
+    "BriefDescription": "Dispatch Flush: Sync,",
+    "PublicDescription": "Dispatch Flush: Sync,"
+  },
+  {
+    "EventCode": "0x208a",
+    "EventName": "PM_FLUSH_DISP_TLBIE",
+    "BriefDescription": "Dispatch Flush: TLBIE,",
+    "PublicDescription": "Dispatch Flush: TLBIE,"
+  },
+  {
+    "EventCode": "0x208e",
+    "EventName": "PM_FLUSH_LSU",
+    "BriefDescription": "Flush initiated by LSU,",
+    "PublicDescription": "Flush initiated by LSU,"
+  },
+  {
+    "EventCode": "0x2086",
+    "EventName": "PM_FLUSH_PARTIAL",
+    "BriefDescription": "Partial flush,",
+    "PublicDescription": "Partial flush,"
+  },
+  {
+    "EventCode": "0xa0b0",
+    "EventName": "PM_FPU0_FCONV",
+    "BriefDescription": "Convert instruction executed,",
+    "PublicDescription": "Convert instruction executed,"
+  },
+  {
+    "EventCode": "0xa0b8",
+    "EventName": "PM_FPU0_FEST",
+    "BriefDescription": "Estimate instruction executed,",
+    "PublicDescription": "Estimate instruction executed,"
+  },
+  {
+    "EventCode": "0xa0b4",
+    "EventName": "PM_FPU0_FRSP",
+    "BriefDescription": "Round to single precision instruction executed,",
+    "PublicDescription": "Round to single precision instruction executed,"
+  },
+  {
+    "EventCode": "0xa0b2",
+    "EventName": "PM_FPU1_FCONV",
+    "BriefDescription": "Convert instruction executed,",
+    "PublicDescription": "Convert instruction executed,"
+  },
+  {
+    "EventCode": "0xa0ba",
+    "EventName": "PM_FPU1_FEST",
+    "BriefDescription": "Estimate instruction executed,",
+    "PublicDescription": "Estimate instruction executed,"
+  },
+  {
+    "EventCode": "0xa0b6",
+    "EventName": "PM_FPU1_FRSP",
+    "BriefDescription": "Round to single precision instruction executed,",
+    "PublicDescription": "Round to single precision instruction executed,"
+  },
+  {
+    "EventCode": "0x3000c",
+    "EventName": "PM_FREQ_DOWN",
+    "BriefDescription": "Power Management: Below Threshold B,",
+    "PublicDescription": "Frequency is being slewed down due to Power Management.,"
+  },
+  {
+    "EventCode": "0x4000c",
+    "EventName": "PM_FREQ_UP",
+    "BriefDescription": "Power Management: Above Threshold A,",
+    "PublicDescription": "Frequency is being slewed up due to Power Management.,"
+  },
+  {
+    "EventCode": "0x50b0",
+    "EventName": "PM_FUSION_TOC_GRP0_1",
+    "BriefDescription": "One pair of instructions fused with TOC in Group0,",
+    "PublicDescription": "One pair of instructions fused with TOC in Group0,"
+  },
+  {
+    "EventCode": "0x50ae",
+    "EventName": "PM_FUSION_TOC_GRP0_2",
+    "BriefDescription": "Two pairs of instructions fused with TOC in Group0,",
+    "PublicDescription": "Two pairs of instructions fused with TOC in Group0,"
+  },
+  {
+    "EventCode": "0x50ac",
+    "EventName": "PM_FUSION_TOC_GRP0_3",
+    "BriefDescription": "Three pairs of instructions fused with TOC in Group0,",
+    "PublicDescription": "Three pairs of instructions fused with TOC in Group0,"
+  },
+  {
+    "EventCode": "0x50b2",
+    "EventName": "PM_FUSION_TOC_GRP1_1",
+    "BriefDescription": "One pair of instructions fused with TOC in Group1,",
+    "PublicDescription": "One pair of instructions fused with TOC in Group1,"
+  },
+  {
+    "EventCode": "0x50b8",
+    "EventName": "PM_FUSION_VSX_GRP0_1",
+    "BriefDescription": "One pair of instructions fused with VSX in Group0,",
+    "PublicDescription": "One pair of instructions fused with VSX in Group0,"
+  },
+  {
+    "EventCode": "0x50b6",
+    "EventName": "PM_FUSION_VSX_GRP0_2",
+    "BriefDescription": "Two pairs of instructions fused with VSX in Group0,",
+    "PublicDescription": "Two pairs of instructions fused with VSX in Group0,"
+  },
+  {
+    "EventCode": "0x50b4",
+    "EventName": "PM_FUSION_VSX_GRP0_3",
+    "BriefDescription": "Three pairs of instructions fused with VSX in Group0,",
+    "PublicDescription": "Three pairs of instructions fused with VSX in Group0,"
+  },
+  {
+    "EventCode": "0x50ba",
+    "EventName": "PM_FUSION_VSX_GRP1_1",
+    "BriefDescription": "One pair of instructions fused with VSX in Group1,",
+    "PublicDescription": "One pair of instructions fused with VSX in Group1,"
+  },
+  {
+    "EventCode": "0x3000e",
+    "EventName": "PM_FXU0_BUSY_FXU1_IDLE",
+    "BriefDescription": "fxu0 busy and fxu1 idle,",
+    "PublicDescription": "fxu0 busy and fxu1 idle.,"
+  },
+  {
+    "EventCode": "0x10004",
+    "EventName": "PM_FXU0_FIN",
+    "BriefDescription": "The fixed point unit Unit 0 finished an instruction. Instructions that finish may not necessarily complete.,",
+    "PublicDescription": "FXU0 Finished.,"
+  },
+  {
+    "EventCode": "0x4000e",
+    "EventName": "PM_FXU1_BUSY_FXU0_IDLE",
+    "BriefDescription": "fxu0 idle and fxu1 busy.,",
+    "PublicDescription": "fxu0 idle and fxu1 busy.,"
+  },
+  {
+    "EventCode": "0x40004",
+    "EventName": "PM_FXU1_FIN",
+    "BriefDescription": "FXU1 Finished,",
+    "PublicDescription": "FXU1 Finished.,"
+  },
+  {
+    "EventCode": "0x2000e",
+    "EventName": "PM_FXU_BUSY",
+    "BriefDescription": "fxu0 busy and fxu1 busy.,",
+    "PublicDescription": "fxu0 busy and fxu1 busy.,"
+  },
+  {
+    "EventCode": "0x1000e",
+    "EventName": "PM_FXU_IDLE",
+    "BriefDescription": "fxu0 idle and fxu1 idle,",
+    "PublicDescription": "fxu0 idle and fxu1 idle.,"
+  },
+  {
+    "EventCode": "0x20008",
+    "EventName": "PM_GCT_EMPTY_CYC",
+    "BriefDescription": "No itags assigned either thread (GCT Empty),",
+    "PublicDescription": "No itags assigned either thread (GCT Empty).,"
+  },
+  {
+    "EventCode": "0x30a4",
+    "EventName": "PM_GCT_MERGE",
+    "BriefDescription": "Group dispatched on a merged GCT empty. GCT entries can be merged only within the same thread,",
+    "PublicDescription": "Group dispatched on a merged GCT empty. GCT entries can be merged only within the same thread,"
+  },
+  {
+    "EventCode": "0x4d01e",
+    "EventName": "PM_GCT_NOSLOT_BR_MPRED",
+    "BriefDescription": "Gct empty for this thread due to branch mispred,",
+    "PublicDescription": "Gct empty for this thread due to branch mispred.,"
+  },
+  {
+    "EventCode": "0x4d01a",
+    "EventName": "PM_GCT_NOSLOT_BR_MPRED_ICMISS",
+    "BriefDescription": "Gct empty for this thread due to Icache Miss and branch mispred,",
+    "PublicDescription": "Gct empty for this thread due to Icache Miss and branch mispred.,"
+  },
+  {
+    "EventCode": "0x100f8",
+    "EventName": "PM_GCT_NOSLOT_CYC",
+    "BriefDescription": "No itags assigned,",
+    "PublicDescription": "Pipeline empty (No itags assigned, no GCT slots used).,"
+  },
+  {
+    "EventCode": "0x2d01e",
+    "EventName": "PM_GCT_NOSLOT_DISP_HELD_ISSQ",
+    "BriefDescription": "Gct empty for this thread due to dispatch hold on this thread due to Issue q full,",
+    "PublicDescription": "Gct empty for this thread due to dispatch hold on this thread due to Issue q full.,"
+  },
+  {
+    "EventCode": "0x4d01c",
+    "EventName": "PM_GCT_NOSLOT_DISP_HELD_MAP",
+    "BriefDescription": "Gct empty for this thread due to dispatch hold on this thread due to Mapper full,",
+    "PublicDescription": "Gct empty for this thread due to dispatch hold on this thread due to Mapper full.,"
+  },
+  {
+    "EventCode": "0x2e010",
+    "EventName": "PM_GCT_NOSLOT_DISP_HELD_OTHER",
+    "BriefDescription": "Gct empty for this thread due to dispatch hold on this thread due to sync,",
+    "PublicDescription": "Gct empty for this thread due to dispatch hold on this thread due to sync.,"
+  },
+  {
+    "EventCode": "0x2d01c",
+    "EventName": "PM_GCT_NOSLOT_DISP_HELD_SRQ",
+    "BriefDescription": "Gct empty for this thread due to dispatch hold on this thread due to SRQ full,",
+    "PublicDescription": "Gct empty for this thread due to dispatch hold on this thread due to SRQ full.,"
+  },
+  {
+    "EventCode": "0x4e010",
+    "EventName": "PM_GCT_NOSLOT_IC_L3MISS",
+    "BriefDescription": "Gct empty for this thread due to icache l3 miss,",
+    "PublicDescription": "Gct empty for this thread due to icache l3 miss.,"
+  },
+  {
+    "EventCode": "0x2d01a",
+    "EventName": "PM_GCT_NOSLOT_IC_MISS",
+    "BriefDescription": "Gct empty for this thread due to Icache Miss,",
+    "PublicDescription": "Gct empty for this thread due to Icache Miss.,"
+  },
+  {
+    "EventCode": "0x20a2",
+    "EventName": "PM_GCT_UTIL_11_14_ENTRIES",
+    "BriefDescription": "GCT Utilization 11-14 entries,",
+    "PublicDescription": "GCT Utilization 11-14 entries,"
+  },
+  {
+    "EventCode": "0x20a4",
+    "EventName": "PM_GCT_UTIL_15_17_ENTRIES",
+    "BriefDescription": "GCT Utilization 15-17 entries,",
+    "PublicDescription": "GCT Utilization 15-17 entries,"
+  },
+  {
+    "EventCode": "0x20a6",
+    "EventName": "PM_GCT_UTIL_18_ENTRIES",
+    "BriefDescription": "GCT Utilization 18+ entries,",
+    "PublicDescription": "GCT Utilization 18+ entries,"
+  },
+  {
+    "EventCode": "0x209c",
+    "EventName": "PM_GCT_UTIL_1_2_ENTRIES",
+    "BriefDescription": "GCT Utilization 1-2 entries,",
+    "PublicDescription": "GCT Utilization 1-2 entries,"
+  },
+  {
+    "EventCode": "0x209e",
+    "EventName": "PM_GCT_UTIL_3_6_ENTRIES",
+    "BriefDescription": "GCT Utilization 3-6 entries,",
+    "PublicDescription": "GCT Utilization 3-6 entries,"
+  },
+  {
+    "EventCode": "0x20a0",
+    "EventName": "PM_GCT_UTIL_7_10_ENTRIES",
+    "BriefDescription": "GCT Utilization 7-10 entries,",
+    "PublicDescription": "GCT Utilization 7-10 entries,"
+  },
+  {
+    "EventCode": "0x1000a",
+    "EventName": "PM_GRP_BR_MPRED_NONSPEC",
+    "BriefDescription": "Group experienced non-speculative branch redirect,",
+    "PublicDescription": "Group experienced Non-speculative br mispredict.,"
+  },
+  {
+    "EventCode": "0x30004",
+    "EventName": "PM_GRP_CMPL",
+    "BriefDescription": "group completed,",
+    "PublicDescription": "group completed.,"
+  },
+  {
+    "EventCode": "0x3000a",
+    "EventName": "PM_GRP_DISP",
+    "BriefDescription": "group dispatch,",
+    "PublicDescription": "dispatch_success (Group Dispatched).,"
+  },
+  {
+    "EventCode": "0x1000c",
+    "EventName": "PM_GRP_IC_MISS_NONSPEC",
+    "BriefDescription": "Group experienced non-speculative I cache miss,",
+    "PublicDescription": "Group experienced Non-speculative I cache miss.,"
+  },
+  {
+    "EventCode": "0x10130",
+    "EventName": "PM_GRP_MRK",
+    "BriefDescription": "Instruction Marked,",
+    "PublicDescription": "Instruction marked in idu.,"
+  },
+  {
+    "EventCode": "0x509c",
+    "EventName": "PM_GRP_NON_FULL_GROUP",
+    "BriefDescription": "GROUPs where we did not have 6 non branch instructions in the group(ST mode), in SMT mode 3 non branches,",
+    "PublicDescription": "GROUPs where we did not have 6 non branch instructions in the group(ST mode), in SMT mode 3 non branches,"
+  },
+  {
+    "EventCode": "0x20050",
+    "EventName": "PM_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x20052",
+    "EventName": "PM_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x10052",
+    "EventName": "PM_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x50a4",
+    "EventName": "PM_GRP_TERM_2ND_BRANCH",
+    "BriefDescription": "There were enough instructions in the Ibuffer, but 2nd branch ends group,",
+    "PublicDescription": "There were enough instructions in the Ibuffer, but 2nd branch ends group,"
+  },
+  {
+    "EventCode": "0x50a6",
+    "EventName": "PM_GRP_TERM_FPU_AFTER_BR",
+    "BriefDescription": "There were enough instructions in the Ibuffer, but FPU OP IN same group after a branch terminates a group, can't do partial flushes,",
+    "PublicDescription": "There were enough instructions in the Ibuffer, but FPU OP IN same group after a branch terminates a group, can't do partial flushes,"
+  },
+  {
+    "EventCode": "0x509e",
+    "EventName": "PM_GRP_TERM_NOINST",
+    "BriefDescription": "Do not fill every slot in the group, Not enough instructions in the Ibuffer. This includes cases where the group started with enough instructions, but some got knocked out by a cache miss or branch redirect (which would also empty the Ibuffer).,",
+    "PublicDescription": "Do not fill every slot in the group, Not enough instructions in the Ibuffer. This includes cases where the group started with enough instructions, but some got knocked out by a cache miss or branch redirect (which would also empty the Ibuffer).,"
+  },
+  {
+    "EventCode": "0x50a0",
+    "EventName": "PM_GRP_TERM_OTHER",
+    "BriefDescription": "There were enough instructions in the Ibuffer, but the group terminated early for some other reason, most likely due to a First or Last.,",
+    "PublicDescription": "There were enough instructions in the Ibuffer, but the group terminated early for some other reason, most likely due to a First or Last.,"
+  },
+  {
+    "EventCode": "0x50a2",
+    "EventName": "PM_GRP_TERM_SLOT_LIMIT",
+    "BriefDescription": "There were enough instructions in the Ibuffer, but 3 src RA/RB/RC , 2 way crack caused a group termination,",
+    "PublicDescription": "There were enough instructions in the Ibuffer, but 3 src RA/RB/RC , 2 way crack caused a group termination,"
+  },
+  {
+    "EventCode": "0x2000a",
+    "EventName": "PM_HV_CYC",
+    "BriefDescription": "Cycles in which msr_hv is high. Note that this event does not take msr_pr into consideration,",
+    "PublicDescription": "cycles in hypervisor mode.,"
+  },
+  {
+    "EventCode": "0x4086",
+    "EventName": "PM_IBUF_FULL_CYC",
+    "BriefDescription": "Cycles No room in ibuff,",
+    "PublicDescription": "Cycles No room in ibuff. Fully qualified transfer (if5 valid).,"
+  },
+  {
+    "EventCode": "0x10018",
+    "EventName": "PM_IC_DEMAND_CYC",
+    "BriefDescription": "Cycles when a demand ifetch was pending,",
+    "PublicDescription": "Demand ifetch pending.,"
+  },
+  {
+    "EventCode": "0x4098",
+    "EventName": "PM_IC_DEMAND_L2_BHT_REDIRECT",
+    "BriefDescription": "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles),",
+    "PublicDescription": "L2 I cache demand request due to BHT redirect, branch redirect ( 2 bubbles 3 cycles),"
+  },
+  {
+    "EventCode": "0x409a",
+    "EventName": "PM_IC_DEMAND_L2_BR_REDIRECT",
+    "BriefDescription": "L2 I cache demand request due to branch Mispredict ( 15 cycle path),",
+    "PublicDescription": "L2 I cache demand request due to branch Mispredict ( 15 cycle path),"
+  },
+  {
+    "EventCode": "0x4088",
+    "EventName": "PM_IC_DEMAND_REQ",
+    "BriefDescription": "Demand Instruction fetch request,",
+    "PublicDescription": "Demand Instruction fetch request,"
+  },
+  {
+    "EventCode": "0x508a",
+    "EventName": "PM_IC_INVALIDATE",
+    "BriefDescription": "Ic line invalidated,",
+    "PublicDescription": "Ic line invalidated,"
+  },
+  {
+    "EventCode": "0x4092",
+    "EventName": "PM_IC_PREF_CANCEL_HIT",
+    "BriefDescription": "Prefetch Canceled due to icache hit,",
+    "PublicDescription": "Prefetch Canceled due to icache hit,"
+  },
+  {
+    "EventCode": "0x4094",
+    "EventName": "PM_IC_PREF_CANCEL_L2",
+    "BriefDescription": "L2 Squashed request,",
+    "PublicDescription": "L2 Squashed request,"
+  },
+  {
+    "EventCode": "0x4090",
+    "EventName": "PM_IC_PREF_CANCEL_PAGE",
+    "BriefDescription": "Prefetch Canceled due to page boundary,",
+    "PublicDescription": "Prefetch Canceled due to page boundary,"
+  },
+  {
+    "EventCode": "0x408a",
+    "EventName": "PM_IC_PREF_REQ",
+    "BriefDescription": "Instruction prefetch requests,",
+    "PublicDescription": "Instruction prefetch requests,"
+  },
+  {
+    "EventCode": "0x408e",
+    "EventName": "PM_IC_PREF_WRITE",
+    "BriefDescription": "Instruction prefetch written into IL1,",
+    "PublicDescription": "Instruction prefetch written into IL1,"
+  },
+  {
+    "EventCode": "0x4096",
+    "EventName": "PM_IC_RELOAD_PRIVATE",
+    "BriefDescription": "Reloading line was brought in private for a specific thread. Most lines are brought in shared for all eight threads. If RA does not match then invalidates and then brings it shared to other thread. In P7 line brought in private, then line was invalidat,",
+    "PublicDescription": "Reloading line was brought in private for a specific thread. Most lines are brought in shared for all eight threads. If RA does not match then invalidates and then brings it shared to other thread. In P7 line brought in private, then line was inv,"
+  },
+  {
+    "EventCode": "0x100f6",
+    "EventName": "PM_IERAT_RELOAD",
+    "BriefDescription": "Number of I-ERAT reloads,",
+    "PublicDescription": "IERAT Reloaded (Miss).,"
+  },
+  {
+    "EventCode": "0x4006a",
+    "EventName": "PM_IERAT_RELOAD_16M",
+    "BriefDescription": "IERAT Reloaded (Miss) for a 16M page,",
+    "PublicDescription": "IERAT Reloaded (Miss) for a 16M page.,"
+  },
+  {
+    "EventCode": "0x20064",
+    "EventName": "PM_IERAT_RELOAD_4K",
+    "BriefDescription": "IERAT Miss (Not implemented as DI on POWER6),",
+    "PublicDescription": "IERAT Reloaded (Miss) for a 4k page.,"
+  },
+  {
+    "EventCode": "0x3006a",
+    "EventName": "PM_IERAT_RELOAD_64K",
+    "BriefDescription": "IERAT Reloaded (Miss) for a 64k page,",
+    "PublicDescription": "IERAT Reloaded (Miss) for a 64k page.,"
+  },
+  {
+    "EventCode": "0x3405e",
+    "EventName": "PM_IFETCH_THROTTLE",
+    "BriefDescription": "Cycles in which Instruction fetch throttle was active,",
+    "PublicDescription": "Cycles instruction fetch was throttled in IFU.,"
+  },
+  {
+    "EventCode": "0x5088",
+    "EventName": "PM_IFU_L2_TOUCH",
+    "BriefDescription": "L2 touch to update MRU on a line,",
+    "PublicDescription": "L2 touch to update MRU on a line,"
+  },
+  {
+    "EventCode": "0x514050",
+    "EventName": "PM_INST_ALL_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for instruction fetches and prefetches,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for an instruction fetch,"
+  },
+  {
+    "EventCode": "0x544048",
+    "EventName": "PM_INST_ALL_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534048",
+    "EventName": "PM_INST_ALL_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x53404c",
+    "EventName": "PM_INST_ALL_FROM_DL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x54404c",
+    "EventName": "PM_INST_ALL_FROM_DMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514042",
+    "EventName": "PM_INST_ALL_FROM_L2",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x544046",
+    "EventName": "PM_INST_ALL_FROM_L21_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534046",
+    "EventName": "PM_INST_ALL_FROM_L21_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x51404e",
+    "EventName": "PM_INST_ALL_FROM_L2MISS",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534040",
+    "EventName": "PM_INST_ALL_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x544040",
+    "EventName": "PM_INST_ALL_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524040",
+    "EventName": "PM_INST_ALL_FROM_L2_MEPF",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514040",
+    "EventName": "PM_INST_ALL_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 without conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x544042",
+    "EventName": "PM_INST_ALL_FROM_L3",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x544044",
+    "EventName": "PM_INST_ALL_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534044",
+    "EventName": "PM_INST_ALL_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524044",
+    "EventName": "PM_INST_ALL_FROM_L31_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514046",
+    "EventName": "PM_INST_ALL_FROM_L31_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x54404e",
+    "EventName": "PM_INST_ALL_FROM_L3MISS_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to a instruction fetch,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x534042",
+    "EventName": "PM_INST_ALL_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524042",
+    "EventName": "PM_INST_ALL_FROM_L3_MEPF",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514044",
+    "EventName": "PM_INST_ALL_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 without conflict due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x51404c",
+    "EventName": "PM_INST_ALL_FROM_LL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from the local chip's L4 cache due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from the local chip's L4 cache due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524048",
+    "EventName": "PM_INST_ALL_FROM_LMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from the local chip's Memory due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from the local chip's Memory due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x52404c",
+    "EventName": "PM_INST_ALL_FROM_MEMORY",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x54404a",
+    "EventName": "PM_INST_ALL_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x514048",
+    "EventName": "PM_INST_ALL_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524046",
+    "EventName": "PM_INST_ALL_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x51404a",
+    "EventName": "PM_INST_ALL_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x52404a",
+    "EventName": "PM_INST_ALL_FROM_RL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x53404a",
+    "EventName": "PM_INST_ALL_FROM_RMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to instruction fetches and prefetches,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1,"
+  },
+  {
+    "EventCode": "0x524050",
+    "EventName": "PM_INST_ALL_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was group pump (prediction=correct) for instruction fetches and prefetches,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for an instruction fetch,"
+  },
+  {
+    "EventCode": "0x524052",
+    "EventName": "PM_INST_ALL_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for instruction fetches and prefetches,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x514052",
+    "EventName": "PM_INST_ALL_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for instruction fetches and prefetches,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor an instruction fetch,"
+  },
+  {
+    "EventCode": "0x514054",
+    "EventName": "PM_INST_ALL_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for instruction fetches and prefetches,",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumpsfor an instruction fetch,"
+  },
+  {
+    "EventCode": "0x544052",
+    "EventName": "PM_INST_ALL_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for instruction fetches and prefetches,",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor an instruction fetch,"
+  },
+  {
+    "EventCode": "0x534050",
+    "EventName": "PM_INST_ALL_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump (prediction=correct) for instruction fetches and prefetches,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for an instruction fetch,"
+  },
+  {
+    "EventCode": "0x534052",
+    "EventName": "PM_INST_ALL_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for instruction fetches and prefetches,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x544050",
+    "EventName": "PM_INST_ALL_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for instruction fetches and prefetches,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for an instruction fetch,"
+  },
+  {
+    "EventCode": "0x14050",
+    "EventName": "PM_INST_CHIP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was chip pump (prediction=correct) for an instruction fetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was chip pump (prediction=correct) for an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x2",
+    "EventName": "PM_INST_CMPL",
+    "BriefDescription": "Number of PowerPC Instructions that completed.,",
+    "PublicDescription": "PPC Instructions Finished (completed).,"
+  },
+  {
+    "EventCode": "0x200f2",
+    "EventName": "PM_INST_DISP",
+    "BriefDescription": "PPC Dispatched,",
+    "PublicDescription": "PPC Dispatched.,"
+  },
+  {
+    "EventCode": "0x44048",
+    "EventName": "PM_INST_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34048",
+    "EventName": "PM_INST_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x3404c",
+    "EventName": "PM_INST_FROM_DL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x4404c",
+    "EventName": "PM_INST_FROM_DMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group (Distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x4080",
+    "EventName": "PM_INST_FROM_L1",
+    "BriefDescription": "Instruction fetches from L1,",
+    "PublicDescription": "Instruction fetches from L1,"
+  },
+  {
+    "EventCode": "0x14042",
+    "EventName": "PM_INST_FROM_L2",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x44046",
+    "EventName": "PM_INST_FROM_L21_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34046",
+    "EventName": "PM_INST_FROM_L21_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L2 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x1404e",
+    "EventName": "PM_INST_FROM_L2MISS",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L2 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34040",
+    "EventName": "PM_INST_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 with load hit store conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x44040",
+    "EventName": "PM_INST_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24040",
+    "EventName": "PM_INST_FROM_L2_MEPF",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x14040",
+    "EventName": "PM_INST_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L2 without conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L2 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x44042",
+    "EventName": "PM_INST_FROM_L3",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x44044",
+    "EventName": "PM_INST_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34044",
+    "EventName": "PM_INST_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24044",
+    "EventName": "PM_INST_FROM_L31_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x14046",
+    "EventName": "PM_INST_FROM_L31_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another core's L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x300fa",
+    "EventName": "PM_INST_FROM_L3MISS",
+    "BriefDescription": "Marked instruction was reloaded from a location beyond the local chiplet,",
+    "PublicDescription": "Inst from L3 miss.,"
+  },
+  {
+    "EventCode": "0x4404e",
+    "EventName": "PM_INST_FROM_L3MISS_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to a instruction fetch,",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a localtion other than the local core's L3 due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x34042",
+    "EventName": "PM_INST_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 with dispatch conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24042",
+    "EventName": "PM_INST_FROM_L3_MEPF",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x14044",
+    "EventName": "PM_INST_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's Instruction cache was reloaded from local core's L3 without conflict due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from local core's L3 without conflict due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x1404c",
+    "EventName": "PM_INST_FROM_LL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from the local chip's L4 cache due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from the local chip's L4 cache due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24048",
+    "EventName": "PM_INST_FROM_LMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from the local chip's Memory due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from the local chip's Memory due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x2404c",
+    "EventName": "PM_INST_FROM_MEMORY",
+    "BriefDescription": "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from a memory location including L4 from local remote or distant due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x4404a",
+    "EventName": "PM_INST_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x14048",
+    "EventName": "PM_INST_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24046",
+    "EventName": "PM_INST_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x1404a",
+    "EventName": "PM_INST_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x2404a",
+    "EventName": "PM_INST_FROM_RL4",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x3404a",
+    "EventName": "PM_INST_FROM_RMEM",
+    "BriefDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to an instruction fetch (not prefetch),",
+    "PublicDescription": "The processor's Instruction cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to either an instruction fetch or instruction fetch plus prefetch if MMCR1[17] is 1 .,"
+  },
+  {
+    "EventCode": "0x24050",
+    "EventName": "PM_INST_GRP_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was group pump (prediction=correct) for an instruction fetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was group pump for an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x24052",
+    "EventName": "PM_INST_GRP_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (Group) ended up either larger or smaller than Initial Pump Scope for an instruction fetch,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope OR Final Pump Scope(Group) got data from source that was at smaller scope(Chip) Final pump was group pump and initial pump was chip or final and initial pump was gro,"
+  },
+  {
+    "EventCode": "0x14052",
+    "EventName": "PM_INST_GRP_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (Group) ended up larger than Initial Pump Scope (Chip) for an instruction fetch,",
+    "PublicDescription": "Final Pump Scope(Group) to get data sourced, ended up larger than Initial Pump Scope (Chip) Final pump was group pump and initial pump was chip pumpfor an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x1003a",
+    "EventName": "PM_INST_IMC_MATCH_CMPL",
+    "BriefDescription": "IMC Match Count ( Not architected in P8),",
+    "PublicDescription": "IMC Match Count.,"
+  },
+  {
+    "EventCode": "0x30016",
+    "EventName": "PM_INST_IMC_MATCH_DISP",
+    "BriefDescription": "Matched Instructions Dispatched,",
+    "PublicDescription": "IMC Matches dispatched.,"
+  },
+  {
+    "EventCode": "0x14054",
+    "EventName": "PM_INST_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for an instruction fetch,",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumpsfor an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x44052",
+    "EventName": "PM_INST_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for an instruction fetch,",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x34050",
+    "EventName": "PM_INST_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump (prediction=correct) for an instruction fetch,",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x34052",
+    "EventName": "PM_INST_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for an instruction fetch,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x44050",
+    "EventName": "PM_INST_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for an instruction fetch,",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for an instruction fetch.,"
+  },
+  {
+    "EventCode": "0x10014",
+    "EventName": "PM_IOPS_CMPL",
+    "BriefDescription": "Internal Operations completed,",
+    "PublicDescription": "IOPS Completed.,"
+  },
+  {
+    "EventCode": "0x30014",
+    "EventName": "PM_IOPS_DISP",
+    "BriefDescription": "Internal Operations dispatched,",
+    "PublicDescription": "IOPS dispatched.,"
+  },
+  {
+    "EventCode": "0x45048",
+    "EventName": "PM_IPTEG_FROM_DL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35048",
+    "EventName": "PM_IPTEG_FROM_DL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x3504c",
+    "EventName": "PM_IPTEG_FROM_DL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x4504c",
+    "EventName": "PM_IPTEG_FROM_DMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15042",
+    "EventName": "PM_IPTEG_FROM_L2",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x45046",
+    "EventName": "PM_IPTEG_FROM_L21_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35046",
+    "EventName": "PM_IPTEG_FROM_L21_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x1504e",
+    "EventName": "PM_IPTEG_FROM_L2MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35040",
+    "EventName": "PM_IPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x45040",
+    "EventName": "PM_IPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25040",
+    "EventName": "PM_IPTEG_FROM_L2_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15040",
+    "EventName": "PM_IPTEG_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x45042",
+    "EventName": "PM_IPTEG_FROM_L3",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x45044",
+    "EventName": "PM_IPTEG_FROM_L31_ECO_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35044",
+    "EventName": "PM_IPTEG_FROM_L31_ECO_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25044",
+    "EventName": "PM_IPTEG_FROM_L31_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15046",
+    "EventName": "PM_IPTEG_FROM_L31_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x4504e",
+    "EventName": "PM_IPTEG_FROM_L3MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x35042",
+    "EventName": "PM_IPTEG_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25042",
+    "EventName": "PM_IPTEG_FROM_L3_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15044",
+    "EventName": "PM_IPTEG_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x1504c",
+    "EventName": "PM_IPTEG_FROM_LL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25048",
+    "EventName": "PM_IPTEG_FROM_LMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x2504c",
+    "EventName": "PM_IPTEG_FROM_MEMORY",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x4504a",
+    "EventName": "PM_IPTEG_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x15048",
+    "EventName": "PM_IPTEG_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x25046",
+    "EventName": "PM_IPTEG_FROM_RL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x1504a",
+    "EventName": "PM_IPTEG_FROM_RL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x2504a",
+    "EventName": "PM_IPTEG_FROM_RL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x3504a",
+    "EventName": "PM_IPTEG_FROM_RMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a instruction side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a instruction side request.,"
+  },
+  {
+    "EventCode": "0x4608e",
+    "EventName": "PM_ISIDE_L2MEMACC",
+    "BriefDescription": "valid when first beat of data comes in for an i-side fetch where data came from mem(or L4),",
+    "PublicDescription": "valid when first beat of data comes in for an i-side fetch where data came from mem(or L4),"
+  },
+  {
+    "EventCode": "0xd096",
+    "EventName": "PM_ISLB_MISS",
+    "BriefDescription": "I SLB Miss.,",
+    "PublicDescription": "I SLB Miss.,"
+  },
+  {
+    "EventCode": "0x30ac",
+    "EventName": "PM_ISU_REF_FX0",
+    "BriefDescription": "FX0 ISU reject,",
+    "PublicDescription": "FX0 ISU reject,"
+  },
+  {
+    "EventCode": "0x30ae",
+    "EventName": "PM_ISU_REF_FX1",
+    "BriefDescription": "FX1 ISU reject,",
+    "PublicDescription": "FX1 ISU reject,"
+  },
+  {
+    "EventCode": "0x38ac",
+    "EventName": "PM_ISU_REF_FXU",
+    "BriefDescription": "FXU ISU reject from either pipe,",
+    "PublicDescription": "ISU,"
+  },
+  {
+    "EventCode": "0x30b0",
+    "EventName": "PM_ISU_REF_LS0",
+    "BriefDescription": "LS0 ISU reject,",
+    "PublicDescription": "LS0 ISU reject,"
+  },
+  {
+    "EventCode": "0x30b2",
+    "EventName": "PM_ISU_REF_LS1",
+    "BriefDescription": "LS1 ISU reject,",
+    "PublicDescription": "LS1 ISU reject,"
+  },
+  {
+    "EventCode": "0x30b4",
+    "EventName": "PM_ISU_REF_LS2",
+    "BriefDescription": "LS2 ISU reject,",
+    "PublicDescription": "LS2 ISU reject,"
+  },
+  {
+    "EventCode": "0x30b6",
+    "EventName": "PM_ISU_REF_LS3",
+    "BriefDescription": "LS3 ISU reject,",
+    "PublicDescription": "LS3 ISU reject,"
+  },
+  {
+    "EventCode": "0x309c",
+    "EventName": "PM_ISU_REJECTS_ALL",
+    "BriefDescription": "All isu rejects could be more than 1 per cycle,",
+    "PublicDescription": "All isu rejects could be more than 1 per cycle,"
+  },
+  {
+    "EventCode": "0x30a2",
+    "EventName": "PM_ISU_REJECT_RES_NA",
+    "BriefDescription": "ISU reject due to resource not available,",
+    "PublicDescription": "ISU reject due to resource not available,"
+  },
+  {
+    "EventCode": "0x309e",
+    "EventName": "PM_ISU_REJECT_SAR_BYPASS",
+    "BriefDescription": "Reject because of SAR bypass,",
+    "PublicDescription": "Reject because of SAR bypass,"
+  },
+  {
+    "EventCode": "0x30a0",
+    "EventName": "PM_ISU_REJECT_SRC_NA",
+    "BriefDescription": "ISU reject due to source not available,",
+    "PublicDescription": "ISU reject due to source not available,"
+  },
+  {
+    "EventCode": "0x30a8",
+    "EventName": "PM_ISU_REJ_VS0",
+    "BriefDescription": "VS0 ISU reject,",
+    "PublicDescription": "VS0 ISU reject,"
+  },
+  {
+    "EventCode": "0x30aa",
+    "EventName": "PM_ISU_REJ_VS1",
+    "BriefDescription": "VS1 ISU reject,",
+    "PublicDescription": "VS1 ISU reject,"
+  },
+  {
+    "EventCode": "0x38a8",
+    "EventName": "PM_ISU_REJ_VSU",
+    "BriefDescription": "VSU ISU reject from either pipe,",
+    "PublicDescription": "ISU,"
+  },
+  {
+    "EventCode": "0x30b8",
+    "EventName": "PM_ISYNC",
+    "BriefDescription": "Isync count per thread,",
+    "PublicDescription": "Isync count per thread,"
+  },
+  {
+    "EventCode": "0x400fc",
+    "EventName": "PM_ITLB_MISS",
+    "BriefDescription": "ITLB Reloaded (always zero on POWER6),",
+    "PublicDescription": "ITLB Reloaded.,"
+  },
+  {
+    "EventCode": "0x200301ea",
+    "EventName": "PM_L1MISS_LAT_EXC_1024",
+    "BriefDescription": "L1 misses that took longer than 1024 cyles to resolve (miss to reload),",
+    "PublicDescription": "Reload latency exceeded 1024 cyc,"
+  },
+  {
+    "EventCode": "0x200401ec",
+    "EventName": "PM_L1MISS_LAT_EXC_2048",
+    "BriefDescription": "L1 misses that took longer than 2048 cyles to resolve (miss to reload),",
+    "PublicDescription": "Reload latency exceeded 2048 cyc,"
+  },
+  {
+    "EventCode": "0x200101e8",
+    "EventName": "PM_L1MISS_LAT_EXC_256",
+    "BriefDescription": "L1 misses that took longer than 256 cyles to resolve (miss to reload),",
+    "PublicDescription": "Reload latency exceeded 256 cyc,"
+  },
+  {
+    "EventCode": "0x200201e6",
+    "EventName": "PM_L1MISS_LAT_EXC_32",
+    "BriefDescription": "L1 misses that took longer than 32 cyles to resolve (miss to reload),",
+    "PublicDescription": "Reload latency exceeded 32 cyc,"
+  },
+  {
+    "EventCode": "0x26086",
+    "EventName": "PM_L1PF_L2MEMACC",
+    "BriefDescription": "valid when first beat of data comes in for an L1pref where data came from mem(or L4),",
+    "PublicDescription": "valid when first beat of data comes in for an L1pref where data came from mem(or L4),"
+  },
+  {
+    "EventCode": "0x1002c",
+    "EventName": "PM_L1_DCACHE_RELOADED_ALL",
+    "BriefDescription": "L1 data cache reloaded for demand or prefetch,",
+    "PublicDescription": "L1 data cache reloaded for demand or prefetch .,"
+  },
+  {
+    "EventCode": "0x300f6",
+    "EventName": "PM_L1_DCACHE_RELOAD_VALID",
+    "BriefDescription": "DL1 reloaded due to Demand Load,",
+    "PublicDescription": "DL1 reloaded due to Demand Load .,"
+  },
+  {
+    "EventCode": "0x408c",
+    "EventName": "PM_L1_DEMAND_WRITE",
+    "BriefDescription": "Instruction Demand sectors wriittent into IL1,",
+    "PublicDescription": "Instruction Demand sectors wriittent into IL1,"
+  },
+  {
+    "EventCode": "0x200fd",
+    "EventName": "PM_L1_ICACHE_MISS",
+    "BriefDescription": "Demand iCache Miss,",
+    "PublicDescription": "Demand iCache Miss.,"
+  },
+  {
+    "EventCode": "0x40012",
+    "EventName": "PM_L1_ICACHE_RELOADED_ALL",
+    "BriefDescription": "Counts all Icache reloads includes demand, prefetchm prefetch turned into demand and demand turned into prefetch,",
+    "PublicDescription": "Counts all Icache reloads includes demand, prefetchm prefetch turned into demand and demand turned into prefetch.,"
+  },
+  {
+    "EventCode": "0x30068",
+    "EventName": "PM_L1_ICACHE_RELOADED_PREF",
+    "BriefDescription": "Counts all Icache prefetch reloads ( includes demand turned into prefetch),",
+    "PublicDescription": "Counts all Icache prefetch reloads ( includes demand turned into prefetch).,"
+  },
+  {
+    "EventCode": "0x27084",
+    "EventName": "PM_L2_CHIP_PUMP",
+    "BriefDescription": "RC requests that were local on chip pump attempts,",
+    "PublicDescription": "RC requests that were local on chip pump attempts,"
+  },
+  {
+    "EventCode": "0x27086",
+    "EventName": "PM_L2_GROUP_PUMP",
+    "BriefDescription": "RC requests that were on Node Pump attempts,",
+    "PublicDescription": "RC requests that were on Node Pump attempts,"
+  },
+  {
+    "EventCode": "0x3708a",
+    "EventName": "PM_L2_RTY_ST",
+    "BriefDescription": "RC retries on PB for any store from core,",
+    "PublicDescription": "RC retries on PB for any store from core,"
+  },
+  {
+    "EventCode": "0x17080",
+    "EventName": "PM_L2_ST",
+    "BriefDescription": "All successful D-side store dispatches for this thread,",
+    "PublicDescription": "All successful D-side store dispatches for this thread,"
+  },
+  {
+    "EventCode": "0x17082",
+    "EventName": "PM_L2_ST_MISS",
+    "BriefDescription": "All successful D-side store dispatches for this thread that were L2 Miss,",
+    "PublicDescription": "All successful D-side store dispatches for this thread that were L2 Miss,"
+  },
+  {
+    "EventCode": "0x1e05e",
+    "EventName": "PM_L2_TM_REQ_ABORT",
+    "BriefDescription": "TM abort,",
+    "PublicDescription": "TM abort.,"
+  },
+  {
+    "EventCode": "0x3e05c",
+    "EventName": "PM_L2_TM_ST_ABORT_SISTER",
+    "BriefDescription": "TM marked store abort,",
+    "PublicDescription": "TM marked store abort.,"
+  },
+  {
+    "EventCode": "0x819082",
+    "EventName": "PM_L3_CI_USAGE",
+    "BriefDescription": "rotating sample of 16 CI or CO actives,",
+    "PublicDescription": "rotating sample of 16 CI or CO actives,"
+  },
+  {
+    "EventCode": "0x83908b",
+    "EventName": "PM_L3_CO0_ALLOC",
+    "BriefDescription": "lifetime, sample of CO machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x83908a",
+    "EventName": "PM_L3_CO0_BUSY",
+    "BriefDescription": "lifetime, sample of CO machine 0 valid,",
+    "PublicDescription": "lifetime, sample of CO machine 0 valid,"
+  },
+  {
+    "EventCode": "0x28086",
+    "EventName": "PM_L3_CO_L31",
+    "BriefDescription": "L3 CO to L3.1 OR of port 0 and 1 ( lossy),",
+    "PublicDescription": "L3 CO to L3.1 OR of port 0 and 1 ( lossy),"
+  },
+  {
+    "EventCode": "0x28084",
+    "EventName": "PM_L3_CO_MEM",
+    "BriefDescription": "L3 CO to memory OR of port 0 and 1 ( lossy),",
+    "PublicDescription": "L3 CO to memory OR of port 0 and 1 ( lossy),"
+  },
+  {
+    "EventCode": "0x18082",
+    "EventName": "PM_L3_CO_MEPF",
+    "BriefDescription": "L3 CO of line in Mep state ( includes casthrough,",
+    "PublicDescription": "L3 CO of line in Mep state ( includes casthrough,"
+  },
+  {
+    "EventCode": "0x1e052",
+    "EventName": "PM_L3_LD_PREF",
+    "BriefDescription": "L3 Load Prefetches,",
+    "PublicDescription": "L3 Load Prefetches.,"
+  },
+  {
+    "EventCode": "0x84908d",
+    "EventName": "PM_L3_PF0_ALLOC",
+    "BriefDescription": "lifetime, sample of PF machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x84908c",
+    "EventName": "PM_L3_PF0_BUSY",
+    "BriefDescription": "lifetime, sample of PF machine 0 valid,",
+    "PublicDescription": "lifetime, sample of PF machine 0 valid,"
+  },
+  {
+    "EventCode": "0x18080",
+    "EventName": "PM_L3_PF_MISS_L3",
+    "BriefDescription": "L3 Prefetch missed in L3,",
+    "PublicDescription": "L3 Prefetch missed in L3,"
+  },
+  {
+    "EventCode": "0x3808a",
+    "EventName": "PM_L3_PF_OFF_CHIP_CACHE",
+    "BriefDescription": "L3 Prefetch from Off chip cache,",
+    "PublicDescription": "L3 Prefetch from Off chip cache,"
+  },
+  {
+    "EventCode": "0x4808e",
+    "EventName": "PM_L3_PF_OFF_CHIP_MEM",
+    "BriefDescription": "L3 Prefetch from Off chip memory,",
+    "PublicDescription": "L3 Prefetch from Off chip memory,"
+  },
+  {
+    "EventCode": "0x38088",
+    "EventName": "PM_L3_PF_ON_CHIP_CACHE",
+    "BriefDescription": "L3 Prefetch from On chip cache,",
+    "PublicDescription": "L3 Prefetch from On chip cache,"
+  },
+  {
+    "EventCode": "0x4808c",
+    "EventName": "PM_L3_PF_ON_CHIP_MEM",
+    "BriefDescription": "L3 Prefetch from On chip memory,",
+    "PublicDescription": "L3 Prefetch from On chip memory,"
+  },
+  {
+    "EventCode": "0x829084",
+    "EventName": "PM_L3_PF_USAGE",
+    "BriefDescription": "rotating sample of 32 PF actives,",
+    "PublicDescription": "rotating sample of 32 PF actives,"
+  },
+  {
+    "EventCode": "0x4e052",
+    "EventName": "PM_L3_PREF_ALL",
+    "BriefDescription": "Total HW L3 prefetches(Load+store),",
+    "PublicDescription": "Total HW L3 prefetches(Load+store).,"
+  },
+  {
+    "EventCode": "0x84908f",
+    "EventName": "PM_L3_RD0_ALLOC",
+    "BriefDescription": "lifetime, sample of RD machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x84908e",
+    "EventName": "PM_L3_RD0_BUSY",
+    "BriefDescription": "lifetime, sample of RD machine 0 valid,",
+    "PublicDescription": "lifetime, sample of RD machine 0 valid,"
+  },
+  {
+    "EventCode": "0x829086",
+    "EventName": "PM_L3_RD_USAGE",
+    "BriefDescription": "rotating sample of 16 RD actives,",
+    "PublicDescription": "rotating sample of 16 RD actives,"
+  },
+  {
+    "EventCode": "0x839089",
+    "EventName": "PM_L3_SN0_ALLOC",
+    "BriefDescription": "lifetime, sample of snooper machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x839088",
+    "EventName": "PM_L3_SN0_BUSY",
+    "BriefDescription": "lifetime, sample of snooper machine 0 valid,",
+    "PublicDescription": "lifetime, sample of snooper machine 0 valid,"
+  },
+  {
+    "EventCode": "0x819080",
+    "EventName": "PM_L3_SN_USAGE",
+    "BriefDescription": "rotating sample of 8 snoop valids,",
+    "PublicDescription": "rotating sample of 8 snoop valids,"
+  },
+  {
+    "EventCode": "0x2e052",
+    "EventName": "PM_L3_ST_PREF",
+    "BriefDescription": "L3 store Prefetches,",
+    "PublicDescription": "L3 store Prefetches.,"
+  },
+  {
+    "EventCode": "0x3e052",
+    "EventName": "PM_L3_SW_PREF",
+    "BriefDescription": "Data stream touchto L3,",
+    "PublicDescription": "Data stream touchto L3.,"
+  },
+  {
+    "EventCode": "0x18081",
+    "EventName": "PM_L3_WI0_ALLOC",
+    "BriefDescription": "lifetime, sample of Write Inject machine 0 valid,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x3c058",
+    "EventName": "PM_LARX_FIN",
+    "BriefDescription": "Larx finished,",
+    "PublicDescription": "Larx finished .,"
+  },
+  {
+    "EventCode": "0x1002e",
+    "EventName": "PM_LD_CMPL",
+    "BriefDescription": "count of Loads completed,",
+    "PublicDescription": "count of Loads completed.,"
+  },
+  {
+    "EventCode": "0x10062",
+    "EventName": "PM_LD_L3MISS_PEND_CYC",
+    "BriefDescription": "Cycles L3 miss was pending for this thread,",
+    "PublicDescription": "Cycles L3 miss was pending for this thread.,"
+  },
+  {
+    "EventCode": "0x3e054",
+    "EventName": "PM_LD_MISS_L1",
+    "BriefDescription": "Load Missed L1,",
+    "PublicDescription": "Load Missed L1.,"
+  },
+  {
+    "EventCode": "0x100ee",
+    "EventName": "PM_LD_REF_L1",
+    "BriefDescription": "All L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "Load Ref count combined for all units.,"
+  },
+  {
+    "EventCode": "0xc080",
+    "EventName": "PM_LD_REF_L1_LSU0",
+    "BriefDescription": "LS0 L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "LS0 L1 D cache load references counted at finish, gated by rejectLSU0 L1 D cache load references,"
+  },
+  {
+    "EventCode": "0xc082",
+    "EventName": "PM_LD_REF_L1_LSU1",
+    "BriefDescription": "LS1 L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "LS1 L1 D cache load references counted at finish, gated by rejectLSU1 L1 D cache load references,"
+  },
+  {
+    "EventCode": "0xc094",
+    "EventName": "PM_LD_REF_L1_LSU2",
+    "BriefDescription": "LS2 L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "LS2 L1 D cache load references counted at finish, gated by reject42,"
+  },
+  {
+    "EventCode": "0xc096",
+    "EventName": "PM_LD_REF_L1_LSU3",
+    "BriefDescription": "LS3 L1 D cache load references counted at finish, gated by reject,",
+    "PublicDescription": "LS3 L1 D cache load references counted at finish, gated by reject42,"
+  },
+  {
+    "EventCode": "0x509a",
+    "EventName": "PM_LINK_STACK_INVALID_PTR",
+    "BriefDescription": "A flush were LS ptr is invalid, results in a pop , A lot of interrupts between push and pops,",
+    "PublicDescription": "A flush were LS ptr is invalid, results in a pop , A lot of interrupts between push and pops,"
+  },
+  {
+    "EventCode": "0x5098",
+    "EventName": "PM_LINK_STACK_WRONG_ADD_PRED",
+    "BriefDescription": "Link stack predicts wrong address, because of link stack design limitation.,",
+    "PublicDescription": "Link stack predicts wrong address, because of link stack design limitation.,"
+  },
+  {
+    "EventCode": "0xe080",
+    "EventName": "PM_LS0_ERAT_MISS_PREF",
+    "BriefDescription": "LS0 Erat miss due to prefetch,",
+    "PublicDescription": "LS0 Erat miss due to prefetch42,"
+  },
+  {
+    "EventCode": "0xd0b8",
+    "EventName": "PM_LS0_L1_PREF",
+    "BriefDescription": "LS0 L1 cache data prefetches,",
+    "PublicDescription": "LS0 L1 cache data prefetches42,"
+  },
+  {
+    "EventCode": "0xc098",
+    "EventName": "PM_LS0_L1_SW_PREF",
+    "BriefDescription": "Software L1 Prefetches, including SW Transient Prefetches,",
+    "PublicDescription": "Software L1 Prefetches, including SW Transient Prefetches42,"
+  },
+  {
+    "EventCode": "0xe082",
+    "EventName": "PM_LS1_ERAT_MISS_PREF",
+    "BriefDescription": "LS1 Erat miss due to prefetch,",
+    "PublicDescription": "LS1 Erat miss due to prefetch42,"
+  },
+  {
+    "EventCode": "0xd0ba",
+    "EventName": "PM_LS1_L1_PREF",
+    "BriefDescription": "LS1 L1 cache data prefetches,",
+    "PublicDescription": "LS1 L1 cache data prefetches42,"
+  },
+  {
+    "EventCode": "0xc09a",
+    "EventName": "PM_LS1_L1_SW_PREF",
+    "BriefDescription": "Software L1 Prefetches, including SW Transient Prefetches,",
+    "PublicDescription": "Software L1 Prefetches, including SW Transient Prefetches42,"
+  },
+  {
+    "EventCode": "0xc0b0",
+    "EventName": "PM_LSU0_FLUSH_LRQ",
+    "BriefDescription": "LS0 Flush: LRQ,",
+    "PublicDescription": "LS0 Flush: LRQLSU0 LRQ flushes,"
+  },
+  {
+    "EventCode": "0xc0b8",
+    "EventName": "PM_LSU0_FLUSH_SRQ",
+    "BriefDescription": "LS0 Flush: SRQ,",
+    "PublicDescription": "LS0 Flush: SRQLSU0 SRQ lhs flushes,"
+  },
+  {
+    "EventCode": "0xc0a4",
+    "EventName": "PM_LSU0_FLUSH_ULD",
+    "BriefDescription": "LS0 Flush: Unaligned Load,",
+    "PublicDescription": "LS0 Flush: Unaligned LoadLSU0 unaligned load flushes,"
+  },
+  {
+    "EventCode": "0xc0ac",
+    "EventName": "PM_LSU0_FLUSH_UST",
+    "BriefDescription": "LS0 Flush: Unaligned Store,",
+    "PublicDescription": "LS0 Flush: Unaligned StoreLSU0 unaligned store flushes,"
+  },
+  {
+    "EventCode": "0xf088",
+    "EventName": "PM_LSU0_L1_CAM_CANCEL",
+    "BriefDescription": "ls0 l1 tm cam cancel,",
+    "PublicDescription": "ls0 l1 tm cam cancel42,"
+  },
+  {
+    "EventCode": "0x1e056",
+    "EventName": "PM_LSU0_LARX_FIN",
+    "BriefDescription": "Larx finished in LSU pipe0,",
+    "PublicDescription": ".,"
+  },
+  {
+    "EventCode": "0xd08c",
+    "EventName": "PM_LSU0_LMQ_LHR_MERGE",
+    "BriefDescription": "LS0 Load Merged with another cacheline request,",
+    "PublicDescription": "LS0 Load Merged with another cacheline request42,"
+  },
+  {
+    "EventCode": "0xc08c",
+    "EventName": "PM_LSU0_NCLD",
+    "BriefDescription": "LS0 Non-cachable Loads counted at finish,",
+    "PublicDescription": "LS0 Non-cachable Loads counted at finishLSU0 non-cacheable loads,"
+  },
+  {
+    "EventCode": "0xe090",
+    "EventName": "PM_LSU0_PRIMARY_ERAT_HIT",
+    "BriefDescription": "Primary ERAT hit,",
+    "PublicDescription": "Primary ERAT hit42,"
+  },
+  {
+    "EventCode": "0x1e05a",
+    "EventName": "PM_LSU0_REJECT",
+    "BriefDescription": "LSU0 reject,",
+    "PublicDescription": "LSU0 reject .,"
+  },
+  {
+    "EventCode": "0xc09c",
+    "EventName": "PM_LSU0_SRQ_STFWD",
+    "BriefDescription": "LS0 SRQ forwarded data to a load,",
+    "PublicDescription": "LS0 SRQ forwarded data to a loadLSU0 SRQ store forwarded,"
+  },
+  {
+    "EventCode": "0xf084",
+    "EventName": "PM_LSU0_STORE_REJECT",
+    "BriefDescription": "ls0 store reject,",
+    "PublicDescription": "ls0 store reject42,"
+  },
+  {
+    "EventCode": "0xe0a8",
+    "EventName": "PM_LSU0_TMA_REQ_L2",
+    "BriefDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding,",
+    "PublicDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42,"
+  },
+  {
+    "EventCode": "0xe098",
+    "EventName": "PM_LSU0_TM_L1_HIT",
+    "BriefDescription": "Load tm hit in L1,",
+    "PublicDescription": "Load tm hit in L142,"
+  },
+  {
+    "EventCode": "0xe0a0",
+    "EventName": "PM_LSU0_TM_L1_MISS",
+    "BriefDescription": "Load tm L1 miss,",
+    "PublicDescription": "Load tm L1 miss42,"
+  },
+  {
+    "EventCode": "0xc0b2",
+    "EventName": "PM_LSU1_FLUSH_LRQ",
+    "BriefDescription": "LS1 Flush: LRQ,",
+    "PublicDescription": "LS1 Flush: LRQLSU1 LRQ flushes,"
+  },
+  {
+    "EventCode": "0xc0ba",
+    "EventName": "PM_LSU1_FLUSH_SRQ",
+    "BriefDescription": "LS1 Flush: SRQ,",
+    "PublicDescription": "LS1 Flush: SRQLSU1 SRQ lhs flushes,"
+  },
+  {
+    "EventCode": "0xc0a6",
+    "EventName": "PM_LSU1_FLUSH_ULD",
+    "BriefDescription": "LS 1 Flush: Unaligned Load,",
+    "PublicDescription": "LS 1 Flush: Unaligned LoadLSU1 unaligned load flushes,"
+  },
+  {
+    "EventCode": "0xc0ae",
+    "EventName": "PM_LSU1_FLUSH_UST",
+    "BriefDescription": "LS1 Flush: Unaligned Store,",
+    "PublicDescription": "LS1 Flush: Unaligned StoreLSU1 unaligned store flushes,"
+  },
+  {
+    "EventCode": "0xf08a",
+    "EventName": "PM_LSU1_L1_CAM_CANCEL",
+    "BriefDescription": "ls1 l1 tm cam cancel,",
+    "PublicDescription": "ls1 l1 tm cam cancel42,"
+  },
+  {
+    "EventCode": "0x2e056",
+    "EventName": "PM_LSU1_LARX_FIN",
+    "BriefDescription": "Larx finished in LSU pipe1,",
+    "PublicDescription": "Larx finished in LSU pipe1.,"
+  },
+  {
+    "EventCode": "0xd08e",
+    "EventName": "PM_LSU1_LMQ_LHR_MERGE",
+    "BriefDescription": "LS1 Load Merge with another cacheline request,",
+    "PublicDescription": "LS1 Load Merge with another cacheline request42,"
+  },
+  {
+    "EventCode": "0xc08e",
+    "EventName": "PM_LSU1_NCLD",
+    "BriefDescription": "LS1 Non-cachable Loads counted at finish,",
+    "PublicDescription": "LS1 Non-cachable Loads counted at finishLSU1 non-cacheable loads,"
+  },
+  {
+    "EventCode": "0xe092",
+    "EventName": "PM_LSU1_PRIMARY_ERAT_HIT",
+    "BriefDescription": "Primary ERAT hit,",
+    "PublicDescription": "Primary ERAT hit42,"
+  },
+  {
+    "EventCode": "0x2e05a",
+    "EventName": "PM_LSU1_REJECT",
+    "BriefDescription": "LSU1 reject,",
+    "PublicDescription": "LSU1 reject .,"
+  },
+  {
+    "EventCode": "0xc09e",
+    "EventName": "PM_LSU1_SRQ_STFWD",
+    "BriefDescription": "LS1 SRQ forwarded data to a load,",
+    "PublicDescription": "LS1 SRQ forwarded data to a loadLSU1 SRQ store forwarded,"
+  },
+  {
+    "EventCode": "0xf086",
+    "EventName": "PM_LSU1_STORE_REJECT",
+    "BriefDescription": "ls1 store reject,",
+    "PublicDescription": "ls1 store reject42,"
+  },
+  {
+    "EventCode": "0xe0aa",
+    "EventName": "PM_LSU1_TMA_REQ_L2",
+    "BriefDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding,",
+    "PublicDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42,"
+  },
+  {
+    "EventCode": "0xe09a",
+    "EventName": "PM_LSU1_TM_L1_HIT",
+    "BriefDescription": "Load tm hit in L1,",
+    "PublicDescription": "Load tm hit in L142,"
+  },
+  {
+    "EventCode": "0xe0a2",
+    "EventName": "PM_LSU1_TM_L1_MISS",
+    "BriefDescription": "Load tm L1 miss,",
+    "PublicDescription": "Load tm L1 miss42,"
+  },
+  {
+    "EventCode": "0xc0b4",
+    "EventName": "PM_LSU2_FLUSH_LRQ",
+    "BriefDescription": "LS02Flush: LRQ,",
+    "PublicDescription": "LS02Flush: LRQ42,"
+  },
+  {
+    "EventCode": "0xc0bc",
+    "EventName": "PM_LSU2_FLUSH_SRQ",
+    "BriefDescription": "LS2 Flush: SRQ,",
+    "PublicDescription": "LS2 Flush: SRQ42,"
+  },
+  {
+    "EventCode": "0xc0a8",
+    "EventName": "PM_LSU2_FLUSH_ULD",
+    "BriefDescription": "LS3 Flush: Unaligned Load,",
+    "PublicDescription": "LS3 Flush: Unaligned Load42,"
+  },
+  {
+    "EventCode": "0xf08c",
+    "EventName": "PM_LSU2_L1_CAM_CANCEL",
+    "BriefDescription": "ls2 l1 tm cam cancel,",
+    "PublicDescription": "ls2 l1 tm cam cancel42,"
+  },
+  {
+    "EventCode": "0x3e056",
+    "EventName": "PM_LSU2_LARX_FIN",
+    "BriefDescription": "Larx finished in LSU pipe2,",
+    "PublicDescription": "Larx finished in LSU pipe2.,"
+  },
+  {
+    "EventCode": "0xc084",
+    "EventName": "PM_LSU2_LDF",
+    "BriefDescription": "LS2 Scalar Loads,",
+    "PublicDescription": "LS2 Scalar Loads42,"
+  },
+  {
+    "EventCode": "0xc088",
+    "EventName": "PM_LSU2_LDX",
+    "BriefDescription": "LS0 Vector Loads,",
+    "PublicDescription": "LS0 Vector Loads42,"
+  },
+  {
+    "EventCode": "0xd090",
+    "EventName": "PM_LSU2_LMQ_LHR_MERGE",
+    "BriefDescription": "LS0 Load Merged with another cacheline request,",
+    "PublicDescription": "LS0 Load Merged with another cacheline request42,"
+  },
+  {
+    "EventCode": "0xe094",
+    "EventName": "PM_LSU2_PRIMARY_ERAT_HIT",
+    "BriefDescription": "Primary ERAT hit,",
+    "PublicDescription": "Primary ERAT hit42,"
+  },
+  {
+    "EventCode": "0x3e05a",
+    "EventName": "PM_LSU2_REJECT",
+    "BriefDescription": "LSU2 reject,",
+    "PublicDescription": "LSU2 reject .,"
+  },
+  {
+    "EventCode": "0xc0a0",
+    "EventName": "PM_LSU2_SRQ_STFWD",
+    "BriefDescription": "LS2 SRQ forwarded data to a load,",
+    "PublicDescription": "LS2 SRQ forwarded data to a load42,"
+  },
+  {
+    "EventCode": "0xe0ac",
+    "EventName": "PM_LSU2_TMA_REQ_L2",
+    "BriefDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding,",
+    "PublicDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42,"
+  },
+  {
+    "EventCode": "0xe09c",
+    "EventName": "PM_LSU2_TM_L1_HIT",
+    "BriefDescription": "Load tm hit in L1,",
+    "PublicDescription": "Load tm hit in L142,"
+  },
+  {
+    "EventCode": "0xe0a4",
+    "EventName": "PM_LSU2_TM_L1_MISS",
+    "BriefDescription": "Load tm L1 miss,",
+    "PublicDescription": "Load tm L1 miss42,"
+  },
+  {
+    "EventCode": "0xc0b6",
+    "EventName": "PM_LSU3_FLUSH_LRQ",
+    "BriefDescription": "LS3 Flush: LRQ,",
+    "PublicDescription": "LS3 Flush: LRQ42,"
+  },
+  {
+    "EventCode": "0xc0be",
+    "EventName": "PM_LSU3_FLUSH_SRQ",
+    "BriefDescription": "LS13 Flush: SRQ,",
+    "PublicDescription": "LS13 Flush: SRQ42,"
+  },
+  {
+    "EventCode": "0xc0aa",
+    "EventName": "PM_LSU3_FLUSH_ULD",
+    "BriefDescription": "LS 14Flush: Unaligned Load,",
+    "PublicDescription": "LS 14Flush: Unaligned Load42,"
+  },
+  {
+    "EventCode": "0xf08e",
+    "EventName": "PM_LSU3_L1_CAM_CANCEL",
+    "BriefDescription": "ls3 l1 tm cam cancel,",
+    "PublicDescription": "ls3 l1 tm cam cancel42,"
+  },
+  {
+    "EventCode": "0x4e056",
+    "EventName": "PM_LSU3_LARX_FIN",
+    "BriefDescription": "Larx finished in LSU pipe3,",
+    "PublicDescription": "Larx finished in LSU pipe3.,"
+  },
+  {
+    "EventCode": "0xc086",
+    "EventName": "PM_LSU3_LDF",
+    "BriefDescription": "LS3 Scalar Loads,",
+    "PublicDescription": "LS3 Scalar Loads 42,"
+  },
+  {
+    "EventCode": "0xc08a",
+    "EventName": "PM_LSU3_LDX",
+    "BriefDescription": "LS1 Vector Loads,",
+    "PublicDescription": "LS1 Vector Loads42,"
+  },
+  {
+    "EventCode": "0xd092",
+    "EventName": "PM_LSU3_LMQ_LHR_MERGE",
+    "BriefDescription": "LS1 Load Merge with another cacheline request,",
+    "PublicDescription": "LS1 Load Merge with another cacheline request42,"
+  },
+  {
+    "EventCode": "0xe096",
+    "EventName": "PM_LSU3_PRIMARY_ERAT_HIT",
+    "BriefDescription": "Primary ERAT hit,",
+    "PublicDescription": "Primary ERAT hit42,"
+  },
+  {
+    "EventCode": "0x4e05a",
+    "EventName": "PM_LSU3_REJECT",
+    "BriefDescription": "LSU3 reject,",
+    "PublicDescription": "LSU3 reject .,"
+  },
+  {
+    "EventCode": "0xc0a2",
+    "EventName": "PM_LSU3_SRQ_STFWD",
+    "BriefDescription": "LS3 SRQ forwarded data to a load,",
+    "PublicDescription": "LS3 SRQ forwarded data to a load42,"
+  },
+  {
+    "EventCode": "0xe0ae",
+    "EventName": "PM_LSU3_TMA_REQ_L2",
+    "BriefDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding,",
+    "PublicDescription": "addrs only req to L2 only on the first one,Indication that Load footprint is not expanding42,"
+  },
+  {
+    "EventCode": "0xe09e",
+    "EventName": "PM_LSU3_TM_L1_HIT",
+    "BriefDescription": "Load tm hit in L1,",
+    "PublicDescription": "Load tm hit in L142,"
+  },
+  {
+    "EventCode": "0xe0a6",
+    "EventName": "PM_LSU3_TM_L1_MISS",
+    "BriefDescription": "Load tm L1 miss,",
+    "PublicDescription": "Load tm L1 miss42,"
+  },
+  {
+    "EventCode": "0x200f6",
+    "EventName": "PM_LSU_DERAT_MISS",
+    "BriefDescription": "DERAT Reloaded due to a DERAT miss,",
+    "PublicDescription": "DERAT Reloaded (Miss).,"
+  },
+  {
+    "EventCode": "0xe880",
+    "EventName": "PM_LSU_ERAT_MISS_PREF",
+    "BriefDescription": "Erat miss due to prefetch, on either pipe,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0x30066",
+    "EventName": "PM_LSU_FIN",
+    "BriefDescription": "LSU Finished an instruction (up to 2 per cycle),",
+    "PublicDescription": "LSU Finished an instruction (up to 2 per cycle).,"
+  },
+  {
+    "EventCode": "0xc8ac",
+    "EventName": "PM_LSU_FLUSH_UST",
+    "BriefDescription": "Unaligned Store Flush on either pipe,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xd0a4",
+    "EventName": "PM_LSU_FOUR_TABLEWALK_CYC",
+    "BriefDescription": "Cycles when four tablewalks pending on this thread,",
+    "PublicDescription": "Cycles when four tablewalks pending on this thread42,"
+  },
+  {
+    "EventCode": "0x10066",
+    "EventName": "PM_LSU_FX_FIN",
+    "BriefDescription": "LSU Finished a FX operation (up to 2 per cycle,",
+    "PublicDescription": "LSU Finished a FX operation (up to 2 per cycle.,"
+  },
+  {
+    "EventCode": "0xd8b8",
+    "EventName": "PM_LSU_L1_PREF",
+    "BriefDescription": "hw initiated , include sw streaming forms as well , include sw streams as a separate event,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xc898",
+    "EventName": "PM_LSU_L1_SW_PREF",
+    "BriefDescription": "Software L1 Prefetches, including SW Transient Prefetches, on both pipes,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xc884",
+    "EventName": "PM_LSU_LDF",
+    "BriefDescription": "FPU loads only on LS2/LS3 ie LU0/LU1,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xc888",
+    "EventName": "PM_LSU_LDX",
+    "BriefDescription": "Vector loads can issue only on LS2/LS3,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xd0a2",
+    "EventName": "PM_LSU_LMQ_FULL_CYC",
+    "BriefDescription": "LMQ full,",
+    "PublicDescription": "LMQ fullCycles LMQ full,,"
+  },
+  {
+    "EventCode": "0xd0a1",
+    "EventName": "PM_LSU_LMQ_S0_ALLOC",
+    "BriefDescription": "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xd0a0",
+    "EventName": "PM_LSU_LMQ_S0_VALID",
+    "BriefDescription": "Slot 0 of LMQ valid,",
+    "PublicDescription": "Slot 0 of LMQ validLMQ slot 0 valid,"
+  },
+  {
+    "EventCode": "0x3001c",
+    "EventName": "PM_LSU_LMQ_SRQ_EMPTY_ALL_CYC",
+    "BriefDescription": "ALL threads lsu empty (lmq and srq empty),",
+    "PublicDescription": "ALL threads lsu empty (lmq and srq empty). Issue HW016541,"
+  },
+  {
+    "EventCode": "0x2003e",
+    "EventName": "PM_LSU_LMQ_SRQ_EMPTY_CYC",
+    "BriefDescription": "LSU empty (lmq and srq empty),",
+    "PublicDescription": "LSU empty (lmq and srq empty).,"
+  },
+  {
+    "EventCode": "0xd09f",
+    "EventName": "PM_LSU_LRQ_S0_ALLOC",
+    "BriefDescription": "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xd09e",
+    "EventName": "PM_LSU_LRQ_S0_VALID",
+    "BriefDescription": "Slot 0 of LRQ valid,",
+    "PublicDescription": "Slot 0 of LRQ validLRQ slot 0 valid,"
+  },
+  {
+    "EventCode": "0xf091",
+    "EventName": "PM_LSU_LRQ_S43_ALLOC",
+    "BriefDescription": "LRQ slot 43 was released,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xf090",
+    "EventName": "PM_LSU_LRQ_S43_VALID",
+    "BriefDescription": "LRQ slot 43 was busy,",
+    "PublicDescription": "LRQ slot 43 was busy42,"
+  },
+  {
+    "EventCode": "0x30162",
+    "EventName": "PM_LSU_MRK_DERAT_MISS",
+    "BriefDescription": "DERAT Reloaded (Miss),",
+    "PublicDescription": "DERAT Reloaded (Miss).,"
+  },
+  {
+    "EventCode": "0xc88c",
+    "EventName": "PM_LSU_NCLD",
+    "BriefDescription": "count at finish so can return only on ls0 or ls1,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xc092",
+    "EventName": "PM_LSU_NCST",
+    "BriefDescription": "Non-cachable Stores sent to nest,",
+    "PublicDescription": "Non-cachable Stores sent to nest42,"
+  },
+  {
+    "EventCode": "0x10064",
+    "EventName": "PM_LSU_REJECT",
+    "BriefDescription": "LSU Reject (up to 4 per cycle),",
+    "PublicDescription": "LSU Reject (up to 4 per cycle).,"
+  },
+  {
+    "EventCode": "0x2e05c",
+    "EventName": "PM_LSU_REJECT_ERAT_MISS",
+    "BriefDescription": "LSU Reject due to ERAT (up to 4 per cycles),",
+    "PublicDescription": "LSU Reject due to ERAT (up to 4 per cycles).,"
+  },
+  {
+    "EventCode": "0x4e05c",
+    "EventName": "PM_LSU_REJECT_LHS",
+    "BriefDescription": "LSU Reject due to LHS (up to 4 per cycle),",
+    "PublicDescription": "LSU Reject due to LHS (up to 4 per cycle).,"
+  },
+  {
+    "EventCode": "0x1e05c",
+    "EventName": "PM_LSU_REJECT_LMQ_FULL",
+    "BriefDescription": "LSU reject due to LMQ full ( 4 per cycle),",
+    "PublicDescription": "LSU reject due to LMQ full ( 4 per cycle).,"
+  },
+  {
+    "EventCode": "0xd082",
+    "EventName": "PM_LSU_SET_MPRED",
+    "BriefDescription": "Line already in cache at reload time,",
+    "PublicDescription": "Line already in cache at reload time42,"
+  },
+  {
+    "EventCode": "0x40008",
+    "EventName": "PM_LSU_SRQ_EMPTY_CYC",
+    "BriefDescription": "ALL threads srq empty,",
+    "PublicDescription": "All threads srq empty.,"
+  },
+  {
+    "EventCode": "0x1001a",
+    "EventName": "PM_LSU_SRQ_FULL_CYC",
+    "BriefDescription": "Storage Queue is full and is blocking dispatch,",
+    "PublicDescription": "SRQ is Full.,"
+  },
+  {
+    "EventCode": "0xd09d",
+    "EventName": "PM_LSU_SRQ_S0_ALLOC",
+    "BriefDescription": "Per thread - use edge detect to count allocates On a per thread basis, level signal indicating Slot 0 is valid. By instrumenting a single slot we can calculate service time for that slot. Previous machines required a separate signal indicating the slot was allocated. Because any signal can be routed to any counter in P8, we can count level in one PMC and edge detect in another PMC using the same signal,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xd09c",
+    "EventName": "PM_LSU_SRQ_S0_VALID",
+    "BriefDescription": "Slot 0 of SRQ valid,",
+    "PublicDescription": "Slot 0 of SRQ validSRQ slot 0 valid,"
+  },
+  {
+    "EventCode": "0xf093",
+    "EventName": "PM_LSU_SRQ_S39_ALLOC",
+    "BriefDescription": "SRQ slot 39 was released,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xf092",
+    "EventName": "PM_LSU_SRQ_S39_VALID",
+    "BriefDescription": "SRQ slot 39 was busy,",
+    "PublicDescription": "SRQ slot 39 was busy42,"
+  },
+  {
+    "EventCode": "0xd09b",
+    "EventName": "PM_LSU_SRQ_SYNC",
+    "BriefDescription": "A sync in the SRQ ended,",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0xd09a",
+    "EventName": "PM_LSU_SRQ_SYNC_CYC",
+    "BriefDescription": "A sync is in the SRQ (edge detect to count),",
+    "PublicDescription": "A sync is in the SRQ (edge detect to count)SRQ sync duration,"
+  },
+  {
+    "EventCode": "0xf084",
+    "EventName": "PM_LSU_STORE_REJECT",
+    "BriefDescription": "Store reject on either pipe,",
+    "PublicDescription": "LSU,"
+  },
+  {
+    "EventCode": "0xd0a6",
+    "EventName": "PM_LSU_TWO_TABLEWALK_CYC",
+    "BriefDescription": "Cycles when two tablewalks pending on this thread,",
+    "PublicDescription": "Cycles when two tablewalks pending on this thread42,"
+  },
+  {
+    "EventCode": "0x5094",
+    "EventName": "PM_LWSYNC",
+    "BriefDescription": "threaded version, IC Misses where we got EA dir hit but no sector valids were on. ICBI took line out,",
+    "PublicDescription": "threaded version, IC Misses where we got EA dir hit but no sector valids were on. ICBI took line out,"
+  },
+  {
+    "EventCode": "0x209a",
+    "EventName": "PM_LWSYNC_HELD",
+    "BriefDescription": "LWSYNC held at dispatch,",
+    "PublicDescription": "LWSYNC held at dispatch,"
+  },
+  {
+    "EventCode": "0x4c058",
+    "EventName": "PM_MEM_CO",
+    "BriefDescription": "Memory castouts from this lpar,",
+    "PublicDescription": "Memory castouts from this lpar.,"
+  },
+  {
+    "EventCode": "0x10058",
+    "EventName": "PM_MEM_LOC_THRESH_IFU",
+    "BriefDescription": "Local Memory above threshold for IFU speculation control,",
+    "PublicDescription": "Local Memory above threshold for IFU speculation control.,"
+  },
+  {
+    "EventCode": "0x40056",
+    "EventName": "PM_MEM_LOC_THRESH_LSU_HIGH",
+    "BriefDescription": "Local memory above threshold for LSU medium,",
+    "PublicDescription": "Local memory above threshold for LSU medium.,"
+  },
+  {
+    "EventCode": "0x1c05e",
+    "EventName": "PM_MEM_LOC_THRESH_LSU_MED",
+    "BriefDescription": "Local memory above theshold for data prefetch,",
+    "PublicDescription": "Local memory above theshold for data prefetch.,"
+  },
+  {
+    "EventCode": "0x2c058",
+    "EventName": "PM_MEM_PREF",
+    "BriefDescription": "Memory prefetch for this lpar. Includes L4,",
+    "PublicDescription": "Memory prefetch for this lpar.,"
+  },
+  {
+    "EventCode": "0x10056",
+    "EventName": "PM_MEM_READ",
+    "BriefDescription": "Reads from Memory from this lpar (includes data/inst/xlate/l1prefetch/inst prefetch). Includes L4,",
+    "PublicDescription": "Reads from Memory from this lpar (includes data/inst/xlate/l1prefetch/inst prefetch).,"
+  },
+  {
+    "EventCode": "0x3c05e",
+    "EventName": "PM_MEM_RWITM",
+    "BriefDescription": "Memory rwitm for this lpar,",
+    "PublicDescription": "Memory rwitm for this lpar.,"
+  },
+  {
+    "EventCode": "0x3515e",
+    "EventName": "PM_MRK_BACK_BR_CMPL",
+    "BriefDescription": "Marked branch instruction completed with a target address less than current instruction address,",
+    "PublicDescription": "Marked branch instruction completed with a target address less than current instruction address.,"
+  },
+  {
+    "EventCode": "0x2013a",
+    "EventName": "PM_MRK_BRU_FIN",
+    "BriefDescription": "bru marked instr finish,",
+    "PublicDescription": "bru marked instr finish.,"
+  },
+  {
+    "EventCode": "0x1016e",
+    "EventName": "PM_MRK_BR_CMPL",
+    "BriefDescription": "Branch Instruction completed,",
+    "PublicDescription": "Branch Instruction completed.,"
+  },
+  {
+    "EventCode": "0x301e4",
+    "EventName": "PM_MRK_BR_MPRED_CMPL",
+    "BriefDescription": "Marked Branch Mispredicted,",
+    "PublicDescription": "Marked Branch Mispredicted.,"
+  },
+  {
+    "EventCode": "0x101e2",
+    "EventName": "PM_MRK_BR_TAKEN_CMPL",
+    "BriefDescription": "Marked Branch Taken completed,",
+    "PublicDescription": "Marked Branch Taken.,"
+  },
+  {
+    "EventCode": "0x3013a",
+    "EventName": "PM_MRK_CRU_FIN",
+    "BriefDescription": "IFU non-branch finished,",
+    "PublicDescription": "IFU non-branch marked instruction finished.,"
+  },
+  {
+    "EventCode": "0x4d148",
+    "EventName": "PM_MRK_DATA_FROM_DL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d128",
+    "EventName": "PM_MRK_DATA_FROM_DL2L3_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d148",
+    "EventName": "PM_MRK_DATA_FROM_DL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c128",
+    "EventName": "PM_MRK_DATA_FROM_DL2L3_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d14c",
+    "EventName": "PM_MRK_DATA_FROM_DL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on a different Node or Group (Distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c12c",
+    "EventName": "PM_MRK_DATA_FROM_DL4_CYC",
+    "BriefDescription": "Duration in cycles to reload from another chip's L4 on a different Node or Group (Distant) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from another chip's L4 on a different Node or Group (Distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d14c",
+    "EventName": "PM_MRK_DATA_FROM_DMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group (Distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d12c",
+    "EventName": "PM_MRK_DATA_FROM_DMEM_CYC",
+    "BriefDescription": "Duration in cycles to reload from another chip's memory on the same Node or Group (Distant) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from another chip's memory on the same Node or Group (Distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d142",
+    "EventName": "PM_MRK_DATA_FROM_L2",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d146",
+    "EventName": "PM_MRK_DATA_FROM_L21_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L2 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d126",
+    "EventName": "PM_MRK_DATA_FROM_L21_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another core's L2 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another core's L2 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d146",
+    "EventName": "PM_MRK_DATA_FROM_L21_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L2 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c126",
+    "EventName": "PM_MRK_DATA_FROM_L21_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another core's L2 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another core's L2 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d14e",
+    "EventName": "PM_MRK_DATA_FROM_L2MISS",
+    "BriefDescription": "Data cache reload L2 miss,",
+    "PublicDescription": "Data cache reload L2 miss.,"
+  },
+  {
+    "EventCode": "0x4c12e",
+    "EventName": "PM_MRK_DATA_FROM_L2MISS_CYC",
+    "BriefDescription": "Duration in cycles to reload from a localtion other than the local core's L2 due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from a localtion other than the local core's L2 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c122",
+    "EventName": "PM_MRK_DATA_FROM_L2_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d140",
+    "EventName": "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with load hit store conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c120",
+    "EventName": "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_LDHITST_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 with load hit store conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 with load hit store conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d140",
+    "EventName": "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 with dispatch conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d120",
+    "EventName": "PM_MRK_DATA_FROM_L2_DISP_CONFLICT_OTHER_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 with dispatch conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 with dispatch conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d140",
+    "EventName": "PM_MRK_DATA_FROM_L2_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d120",
+    "EventName": "PM_MRK_DATA_FROM_L2_MEPF_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d140",
+    "EventName": "PM_MRK_DATA_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L2 without conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c120",
+    "EventName": "PM_MRK_DATA_FROM_L2_NO_CONFLICT_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L2 without conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L2 without conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d142",
+    "EventName": "PM_MRK_DATA_FROM_L3",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d144",
+    "EventName": "PM_MRK_DATA_FROM_L31_ECO_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's ECO L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d124",
+    "EventName": "PM_MRK_DATA_FROM_L31_ECO_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another core's ECO L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another core's ECO L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d144",
+    "EventName": "PM_MRK_DATA_FROM_L31_ECO_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's ECO L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c124",
+    "EventName": "PM_MRK_DATA_FROM_L31_ECO_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another core's ECO L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another core's ECO L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d144",
+    "EventName": "PM_MRK_DATA_FROM_L31_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another core's L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d124",
+    "EventName": "PM_MRK_DATA_FROM_L31_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another core's L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another core's L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d146",
+    "EventName": "PM_MRK_DATA_FROM_L31_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another core's L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c126",
+    "EventName": "PM_MRK_DATA_FROM_L31_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another core's L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another core's L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x201e4",
+    "EventName": "PM_MRK_DATA_FROM_L3MISS",
+    "BriefDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from a localtion other than the local core's L3 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d12e",
+    "EventName": "PM_MRK_DATA_FROM_L3MISS_CYC",
+    "BriefDescription": "Duration in cycles to reload from a localtion other than the local core's L3 due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from a localtion other than the local core's L3 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d122",
+    "EventName": "PM_MRK_DATA_FROM_L3_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L3 due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L3 due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d142",
+    "EventName": "PM_MRK_DATA_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 with dispatch conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c122",
+    "EventName": "PM_MRK_DATA_FROM_L3_DISP_CONFLICT_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L3 with dispatch conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L3 with dispatch conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d142",
+    "EventName": "PM_MRK_DATA_FROM_L3_MEPF",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d122",
+    "EventName": "PM_MRK_DATA_FROM_L3_MEPF_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d144",
+    "EventName": "PM_MRK_DATA_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from local core's L3 without conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c124",
+    "EventName": "PM_MRK_DATA_FROM_L3_NO_CONFLICT_CYC",
+    "BriefDescription": "Duration in cycles to reload from local core's L3 without conflict due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from local core's L3 without conflict due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d14c",
+    "EventName": "PM_MRK_DATA_FROM_LL4",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's L4 cache due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c12c",
+    "EventName": "PM_MRK_DATA_FROM_LL4_CYC",
+    "BriefDescription": "Duration in cycles to reload from the local chip's L4 cache due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from the local chip's L4 cache due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d148",
+    "EventName": "PM_MRK_DATA_FROM_LMEM",
+    "BriefDescription": "The processor's data cache was reloaded from the local chip's Memory due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from the local chip's Memory due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d128",
+    "EventName": "PM_MRK_DATA_FROM_LMEM_CYC",
+    "BriefDescription": "Duration in cycles to reload from the local chip's Memory due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from the local chip's Memory due to a marked load.,"
+  },
+  {
+    "EventCode": "0x201e0",
+    "EventName": "PM_MRK_DATA_FROM_MEM",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d14c",
+    "EventName": "PM_MRK_DATA_FROM_MEMORY",
+    "BriefDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from a memory location including L4 from local remote or distant due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d12c",
+    "EventName": "PM_MRK_DATA_FROM_MEMORY_CYC",
+    "BriefDescription": "Duration in cycles to reload from a memory location including L4 from local remote or distant due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from a memory location including L4 from local remote or distant due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d14a",
+    "EventName": "PM_MRK_DATA_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d12a",
+    "EventName": "PM_MRK_DATA_FROM_OFF_CHIP_CACHE_CYC",
+    "BriefDescription": "Duration in cycles to reload either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d148",
+    "EventName": "PM_MRK_DATA_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded either shared or modified data from another core's L2/L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c128",
+    "EventName": "PM_MRK_DATA_FROM_ON_CHIP_CACHE_CYC",
+    "BriefDescription": "Duration in cycles to reload either shared or modified data from another core's L2/L3 on the same chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload either shared or modified data from another core's L2/L3 on the same chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d146",
+    "EventName": "PM_MRK_DATA_FROM_RL2L3_MOD",
+    "BriefDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d126",
+    "EventName": "PM_MRK_DATA_FROM_RL2L3_MOD_CYC",
+    "BriefDescription": "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x1d14a",
+    "EventName": "PM_MRK_DATA_FROM_RL2L3_SHR",
+    "BriefDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4c12a",
+    "EventName": "PM_MRK_DATA_FROM_RL2L3_SHR_CYC",
+    "BriefDescription": "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2d14a",
+    "EventName": "PM_MRK_DATA_FROM_RL4",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's L4 on the same Node or Group ( Remote) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x4d12a",
+    "EventName": "PM_MRK_DATA_FROM_RL4_CYC",
+    "BriefDescription": "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from another chip's L4 on the same Node or Group ( Remote) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x3d14a",
+    "EventName": "PM_MRK_DATA_FROM_RMEM",
+    "BriefDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load,",
+    "PublicDescription": "The processor's data cache was reloaded from another chip's memory on the same Node or Group ( Remote) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x2c12a",
+    "EventName": "PM_MRK_DATA_FROM_RMEM_CYC",
+    "BriefDescription": "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load,",
+    "PublicDescription": "Duration in cycles to reload from another chip's memory on the same Node or Group ( Remote) due to a marked load.,"
+  },
+  {
+    "EventCode": "0x40118",
+    "EventName": "PM_MRK_DCACHE_RELOAD_INTV",
+    "BriefDescription": "Combined Intervention event,",
+    "PublicDescription": "Combined Intervention event.,"
+  },
+  {
+    "EventCode": "0x301e6",
+    "EventName": "PM_MRK_DERAT_MISS",
+    "BriefDescription": "Erat Miss (TLB Access) All page sizes,",
+    "PublicDescription": "Erat Miss (TLB Access) All page sizes.,"
+  },
+  {
+    "EventCode": "0x4d154",
+    "EventName": "PM_MRK_DERAT_MISS_16G",
+    "BriefDescription": "Marked Data ERAT Miss (Data TLB Access) page size 16G,",
+    "PublicDescription": "Marked Data ERAT Miss (Data TLB Access) page size 16G.,"
+  },
+  {
+    "EventCode": "0x3d154",
+    "EventName": "PM_MRK_DERAT_MISS_16M",
+    "BriefDescription": "Marked Data ERAT Miss (Data TLB Access) page size 16M,",
+    "PublicDescription": "Marked Data ERAT Miss (Data TLB Access) page size 16M.,"
+  },
+  {
+    "EventCode": "0x1d156",
+    "EventName": "PM_MRK_DERAT_MISS_4K",
+    "BriefDescription": "Marked Data ERAT Miss (Data TLB Access) page size 4K,",
+    "PublicDescription": "Marked Data ERAT Miss (Data TLB Access) page size 4K.,"
+  },
+  {
+    "EventCode": "0x2d154",
+    "EventName": "PM_MRK_DERAT_MISS_64K",
+    "BriefDescription": "Marked Data ERAT Miss (Data TLB Access) page size 64K,",
+    "PublicDescription": "Marked Data ERAT Miss (Data TLB Access) page size 64K.,"
+  },
+  {
+    "EventCode": "0x20132",
+    "EventName": "PM_MRK_DFU_FIN",
+    "BriefDescription": "Decimal Unit marked Instruction Finish,",
+    "PublicDescription": "Decimal Unit marked Instruction Finish.,"
+  },
+  {
+    "EventCode": "0x4f148",
+    "EventName": "PM_MRK_DPTEG_FROM_DL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f148",
+    "EventName": "PM_MRK_DPTEG_FROM_DL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on a different Node or Group (Distant), as this chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f14c",
+    "EventName": "PM_MRK_DPTEG_FROM_DL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on a different Node or Group (Distant) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f14c",
+    "EventName": "PM_MRK_DPTEG_FROM_DMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group (Distant) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f142",
+    "EventName": "PM_MRK_DPTEG_FROM_L2",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f146",
+    "EventName": "PM_MRK_DPTEG_FROM_L21_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L2 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f146",
+    "EventName": "PM_MRK_DPTEG_FROM_L21_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L2 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f14e",
+    "EventName": "PM_MRK_DPTEG_FROM_L2MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L2 due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f140",
+    "EventName": "PM_MRK_DPTEG_FROM_L2_DISP_CONFLICT_LDHITST",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with load hit store conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f140",
+    "EventName": "PM_MRK_DPTEG_FROM_L2_DISP_CONFLICT_OTHER",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 with dispatch conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f140",
+    "EventName": "PM_MRK_DPTEG_FROM_L2_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 hit without dispatch conflicts on Mepf state. due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f140",
+    "EventName": "PM_MRK_DPTEG_FROM_L2_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L2 without conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f142",
+    "EventName": "PM_MRK_DPTEG_FROM_L3",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f144",
+    "EventName": "PM_MRK_DPTEG_FROM_L31_ECO_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's ECO L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f144",
+    "EventName": "PM_MRK_DPTEG_FROM_L31_ECO_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's ECO L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f144",
+    "EventName": "PM_MRK_DPTEG_FROM_L31_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another core's L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f146",
+    "EventName": "PM_MRK_DPTEG_FROM_L31_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another core's L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f14e",
+    "EventName": "PM_MRK_DPTEG_FROM_L3MISS",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a localtion other than the local core's L3 due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f142",
+    "EventName": "PM_MRK_DPTEG_FROM_L3_DISP_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 with dispatch conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f142",
+    "EventName": "PM_MRK_DPTEG_FROM_L3_MEPF",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without dispatch conflicts hit on Mepf state. due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f144",
+    "EventName": "PM_MRK_DPTEG_FROM_L3_NO_CONFLICT",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from local core's L3 without conflict due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f14c",
+    "EventName": "PM_MRK_DPTEG_FROM_LL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's L4 cache due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f148",
+    "EventName": "PM_MRK_DPTEG_FROM_LMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from the local chip's Memory due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f14c",
+    "EventName": "PM_MRK_DPTEG_FROM_MEMORY",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from a memory location including L4 from local remote or distant due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x4f14a",
+    "EventName": "PM_MRK_DPTEG_FROM_OFF_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on a different chip (remote or distant) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f148",
+    "EventName": "PM_MRK_DPTEG_FROM_ON_CHIP_CACHE",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB either shared or modified data from another core's L2/L3 on the same chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f146",
+    "EventName": "PM_MRK_DPTEG_FROM_RL2L3_MOD",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Modified (M) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x1f14a",
+    "EventName": "PM_MRK_DPTEG_FROM_RL2L3_SHR",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB with Shared (S) data from another chip's L2 or L3 on the same Node or Group (Remote), as this chip due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x2f14a",
+    "EventName": "PM_MRK_DPTEG_FROM_RL4",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's L4 on the same Node or Group ( Remote) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x3f14a",
+    "EventName": "PM_MRK_DPTEG_FROM_RMEM",
+    "BriefDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request,",
+    "PublicDescription": "A Page Table Entry was loaded into the TLB from another chip's memory on the same Node or Group ( Remote) due to a marked data side request.,"
+  },
+  {
+    "EventCode": "0x401e4",
+    "EventName": "PM_MRK_DTLB_MISS",
+    "BriefDescription": "Marked dtlb miss,",
+    "PublicDescription": "Marked dtlb miss.,"
+  },
+  {
+    "EventCode": "0x1d158",
+    "EventName": "PM_MRK_DTLB_MISS_16G",
+    "BriefDescription": "Marked Data TLB Miss page size 16G,",
+    "PublicDescription": "Marked Data TLB Miss page size 16G.,"
+  },
+  {
+    "EventCode": "0x4d156",
+    "EventName": "PM_MRK_DTLB_MISS_16M",
+    "BriefDescription": "Marked Data TLB Miss page size 16M,",
+    "PublicDescription": "Marked Data TLB Miss page size 16M.,"
+  },
+  {
+    "EventCode": "0x2d156",
+    "EventName": "PM_MRK_DTLB_MISS_4K",
+    "BriefDescription": "Marked Data TLB Miss page size 4k,",
+    "PublicDescription": "Marked Data TLB Miss page size 4k.,"
+  },
+  {
+    "EventCode": "0x3d156",
+    "EventName": "PM_MRK_DTLB_MISS_64K",
+    "BriefDescription": "Marked Data TLB Miss page size 64K,",
+    "PublicDescription": "Marked Data TLB Miss page size 64K.,"
+  },
+  {
+    "EventCode": "0x40154",
+    "EventName": "PM_MRK_FAB_RSP_BKILL",
+    "BriefDescription": "Marked store had to do a bkill,",
+    "PublicDescription": "Marked store had to do a bkill.,"
+  },
+  {
+    "EventCode": "0x2f150",
+    "EventName": "PM_MRK_FAB_RSP_BKILL_CYC",
+    "BriefDescription": "cycles L2 RC took for a bkill,",
+    "PublicDescription": "cycles L2 RC took for a bkill.,"
+  },
+  {
+    "EventCode": "0x3015e",
+    "EventName": "PM_MRK_FAB_RSP_CLAIM_RTY",
+    "BriefDescription": "Sampled store did a rwitm and got a rty,",
+    "PublicDescription": "Sampled store did a rwitm and got a rty.,"
+  },
+  {
+    "EventCode": "0x30154",
+    "EventName": "PM_MRK_FAB_RSP_DCLAIM",
+    "BriefDescription": "Marked store had to do a dclaim,",
+    "PublicDescription": "Marked store had to do a dclaim.,"
+  },
+  {
+    "EventCode": "0x2f152",
+    "EventName": "PM_MRK_FAB_RSP_DCLAIM_CYC",
+    "BriefDescription": "cycles L2 RC took for a dclaim,",
+    "PublicDescription": "cycles L2 RC took for a dclaim.,"
+  },
+  {
+    "EventCode": "0x30156",
+    "EventName": "PM_MRK_FAB_RSP_MATCH",
+    "BriefDescription": "ttype and cresp matched as specified in MMCR1,",
+    "PublicDescription": "ttype and cresp matched as specified in MMCR1.,"
+  },
+  {
+    "EventCode": "0x4f152",
+    "EventName": "PM_MRK_FAB_RSP_MATCH_CYC",
+    "BriefDescription": "cresp/ttype match cycles,",
+    "PublicDescription": "cresp/ttype match cycles.,"
+  },
+  {
+    "EventCode": "0x4015e",
+    "EventName": "PM_MRK_FAB_RSP_RD_RTY",
+    "BriefDescription": "Sampled L2 reads retry count,",
+    "PublicDescription": "Sampled L2 reads retry count.,"
+  },
+  {
+    "EventCode": "0x1015e",
+    "EventName": "PM_MRK_FAB_RSP_RD_T_INTV",
+    "BriefDescription": "Sampled Read got a T intervention,",
+    "PublicDescription": "Sampled Read got a T intervention.,"
+  },
+  {
+    "EventCode": "0x4f150",
+    "EventName": "PM_MRK_FAB_RSP_RWITM_CYC",
+    "BriefDescription": "cycles L2 RC took for a rwitm,",
+    "PublicDescription": "cycles L2 RC took for a rwitm.,"
+  },
+  {
+    "EventCode": "0x2015e",
+    "EventName": "PM_MRK_FAB_RSP_RWITM_RTY",
+    "BriefDescription": "Sampled store did a rwitm and got a rty,",
+    "PublicDescription": "Sampled store did a rwitm and got a rty.,"
+  },
+  {
+    "EventCode": "0x2013c",
+    "EventName": "PM_MRK_FILT_MATCH",
+    "BriefDescription": "Marked filter Match,",
+    "PublicDescription": "Marked filter Match.,"
+  },
+  {
+    "EventCode": "0x1013c",
+    "EventName": "PM_MRK_FIN_STALL_CYC",
+    "BriefDescription": "Marked instruction Finish Stall cycles (marked finish after NTC) (use edge detect to count ),",
+    "PublicDescription": "Marked instruction Finish Stall cycles (marked finish after NTC) (use edge detect to count #).,"
+  },
+  {
+    "EventCode": "0x20134",
+    "EventName": "PM_MRK_FXU_FIN",
+    "BriefDescription": "fxu marked instr finish,",
+    "PublicDescription": "fxu marked instr finish.,"
+  },
+  {
+    "EventCode": "0x40130",
+    "EventName": "PM_MRK_GRP_CMPL",
+    "BriefDescription": "marked instruction finished (completed),",
+    "PublicDescription": "marked instruction finished (completed).,"
+  },
+  {
+    "EventCode": "0x4013a",
+    "EventName": "PM_MRK_GRP_IC_MISS",
+    "BriefDescription": "Marked Group experienced I cache miss,",
+    "PublicDescription": "Marked Group experienced I cache miss.,"
+  },
+  {
+    "EventCode": "0x3013c",
+    "EventName": "PM_MRK_GRP_NTC",
+    "BriefDescription": "Marked group ntc cycles.,",
+    "PublicDescription": "Marked group ntc cycles.,"
+  },
+  {
+    "EventCode": "0x401e0",
+    "EventName": "PM_MRK_INST_CMPL",
+    "BriefDescription": "marked instruction completed,",
+    "PublicDescription": "marked instruction completed.,"
+  },
+  {
+    "EventCode": "0x20130",
+    "EventName": "PM_MRK_INST_DECODED",
+    "BriefDescription": "marked instruction decoded,",
+    "PublicDescription": "marked instruction decoded. Name from ISU?,"
+  },
+  {
+    "EventCode": "0x101e0",
+    "EventName": "PM_MRK_INST_DISP",
+    "BriefDescription": "The thread has dispatched a randomly sampled marked instruction,",
+    "PublicDescription": "Marked Instruction dispatched.,"
+  },
+  {
+    "EventCode": "0x30130",
+    "EventName": "PM_MRK_INST_FIN",
+    "BriefDescription": "marked instruction finished,",
+    "PublicDescription": "marked instr finish any unit .,"
+  },
+  {
+    "EventCode": "0x401e6",
+    "EventName": "PM_MRK_INST_FROM_L3MISS",
+    "BriefDescription": "Marked instruction was reloaded from a location beyond the local chiplet,",
+    "PublicDescription": "n/a,"
+  },
+  {
+    "EventCode": "0x10132",
+    "EventName": "PM_MRK_INST_ISSUED",
+    "BriefDescription": "Marked instruction issued,",
+    "PublicDescription": "Marked instruction issued.,"
+  },
+  {
+    "EventCode": "0x40134",
+    "EventName": "PM_MRK_INST_TIMEO",
+    "BriefDescription": "marked Instruction finish timeout (instruction lost),",
+    "PublicDescription": "marked Instruction finish timeout (instruction lost).,"
+  },
+  {
+    "EventCode": "0x101e4",
+    "EventName": "PM_MRK_L1_ICACHE_MISS",
+    "BriefDescription": "sampled Instruction suffered an icache Miss,",
+    "PublicDescription": "Marked L1 Icache Miss.,"
+  },
+  {
+    "EventCode": "0x101ea",
+    "EventName": "PM_MRK_L1_RELOAD_VALID",
+    "BriefDescription": "Marked demand reload,",
+    "PublicDescription": "Marked demand reload.,"
+  },
+  {
+    "EventCode": "0x20114",
+    "EventName": "PM_MRK_L2_RC_DISP",
+    "BriefDescription": "Marked Instruction RC dispatched in L2,",
+    "PublicDescription": "Marked Instruction RC dispatched in L2.,"
+  },
+  {
+    "EventCode": "0x3012a",
+    "EventName": "PM_MRK_L2_RC_DONE",
+    "BriefDescription": "Marked RC done,",
+    "PublicDescription": "Marked RC done.,"
+  },
+  {
+    "EventCode": "0x40116",
+    "EventName": "PM_MRK_LARX_FIN",
+    "BriefDescription": "Larx finished,",
+    "PublicDescription": "Larx finished .,"
+  },
+  {
+    "EventCode": "0x1013f",
+    "EventName": "PM_MRK_LD_MISS_EXPOSED",
+    "BriefDescription": "Marked Load exposed Miss (exposed period ended),",
+    "PublicDescription": "Marked Load exposed Miss (use edge detect to count #),"
+  },
+  {
+    "EventCode": "0x1013e",
+    "EventName": "PM_MRK_LD_MISS_EXPOSED_CYC",
+    "BriefDescription": "Marked Load exposed Miss cycles,",
+    "PublicDescription": "Marked Load exposed Miss (use edge detect to count #).,"
+  },
+  {
+    "EventCode": "0x201e2",
+    "EventName": "PM_MRK_LD_MISS_L1",
+    "BriefDescription": "Marked DL1 Demand Miss counted at exec time,",
+    "PublicDescription": "Marked DL1 Demand Miss counted at exec time.,"
+  },
+  {
+    "EventCode": "0x4013e",
+    "EventName": "PM_MRK_LD_MISS_L1_CYC",
+    "BriefDescription": "Marked ld latency,",
+    "PublicDescription": "Marked ld latency.,"
+  },
+  {
+    "EventCode": "0x40132",
+    "EventName": "PM_MRK_LSU_FIN",
+    "BriefDescription": "lsu marked instr finish,",
+    "PublicDescription": "lsu marked instr finish.,"
+  },
+  {
+    "EventCode": "0xd180",
+    "EventName": "PM_MRK_LSU_FLUSH",
+    "BriefDescription": "Flush: (marked) : All Cases,",
+    "PublicDescription": "Flush: (marked) : All Cases42,"
+  },
+  {
+    "EventCode": "0xd188",
+    "EventName": "PM_MRK_LSU_FLUSH_LRQ",
+    "BriefDescription": "Flush: (marked) LRQ,",
+    "PublicDescription": "Flush: (marked) LRQMarked LRQ flushes,"
+  },
+  {
+    "EventCode": "0xd18a",
+    "EventName": "PM_MRK_LSU_FLUSH_SRQ",
+    "BriefDescription": "Flush: (marked) SRQ,",
+    "PublicDescription": "Flush: (marked) SRQMarked SRQ lhs flushes,"
+  },
+  {
+    "EventCode": "0xd184",
+    "EventName": "PM_MRK_LSU_FLUSH_ULD",
+    "BriefDescription": "Flush: (marked) Unaligned Load,",
+    "PublicDescription": "Flush: (marked) Unaligned LoadMarked unaligned load flushes,"
+  },
+  {
+    "EventCode": "0xd186",
+    "EventName": "PM_MRK_LSU_FLUSH_UST",
+    "BriefDescription": "Flush: (marked) Unaligned Store,",
+    "PublicDescription": "Flush: (marked) Unaligned StoreMarked unaligned store flushes,"
+  },
+  {
+    "EventCode": "0x40164",
+    "EventName": "PM_MRK_LSU_REJECT",
+    "BriefDescription": "LSU marked reject (up to 2 per cycle),",
+    "PublicDescription": "LSU marked reject (up to 2 per cycle).,"
+  },
+  {
+    "EventCode": "0x30164",
+    "EventName": "PM_MRK_LSU_REJECT_ERAT_MISS",
+    "BriefDescription": "LSU marked reject due to ERAT (up to 2 per cycle),",
+    "PublicDescription": "LSU marked reject due to ERAT (up to 2 per cycle).,"
+  },
+  {
+    "EventCode": "0x20112",
+    "EventName": "PM_MRK_NTF_FIN",
+    "BriefDescription": "Marked next to finish instruction finished,",
+    "PublicDescription": "Marked next to finish instruction finished.,"
+  },
+  {
+    "EventCode": "0x1d15e",
+    "EventName": "PM_MRK_RUN_CYC",
+    "BriefDescription": "Marked run cycles,",
+    "PublicDescription": "Marked run cycles.,"
+  },
+  {
+    "EventCode": "0x1d15a",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_EFF",
+    "BriefDescription": "Marked src pref track was effective,",
+    "PublicDescription": "Marked src pref track was effective.,"
+  },
+  {
+    "EventCode": "0x3d15a",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_INEFF",
+    "BriefDescription": "Prefetch tracked was ineffective for marked src,",
+    "PublicDescription": "Prefetch tracked was ineffective for marked src.,"
+  },
+  {
+    "EventCode": "0x4d15c",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_MOD",
+    "BriefDescription": "Prefetch tracked was moderate for marked src,",
+    "PublicDescription": "Prefetch tracked was moderate for marked src.,"
+  },
+  {
+    "EventCode": "0x1d15c",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_MOD_L2",
+    "BriefDescription": "Marked src Prefetch Tracked was moderate (source L2),",
+    "PublicDescription": "Marked src Prefetch Tracked was moderate (source L2).,"
+  },
+  {
+    "EventCode": "0x3d15c",
+    "EventName": "PM_MRK_SRC_PREF_TRACK_MOD_L3",
+    "BriefDescription": "Prefetch tracked was moderate (L3 hit) for marked src,",
+    "PublicDescription": "Prefetch tracked was moderate (L3 hit) for marked src.,"
+  },
+  {
+    "EventCode": "0x3013e",
+    "EventName": "PM_MRK_STALL_CMPLU_CYC",
+    "BriefDescription": "Marked Group completion Stall,",
+    "PublicDescription": "Marked Group Completion Stall cycles (use edge detect to count #).,"
+  },
+  {
+    "EventCode": "0x3e158",
+    "EventName": "PM_MRK_STCX_FAIL",
+    "BriefDescription": "marked stcx failed,",
+    "PublicDescription": "marked stcx failed.,"
+  },
+  {
+    "EventCode": "0x10134",
+    "EventName": "PM_MRK_ST_CMPL",
+    "BriefDescription": "marked store completed and sent to nest,",
+    "PublicDescription": "Marked store completed.,"
+  },
+  {
+    "EventCode": "0x30134",
+    "EventName": "PM_MRK_ST_CMPL_INT",
+    "BriefDescription": "marked store finished with intervention,",
+    "PublicDescription": "marked store complete (data home) with intervention.,"
+  },
+  {
+    "EventCode": "0x3f150",
+    "EventName": "PM_MRK_ST_DRAIN_TO_L2DISP_CYC",
+    "BriefDescription": "cycles to drain st from core to L2,",
+    "PublicDescription": "cycles to drain st from core to L2.,"
+  },
+  {
+    "EventCode": "0x3012c",
+    "EventName": "PM_MRK_ST_FWD",
+    "BriefDescription": "Marked st forwards,",
+    "PublicDescription": "Marked st forwards.,"
+  },
+  {
+    "EventCode": "0x1f150",
+    "EventName": "PM_MRK_ST_L2DISP_TO_CMPL_CYC",
+    "BriefDescription": "cycles from L2 rc disp to l2 rc completion,",
+    "PublicDescription": "cycles from L2 rc disp to l2 rc completion.,"
+  },
+  {
+    "EventCode": "0x20138",
+    "EventName": "PM_MRK_ST_NEST",
+    "BriefDescription": "Marked store sent to nest,",
+    "PublicDescription": "Marked store sent to nest.,"
+  },
+  {
+    "EventCode": "0x1c15a",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_EFF",
+    "BriefDescription": "Marked target pref track was effective,",
+    "PublicDescription": "Marked target pref track was effective.,"
+  },
+  {
+    "EventCode": "0x3c15a",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_INEFF",
+    "BriefDescription": "Prefetch tracked was ineffective for marked target,",
+    "PublicDescription": "Prefetch tracked was ineffective for marked target.,"
+  },
+  {
+    "EventCode": "0x4c15c",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_MOD",
+    "BriefDescription": "Prefetch tracked was moderate for marked target,",
+    "PublicDescription": "Prefetch tracked was moderate for marked target.,"
+  },
+  {
+    "EventCode": "0x1c15c",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_MOD_L2",
+    "BriefDescription": "Marked target Prefetch Tracked was moderate (source L2),",
+    "PublicDescription": "Marked target Prefetch Tracked was moderate (source L2).,"
+  },
+  {
+    "EventCode": "0x3c15c",
+    "EventName": "PM_MRK_TGT_PREF_TRACK_MOD_L3",
+    "BriefDescription": "Prefetch tracked was moderate (L3 hit) for marked target,",
+    "PublicDescription": "Prefetch tracked was moderate (L3 hit) for marked target.,"
+  },
+  {
+    "EventCode": "0x30132",
+    "EventName": "PM_MRK_VSU_FIN",
+    "BriefDescription": "VSU marked instr finish,",
+    "PublicDescription": "vsu (fpu) marked instr finish.,"
+  },
+  {
+    "EventCode": "0x3d15e",
+    "EventName": "PM_MULT_MRK",
+    "BriefDescription": "mult marked instr,",
+    "PublicDescription": "mult marked instr.,"
+  },
+  {
+    "EventCode": "0x20b0",
+    "EventName": "PM_NESTED_TEND",
+    "BriefDescription": "Completion time nested tend,",
+    "PublicDescription": "Completion time nested tend,"
+  },
+  {
+    "EventCode": "0x3006e",
+    "EventName": "PM_NEST_REF_CLK",
+    "BriefDescription": "Multiply by 4 to obtain the number of PB cycles,",
+    "PublicDescription": "Nest reference clocks.,"
+  },
+  {
+    "EventCode": "0x20b6",
+    "EventName": "PM_NON_FAV_TBEGIN",
+    "BriefDescription": "Dispatch time non favored tbegin,",
+    "PublicDescription": "Dispatch time non favored tbegin,"
+  },
+  {
+    "EventCode": "0x2001a",
+    "EventName": "PM_NTCG_ALL_FIN",
+    "BriefDescription": "Cycles after all instructions have finished to group completed,",
+    "PublicDescription": "Ccycles after all instructions have finished to group completed.,"
+  },
+  {
+    "EventCode": "0x20ac",
+    "EventName": "PM_OUTER_TBEGIN",
+    "BriefDescription": "Completion time outer tbegin,",
+    "PublicDescription": "Completion time outer tbegin,"
+  },
+  {
+    "EventCode": "0x20ae",
+    "EventName": "PM_OUTER_TEND",
+    "BriefDescription": "Completion time outer tend,",
+    "PublicDescription": "Completion time outer tend,"
+  },
+  {
+    "EventCode": "0x20010",
+    "EventName": "PM_PMC1_OVERFLOW",
+    "BriefDescription": "Overflow from counter 1,",
+    "PublicDescription": "Overflow from counter 1.,"
+  },
+  {
+    "EventCode": "0x30010",
+    "EventName": "PM_PMC2_OVERFLOW",
+    "BriefDescription": "Overflow from counter 2,",
+    "PublicDescription": "Overflow from counter 2.,"
+  },
+  {
+    "EventCode": "0x30020",
+    "EventName": "PM_PMC2_REWIND",
+    "BriefDescription": "PMC2 Rewind Event (did not match condition),",
+    "PublicDescription": "PMC2 Rewind Event (did not match condition).,"
+  },
+  {
+    "EventCode": "0x10022",
+    "EventName": "PM_PMC2_SAVED",
+    "BriefDescription": "PMC2 Rewind Value saved,",
+    "PublicDescription": "PMC2 Rewind Value saved (matched condition).,"
+  },
+  {
+    "EventCode": "0x40010",
+    "EventName": "PM_PMC3_OVERFLOW",
+    "BriefDescription": "Overflow from counter 3,",
+    "PublicDescription": "Overflow from counter 3.,"
+  },
+  {
+    "EventCode": "0x10010",
+    "EventName": "PM_PMC4_OVERFLOW",
+    "BriefDescription": "Overflow from counter 4,",
+    "PublicDescription": "Overflow from counter 4.,"
+  },
+  {
+    "EventCode": "0x10020",
+    "EventName": "PM_PMC4_REWIND",
+    "BriefDescription": "PMC4 Rewind Event,",
+    "PublicDescription": "PMC4 Rewind Event (did not match condition).,"
+  },
+  {
+    "EventCode": "0x30022",
+    "EventName": "PM_PMC4_SAVED",
+    "BriefDescription": "PMC4 Rewind Value saved (matched condition),",
+    "PublicDescription": "PMC4 Rewind Value saved (matched condition).,"
+  },
+  {
+    "EventCode": "0x10024",
+    "EventName": "PM_PMC5_OVERFLOW",
+    "BriefDescription": "Overflow from counter 5,",
+    "PublicDescription": "Overflow from counter 5.,"
+  },
+  {
+    "EventCode": "0x30024",
+    "EventName": "PM_PMC6_OVERFLOW",
+    "BriefDescription": "Overflow from counter 6,",
+    "PublicDescription": "Overflow from counter 6.,"
+  },
+  {
+    "EventCode": "0x2005a",
+    "EventName": "PM_PREF_TRACKED",
+    "BriefDescription": "Total number of Prefetch Operations that were tracked,",
+    "PublicDescription": "Total number of Prefetch Operations that were tracked.,"
+  },
+  {
+    "EventCode": "0x1005a",
+    "EventName": "PM_PREF_TRACK_EFF",
+    "BriefDescription": "Prefetch Tracked was effective,",
+    "PublicDescription": "Prefetch Tracked was effective.,"
+  },
+  {
+    "EventCode": "0x3005a",
+    "EventName": "PM_PREF_TRACK_INEFF",
+    "BriefDescription": "Prefetch tracked was ineffective,",
+    "PublicDescription": "Prefetch tracked was ineffective.,"
+  },
+  {
+    "EventCode": "0x4005a",
+    "EventName": "PM_PREF_TRACK_MOD",
+    "BriefDescription": "Prefetch tracked was moderate,",
+    "PublicDescription": "Prefetch tracked was moderate.,"
+  },
+  {
+    "EventCode": "0x1005c",
+    "EventName": "PM_PREF_TRACK_MOD_L2",
+    "BriefDescription": "Prefetch Tracked was moderate (source L2),",
+    "PublicDescription": "Prefetch Tracked was moderate (source L2).,"
+  },
+  {
+    "EventCode": "0x3005c",
+    "EventName": "PM_PREF_TRACK_MOD_L3",
+    "BriefDescription": "Prefetch tracked was moderate (L3),",
+    "PublicDescription": "Prefetch tracked was moderate (L3).,"
+  },
+  {
+    "EventCode": "0x40014",
+    "EventName": "PM_PROBE_NOP_DISP",
+    "BriefDescription": "ProbeNops dispatched,",
+    "PublicDescription": "ProbeNops dispatched.,"
+  },
+  {
+    "EventCode": "0xe084",
+    "EventName": "PM_PTE_PREFETCH",
+    "BriefDescription": "PTE prefetches,",
+    "PublicDescription": "PTE prefetches42,"
+  },
+  {
+    "EventCode": "0x10054",
+    "EventName": "PM_PUMP_CPRED",
+    "BriefDescription": "Pump prediction correct. Counts across all types of pumps for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Pump prediction correct. Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x40052",
+    "EventName": "PM_PUMP_MPRED",
+    "BriefDescription": "Pump misprediction. Counts across all types of pumps for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Pump Mis prediction Counts across all types of pumpsfor all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x16081",
+    "EventName": "PM_RC0_ALLOC",
+    "BriefDescription": "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x16080",
+    "EventName": "PM_RC0_BUSY",
+    "BriefDescription": "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "RC mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),"
+  },
+  {
+    "EventCode": "0x200301ea",
+    "EventName": "PM_RC_LIFETIME_EXC_1024",
+    "BriefDescription": "Number of times the RC machine for a sampled instruction was active for more than 1024 cycles,",
+    "PublicDescription": "Reload latency exceeded 1024 cyc,"
+  },
+  {
+    "EventCode": "0x200401ec",
+    "EventName": "PM_RC_LIFETIME_EXC_2048",
+    "BriefDescription": "Number of times the RC machine for a sampled instruction was active for more than 2048 cycles,",
+    "PublicDescription": "Threshold counter exceeded a value of 2048,"
+  },
+  {
+    "EventCode": "0x200101e8",
+    "EventName": "PM_RC_LIFETIME_EXC_256",
+    "BriefDescription": "Number of times the RC machine for a sampled instruction was active for more than 256 cycles,",
+    "PublicDescription": "Threshold counter exceed a count of 256,"
+  },
+  {
+    "EventCode": "0x200201e6",
+    "EventName": "PM_RC_LIFETIME_EXC_32",
+    "BriefDescription": "Number of times the RC machine for a sampled instruction was active for more than 32 cycles,",
+    "PublicDescription": "Reload latency exceeded 32 cyc,"
+  },
+  {
+    "EventCode": "0x36088",
+    "EventName": "PM_RC_USAGE",
+    "BriefDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 RC machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,",
+    "PublicDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 RC machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,"
+  },
+  {
+    "EventCode": "0x20004",
+    "EventName": "PM_REAL_SRQ_FULL",
+    "BriefDescription": "Out of real srq entries,",
+    "PublicDescription": "Out of real srq entries.,"
+  },
+  {
+    "EventCode": "0x600f4",
+    "EventName": "PM_RUN_CYC",
+    "BriefDescription": "Run_cycles,",
+    "PublicDescription": "Run_cycles.,"
+  },
+  {
+    "EventCode": "0x3006c",
+    "EventName": "PM_RUN_CYC_SMT2_MODE",
+    "BriefDescription": "Cycles run latch is set and core is in SMT2 mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT2 mode.,"
+  },
+  {
+    "EventCode": "0x2006a",
+    "EventName": "PM_RUN_CYC_SMT2_SHRD_MODE",
+    "BriefDescription": "cycles this threads run latch is set and the core is in SMT2 shared mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT2-shared mode.,"
+  },
+  {
+    "EventCode": "0x1006a",
+    "EventName": "PM_RUN_CYC_SMT2_SPLIT_MODE",
+    "BriefDescription": "Cycles run latch is set and core is in SMT2-split mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT2-split mode.,"
+  },
+  {
+    "EventCode": "0x2006c",
+    "EventName": "PM_RUN_CYC_SMT4_MODE",
+    "BriefDescription": "cycles this threads run latch is set and the core is in SMT4 mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT4 mode.,"
+  },
+  {
+    "EventCode": "0x4006c",
+    "EventName": "PM_RUN_CYC_SMT8_MODE",
+    "BriefDescription": "Cycles run latch is set and core is in SMT8 mode,",
+    "PublicDescription": "Cycles run latch is set and core is in SMT8 mode.,"
+  },
+  {
+    "EventCode": "0x1006c",
+    "EventName": "PM_RUN_CYC_ST_MODE",
+    "BriefDescription": "Cycles run latch is set and core is in ST mode,",
+    "PublicDescription": "Cycles run latch is set and core is in ST mode.,"
+  },
+  {
+    "EventCode": "0x500fa",
+    "EventName": "PM_RUN_INST_CMPL",
+    "BriefDescription": "Run_Instructions,",
+    "PublicDescription": "Run_Instructions.,"
+  },
+  {
+    "EventCode": "0x400f4",
+    "EventName": "PM_RUN_PURR",
+    "BriefDescription": "Run_PURR,",
+    "PublicDescription": "Run_PURR.,"
+  },
+  {
+    "EventCode": "0x10008",
+    "EventName": "PM_RUN_SPURR",
+    "BriefDescription": "Run SPURR,",
+    "PublicDescription": "Run SPURR.,"
+  },
+  {
+    "EventCode": "0xf082",
+    "EventName": "PM_SEC_ERAT_HIT",
+    "BriefDescription": "secondary ERAT Hit,",
+    "PublicDescription": "secondary ERAT Hit42,"
+  },
+  {
+    "EventCode": "0x508c",
+    "EventName": "PM_SHL_CREATED",
+    "BriefDescription": "Store-Hit-Load Table Entry Created,",
+    "PublicDescription": "Store-Hit-Load Table Entry Created,"
+  },
+  {
+    "EventCode": "0x508e",
+    "EventName": "PM_SHL_ST_CONVERT",
+    "BriefDescription": "Store-Hit-Load Table Read Hit with entry Enabled,",
+    "PublicDescription": "Store-Hit-Load Table Read Hit with entry Enabled,"
+  },
+  {
+    "EventCode": "0x5090",
+    "EventName": "PM_SHL_ST_DISABLE",
+    "BriefDescription": "Store-Hit-Load Table Read Hit with entry Disabled (entry was disabled due to the entry shown to not prevent the flush),",
+    "PublicDescription": "Store-Hit-Load Table Read Hit with entry Disabled (entry was disabled due to the entry shown to not prevent the flush),"
+  },
+  {
+    "EventCode": "0x26085",
+    "EventName": "PM_SN0_ALLOC",
+    "BriefDescription": "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "0.0,"
+  },
+  {
+    "EventCode": "0x26084",
+    "EventName": "PM_SN0_BUSY",
+    "BriefDescription": "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),",
+    "PublicDescription": "SN mach 0 Busy. Used by PMU to sample ave RC livetime(mach0 used as sample point),"
+  },
+  {
+    "EventCode": "0xd0b2",
+    "EventName": "PM_SNOOP_TLBIE",
+    "BriefDescription": "TLBIE snoop,",
+    "PublicDescription": "TLBIE snoopSnoop TLBIE,"
+  },
+  {
+    "EventCode": "0x4608c",
+    "EventName": "PM_SN_USAGE",
+    "BriefDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 SN machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,",
+    "PublicDescription": "Continuous 16 cycle(2to1) window where this signals rotates thru sampling each L2 SN machine busy. PMU uses this wave to then do 16 cyc count to sample total number of machs running,"
+  },
+  {
+    "EventCode": "0x10028",
+    "EventName": "PM_STALL_END_GCT_EMPTY",
+    "BriefDescription": "Count ended because GCT went empty,",
+    "PublicDescription": "Count ended because GCT went empty.,"
+  },
+  {
+    "EventCode": "0x1e058",
+    "EventName": "PM_STCX_FAIL",
+    "BriefDescription": "stcx failed,",
+    "PublicDescription": "stcx failed .,"
+  },
+  {
+    "EventCode": "0xc090",
+    "EventName": "PM_STCX_LSU",
+    "BriefDescription": "STCX executed reported at sent to nest,",
+    "PublicDescription": "STCX executed reported at sent to nest42,"
+  },
+  {
+    "EventCode": "0x20016",
+    "EventName": "PM_ST_CMPL",
+    "BriefDescription": "Store completion count,",
+    "PublicDescription": "Store completion count.,"
+  },
+  {
+    "EventCode": "0x200f0",
+    "EventName": "PM_ST_FIN",
+    "BriefDescription": "Store Instructions Finished,",
+    "PublicDescription": "Store Instructions Finished (store sent to nest).,"
+  },
+  {
+    "EventCode": "0x20018",
+    "EventName": "PM_ST_FWD",
+    "BriefDescription": "Store forwards that finished,",
+    "PublicDescription": "Store forwards that finished.,"
+  },
+  {
+    "EventCode": "0x300f0",
+    "EventName": "PM_ST_MISS_L1",
+    "BriefDescription": "Store Missed L1,",
+    "PublicDescription": "Store Missed L1.,"
+  },
+  {
+    "EventCode": "0x0",
+    "EventName": "PM_SUSPENDED",
+    "BriefDescription": "Counter OFF,",
+    "PublicDescription": "Counter OFF.,"
+  },
+  {
+    "EventCode": "0x3090",
+    "EventName": "PM_SWAP_CANCEL",
+    "BriefDescription": "SWAP cancel , rtag not available,",
+    "PublicDescription": "SWAP cancel , rtag not available,"
+  },
+  {
+    "EventCode": "0x3092",
+    "EventName": "PM_SWAP_CANCEL_GPR",
+    "BriefDescription": "SWAP cancel , rtag not available for gpr,",
+    "PublicDescription": "SWAP cancel , rtag not available for gpr,"
+  },
+  {
+    "EventCode": "0x308c",
+    "EventName": "PM_SWAP_COMPLETE",
+    "BriefDescription": "swap cast in completed,",
+    "PublicDescription": "swap cast in completed,"
+  },
+  {
+    "EventCode": "0x308e",
+    "EventName": "PM_SWAP_COMPLETE_GPR",
+    "BriefDescription": "swap cast in completed fpr gpr,",
+    "PublicDescription": "swap cast in completed fpr gpr,"
+  },
+  {
+    "EventCode": "0x15152",
+    "EventName": "PM_SYNC_MRK_BR_LINK",
+    "BriefDescription": "Marked Branch and link branch that can cause a synchronous interrupt,",
+    "PublicDescription": "Marked Branch and link branch that can cause a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x1515c",
+    "EventName": "PM_SYNC_MRK_BR_MPRED",
+    "BriefDescription": "Marked Branch mispredict that can cause a synchronous interrupt,",
+    "PublicDescription": "Marked Branch mispredict that can cause a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x15156",
+    "EventName": "PM_SYNC_MRK_FX_DIVIDE",
+    "BriefDescription": "Marked fixed point divide that can cause a synchronous interrupt,",
+    "PublicDescription": "Marked fixed point divide that can cause a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x15158",
+    "EventName": "PM_SYNC_MRK_L2HIT",
+    "BriefDescription": "Marked L2 Hits that can throw a synchronous interrupt,",
+    "PublicDescription": "Marked L2 Hits that can throw a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x1515a",
+    "EventName": "PM_SYNC_MRK_L2MISS",
+    "BriefDescription": "Marked L2 Miss that can throw a synchronous interrupt,",
+    "PublicDescription": "Marked L2 Miss that can throw a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x15154",
+    "EventName": "PM_SYNC_MRK_L3MISS",
+    "BriefDescription": "Marked L3 misses that can throw a synchronous interrupt,",
+    "PublicDescription": "Marked L3 misses that can throw a synchronous interrupt.,"
+  },
+  {
+    "EventCode": "0x15150",
+    "EventName": "PM_SYNC_MRK_PROBE_NOP",
+    "BriefDescription": "Marked probeNops which can cause synchronous interrupts,",
+    "PublicDescription": "Marked probeNops which can cause synchronous interrupts.,"
+  },
+  {
+    "EventCode": "0x30050",
+    "EventName": "PM_SYS_PUMP_CPRED",
+    "BriefDescription": "Initial and Final Pump Scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Initial and Final Pump Scope and data sourced across this scope was system pump for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x30052",
+    "EventName": "PM_SYS_PUMP_MPRED",
+    "BriefDescription": "Final Pump Scope (system) mispredicted. Either the original scope was too small (Chip/Group) or the original scope was System and it should have been smaller. Counts for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope(Chip/Group) OR Final Pump Scope(system) got data from source that was at smaller scope(Chip/group) Final pump was system pump and initial pump was chip or group or,"
+  },
+  {
+    "EventCode": "0x40050",
+    "EventName": "PM_SYS_PUMP_MPRED_RTY",
+    "BriefDescription": "Final Pump Scope (system) ended up larger than Initial Pump Scope (Chip/Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate),",
+    "PublicDescription": "Final Pump Scope(system) to get data sourced, ended up larger than Initial Pump Scope (Chip or Group) for all data types excluding data prefetch (demand load,inst prefetch,inst fetch,xlate).,"
+  },
+  {
+    "EventCode": "0x10026",
+    "EventName": "PM_TABLEWALK_CYC",
+    "BriefDescription": "Cycles when a tablewalk (I or D) is active,",
+    "PublicDescription": "Tablewalk Active.,"
+  },
+  {
+    "EventCode": "0xe086",
+    "EventName": "PM_TABLEWALK_CYC_PREF",
+    "BriefDescription": "tablewalk qualified for pte prefetches,",
+    "PublicDescription": "tablewalk qualified for pte prefetches42,"
+  },
+  {
+    "EventCode": "0x20b2",
+    "EventName": "PM_TABORT_TRECLAIM",
+    "BriefDescription": "Completion time tabortnoncd, tabortcd, treclaim,",
+    "PublicDescription": "Completion time tabortnoncd, tabortcd, treclaim,"
+  },
+  {
+    "EventCode": "0x300f8",
+    "EventName": "PM_TB_BIT_TRANS",
+    "BriefDescription": "timebase event,",
+    "PublicDescription": "timebase event.,"
+  },
+  {
+    "EventCode": "0xe0ba",
+    "EventName": "PM_TEND_PEND_CYC",
+    "BriefDescription": "TEND latency per thread,",
+    "PublicDescription": "TEND latency per thread42,"
+  },
+  {
+    "EventCode": "0x2000c",
+    "EventName": "PM_THRD_ALL_RUN_CYC",
+    "BriefDescription": "All Threads in Run_cycles (was both threads in run_cycles),",
+    "PublicDescription": "All Threads in Run_cycles (was both threads in run_cycles).,"
+  },
+  {
+    "EventCode": "0x300f4",
+    "EventName": "PM_THRD_CONC_RUN_INST",
+    "BriefDescription": "PPC Instructions Finished when both threads in run_cycles,",
+    "PublicDescription": "Concurrent Run Instructions.,"
+  },
+  {
+    "EventCode": "0x10012",
+    "EventName": "PM_THRD_GRP_CMPL_BOTH_CYC",
+    "BriefDescription": "Cycles group completed on both completion slots by any thread,",
+    "PublicDescription": "Two threads finished same cycle (gated by run latch).,"
+  },
+  {
+    "EventCode": "0x40bc",
+    "EventName": "PM_THRD_PRIO_0_1_CYC",
+    "BriefDescription": "Cycles thread running at priority level 0 or 1,",
+    "PublicDescription": "Cycles thread running at priority level 0 or 1,"
+  },
+  {
+    "EventCode": "0x40be",
+    "EventName": "PM_THRD_PRIO_2_3_CYC",
+    "BriefDescription": "Cycles thread running at priority level 2 or 3,",
+    "PublicDescription": "Cycles thread running at priority level 2 or 3,"
+  },
+  {
+    "EventCode": "0x5080",
+    "EventName": "PM_THRD_PRIO_4_5_CYC",
+    "BriefDescription": "Cycles thread running at priority level 4 or 5,",
+    "PublicDescription": "Cycles thread running at priority level 4 or 5,"
+  },
+  {
+    "EventCode": "0x5082",
+    "EventName": "PM_THRD_PRIO_6_7_CYC",
+    "BriefDescription": "Cycles thread running at priority level 6 or 7,",
+    "PublicDescription": "Cycles thread running at priority level 6 or 7,"
+  },
+  {
+    "EventCode": "0x3098",
+    "EventName": "PM_THRD_REBAL_CYC",
+    "BriefDescription": "cycles rebalance was active,",
+    "PublicDescription": "cycles rebalance was active,"
+  },
+  {
+    "EventCode": "0x301ea",
+    "EventName": "PM_THRESH_EXC_1024",
+    "BriefDescription": "Threshold counter exceeded a value of 1024,",
+    "PublicDescription": "Threshold counter exceeded a value of 1024.,"
+  },
+  {
+    "EventCode": "0x401ea",
+    "EventName": "PM_THRESH_EXC_128",
+    "BriefDescription": "Threshold counter exceeded a value of 128,",
+    "PublicDescription": "Threshold counter exceeded a value of 128.,"
+  },
+  {
+    "EventCode": "0x401ec",
+    "EventName": "PM_THRESH_EXC_2048",
+    "BriefDescription": "Threshold counter exceeded a value of 2048,",
+    "PublicDescription": "Threshold counter exceeded a value of 2048.,"
+  },
+  {
+    "EventCode": "0x101e8",
+    "EventName": "PM_THRESH_EXC_256",
+    "BriefDescription": "Threshold counter exceed a count of 256,",
+    "PublicDescription": "Threshold counter exceed a count of 256.,"
+  },
+  {
+    "EventCode": "0x201e6",
+    "EventName": "PM_THRESH_EXC_32",
+    "BriefDescription": "Threshold counter exceeded a value of 32,",
+    "PublicDescription": "Threshold counter exceeded a value of 32.,"
+  },
+  {
+    "EventCode": "0x101e6",
+    "EventName": "PM_THRESH_EXC_4096",
+    "BriefDescription": "Threshold counter exceed a count of 4096,",
+    "PublicDescription": "Threshold counter exceed a count of 4096.,"
+  },
+  {
+    "EventCode": "0x201e8",
+    "EventName": "PM_THRESH_EXC_512",
+    "BriefDescription": "Threshold counter exceeded a value of 512,",
+    "PublicDescription": "Threshold counter exceeded a value of 512.,"
+  },
+  {
+    "EventCode": "0x301e8",
+    "EventName": "PM_THRESH_EXC_64",
+    "BriefDescription": "IFU non-branch finished,",
+    "PublicDescription": "Threshold counter exceeded a value of 64.,"
+  },
+  {
+    "EventCode": "0x101ec",
+    "EventName": "PM_THRESH_MET",
+    "BriefDescription": "threshold exceeded,",
+    "PublicDescription": "threshold exceeded.,"
+  },
+  {
+    "EventCode": "0x4016e",
+    "EventName": "PM_THRESH_NOT_MET",
+    "BriefDescription": "Threshold counter did not meet threshold,",
+    "PublicDescription": "Threshold counter did not meet threshold.,"
+  },
+  {
+    "EventCode": "0x30058",
+    "EventName": "PM_TLBIE_FIN",
+    "BriefDescription": "tlbie finished,",
+    "PublicDescription": "tlbie finished.,"
+  },
+  {
+    "EventCode": "0x20066",
+    "EventName": "PM_TLB_MISS",
+    "BriefDescription": "TLB Miss (I + D),",
+    "PublicDescription": "TLB Miss (I + D).,"
+  },
+  {
+    "EventCode": "0x20b8",
+    "EventName": "PM_TM_BEGIN_ALL",
+    "BriefDescription": "Tm any tbegin,",
+    "PublicDescription": "Tm any tbegin,"
+  },
+  {
+    "EventCode": "0x20ba",
+    "EventName": "PM_TM_END_ALL",
+    "BriefDescription": "Tm any tend,",
+    "PublicDescription": "Tm any tend,"
+  },
+  {
+    "EventCode": "0x3086",
+    "EventName": "PM_TM_FAIL_CONF_NON_TM",
+    "BriefDescription": "TEXAS fail reason @ completion,",
+    "PublicDescription": "TEXAS fail reason @ completion,"
+  },
+  {
+    "EventCode": "0x3088",
+    "EventName": "PM_TM_FAIL_CON_TM",
+    "BriefDescription": "TEXAS fail reason @ completion,",
+    "PublicDescription": "TEXAS fail reason @ completion,"
+  },
+  {
+    "EventCode": "0xe0b2",
+    "EventName": "PM_TM_FAIL_DISALLOW",
+    "BriefDescription": "TM fail disallow,",
+    "PublicDescription": "TM fail disallow42,"
+  },
+  {
+    "EventCode": "0x3084",
+    "EventName": "PM_TM_FAIL_FOOTPRINT_OVERFLOW",
+    "BriefDescription": "TEXAS fail reason @ completion,",
+    "PublicDescription": "TEXAS fail reason @ completion,"
+  },
+  {
+    "EventCode": "0xe0b8",
+    "EventName": "PM_TM_FAIL_NON_TX_CONFLICT",
+    "BriefDescription": "Non transactional conflict from LSU whtver gets repoted to texas,",
+    "PublicDescription": "Non transactional conflict from LSU whtver gets repoted to texas42,"
+  },
+  {
+    "EventCode": "0x308a",
+    "EventName": "PM_TM_FAIL_SELF",
+    "BriefDescription": "TEXAS fail reason @ completion,",
+    "PublicDescription": "TEXAS fail reason @ completion,"
+  },
+  {
+    "EventCode": "0xe0b4",
+    "EventName": "PM_TM_FAIL_TLBIE",
+    "BriefDescription": "TLBIE hit bloom filter,",
+    "PublicDescription": "TLBIE hit bloom filter42,"
+  },
+  {
+    "EventCode": "0xe0b6",
+    "EventName": "PM_TM_FAIL_TX_CONFLICT",
+    "BriefDescription": "Transactional conflict from LSU, whatever gets reported to texas,",
+    "PublicDescription": "Transactional conflict from LSU, whatever gets reported to texas 42,"
+  },
+  {
+    "EventCode": "0x20bc",
+    "EventName": "PM_TM_TBEGIN",
+    "BriefDescription": "Tm nested tbegin,",
+    "PublicDescription": "Tm nested tbegin,"
+  },
+  {
+    "EventCode": "0x10060",
+    "EventName": "PM_TM_TRANS_RUN_CYC",
+    "BriefDescription": "run cycles in transactional state,",
+    "PublicDescription": "run cycles in transactional state.,"
+  },
+  {
+    "EventCode": "0x30060",
+    "EventName": "PM_TM_TRANS_RUN_INST",
+    "BriefDescription": "Instructions completed in transactional state,",
+    "PublicDescription": "Instructions completed in transactional state.,"
+  },
+  {
+    "EventCode": "0x3080",
+    "EventName": "PM_TM_TRESUME",
+    "BriefDescription": "Tm resume,",
+    "PublicDescription": "Tm resume,"
+  },
+  {
+    "EventCode": "0x20be",
+    "EventName": "PM_TM_TSUSPEND",
+    "BriefDescription": "Tm suspend,",
+    "PublicDescription": "Tm suspend,"
+  },
+  {
+    "EventCode": "0x2e012",
+    "EventName": "PM_TM_TX_PASS_RUN_CYC",
+    "BriefDescription": "cycles spent in successful transactions,",
+    "PublicDescription": "run cycles spent in successful transactions.,"
+  },
+  {
+    "EventCode": "0x4e014",
+    "EventName": "PM_TM_TX_PASS_RUN_INST",
+    "BriefDescription": "run instructions spent in successful transactions.,",
+    "PublicDescription": "run instructions spent in successful transactions.,"
+  },
+  {
+    "EventCode": "0xe08c",
+    "EventName": "PM_UP_PREF_L3",
+    "BriefDescription": "Micropartition prefetch,",
+    "PublicDescription": "Micropartition prefetch42,"
+  },
+  {
+    "EventCode": "0xe08e",
+    "EventName": "PM_UP_PREF_POINTER",
+    "BriefDescription": "Micrpartition pointer prefetches,",
+    "PublicDescription": "Micrpartition pointer prefetches42,"
+  },
+  {
+    "EventCode": "0xa0a4",
+    "EventName": "PM_VSU0_16FLOP",
+    "BriefDescription": "Sixteen flops operation (SP vector versions of fdiv,fsqrt),",
+    "PublicDescription": "Sixteen flops operation (SP vector versions of fdiv,fsqrt),"
+  },
+  {
+    "EventCode": "0xa080",
+    "EventName": "PM_VSU0_1FLOP",
+    "BriefDescription": "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished,",
+    "PublicDescription": "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finishedDecode into 1,2,4 FLOP according to instr IOP, multiplied by #vector elements according to route( eg x1, x2, x4) Only if instr sends finish to ISU,"
+  },
+  {
+    "EventCode": "0xa098",
+    "EventName": "PM_VSU0_2FLOP",
+    "BriefDescription": "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions),",
+    "PublicDescription": "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions),"
+  },
+  {
+    "EventCode": "0xa09c",
+    "EventName": "PM_VSU0_4FLOP",
+    "BriefDescription": "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions),",
+    "PublicDescription": "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions),"
+  },
+  {
+    "EventCode": "0xa0a0",
+    "EventName": "PM_VSU0_8FLOP",
+    "BriefDescription": "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub),",
+    "PublicDescription": "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub),"
+  },
+  {
+    "EventCode": "0xb0a4",
+    "EventName": "PM_VSU0_COMPLEX_ISSUED",
+    "BriefDescription": "Complex VMX instruction issued,",
+    "PublicDescription": "Complex VMX instruction issued,"
+  },
+  {
+    "EventCode": "0xb0b4",
+    "EventName": "PM_VSU0_CY_ISSUED",
+    "BriefDescription": "Cryptographic instruction RFC02196 Issued,",
+    "PublicDescription": "Cryptographic instruction RFC02196 Issued,"
+  },
+  {
+    "EventCode": "0xb0a8",
+    "EventName": "PM_VSU0_DD_ISSUED",
+    "BriefDescription": "64BIT Decimal Issued,",
+    "PublicDescription": "64BIT Decimal Issued,"
+  },
+  {
+    "EventCode": "0xa08c",
+    "EventName": "PM_VSU0_DP_2FLOP",
+    "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg,",
+    "PublicDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg,"
+  },
+  {
+    "EventCode": "0xa090",
+    "EventName": "PM_VSU0_DP_FMA",
+    "BriefDescription": "DP vector version of fmadd,fnmadd,fmsub,fnmsub,",
+    "PublicDescription": "DP vector version of fmadd,fnmadd,fmsub,fnmsub,"
+  },
+  {
+    "EventCode": "0xa094",
+    "EventName": "PM_VSU0_DP_FSQRT_FDIV",
+    "BriefDescription": "DP vector versions of fdiv,fsqrt,",
+    "PublicDescription": "DP vector versions of fdiv,fsqrt,"
+  },
+  {
+    "EventCode": "0xb0ac",
+    "EventName": "PM_VSU0_DQ_ISSUED",
+    "BriefDescription": "128BIT Decimal Issued,",
+    "PublicDescription": "128BIT Decimal Issued,"
+  },
+  {
+    "EventCode": "0xb0b0",
+    "EventName": "PM_VSU0_EX_ISSUED",
+    "BriefDescription": "Direct move 32/64b VRFtoGPR RFC02206 Issued,",
+    "PublicDescription": "Direct move 32/64b VRFtoGPR RFC02206 Issued,"
+  },
+  {
+    "EventCode": "0xa0bc",
+    "EventName": "PM_VSU0_FIN",
+    "BriefDescription": "VSU0 Finished an instruction,",
+    "PublicDescription": "VSU0 Finished an instruction,"
+  },
+  {
+    "EventCode": "0xa084",
+    "EventName": "PM_VSU0_FMA",
+    "BriefDescription": "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!,",
+    "PublicDescription": "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!,"
+  },
+  {
+    "EventCode": "0xb098",
+    "EventName": "PM_VSU0_FPSCR",
+    "BriefDescription": "Move to/from FPSCR type instruction issued on Pipe 0,",
+    "PublicDescription": "Move to/from FPSCR type instruction issued on Pipe 0,"
+  },
+  {
+    "EventCode": "0xa088",
+    "EventName": "PM_VSU0_FSQRT_FDIV",
+    "BriefDescription": "four flops operation (fdiv,fsqrt) Scalar Instructions only!,",
+    "PublicDescription": "four flops operation (fdiv,fsqrt) Scalar Instructions only!,"
+  },
+  {
+    "EventCode": "0xb090",
+    "EventName": "PM_VSU0_PERMUTE_ISSUED",
+    "BriefDescription": "Permute VMX Instruction Issued,",
+    "PublicDescription": "Permute VMX Instruction Issued,"
+  },
+  {
+    "EventCode": "0xb088",
+    "EventName": "PM_VSU0_SCALAR_DP_ISSUED",
+    "BriefDescription": "Double Precision scalar instruction issued on Pipe0,",
+    "PublicDescription": "Double Precision scalar instruction issued on Pipe0,"
+  },
+  {
+    "EventCode": "0xb094",
+    "EventName": "PM_VSU0_SIMPLE_ISSUED",
+    "BriefDescription": "Simple VMX instruction issued,",
+    "PublicDescription": "Simple VMX instruction issued,"
+  },
+  {
+    "EventCode": "0xa0a8",
+    "EventName": "PM_VSU0_SINGLE",
+    "BriefDescription": "FPU single precision,",
+    "PublicDescription": "FPU single precision,"
+  },
+  {
+    "EventCode": "0xb09c",
+    "EventName": "PM_VSU0_SQ",
+    "BriefDescription": "Store Vector Issued,",
+    "PublicDescription": "Store Vector Issued,"
+  },
+  {
+    "EventCode": "0xb08c",
+    "EventName": "PM_VSU0_STF",
+    "BriefDescription": "FPU store (SP or DP) issued on Pipe0,",
+    "PublicDescription": "FPU store (SP or DP) issued on Pipe0,"
+  },
+  {
+    "EventCode": "0xb080",
+    "EventName": "PM_VSU0_VECTOR_DP_ISSUED",
+    "BriefDescription": "Double Precision vector instruction issued on Pipe0,",
+    "PublicDescription": "Double Precision vector instruction issued on Pipe0,"
+  },
+  {
+    "EventCode": "0xb084",
+    "EventName": "PM_VSU0_VECTOR_SP_ISSUED",
+    "BriefDescription": "Single Precision vector instruction issued (executed),",
+    "PublicDescription": "Single Precision vector instruction issued (executed),"
+  },
+  {
+    "EventCode": "0xa0a6",
+    "EventName": "PM_VSU1_16FLOP",
+    "BriefDescription": "Sixteen flops operation (SP vector versions of fdiv,fsqrt),",
+    "PublicDescription": "Sixteen flops operation (SP vector versions of fdiv,fsqrt),"
+  },
+  {
+    "EventCode": "0xa082",
+    "EventName": "PM_VSU1_1FLOP",
+    "BriefDescription": "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished,",
+    "PublicDescription": "one flop (fadd, fmul, fsub, fcmp, fsel, fabs, fnabs, fres, fsqrte, fneg) operation finished,"
+  },
+  {
+    "EventCode": "0xa09a",
+    "EventName": "PM_VSU1_2FLOP",
+    "BriefDescription": "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions),",
+    "PublicDescription": "two flops operation (scalar fmadd, fnmadd, fmsub, fnmsub and DP vector versions of single flop instructions),"
+  },
+  {
+    "EventCode": "0xa09e",
+    "EventName": "PM_VSU1_4FLOP",
+    "BriefDescription": "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions),",
+    "PublicDescription": "four flops operation (scalar fdiv, fsqrt, DP vector version of fmadd, fnmadd, fmsub, fnmsub, SP vector versions of single flop instructions),"
+  },
+  {
+    "EventCode": "0xa0a2",
+    "EventName": "PM_VSU1_8FLOP",
+    "BriefDescription": "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub),",
+    "PublicDescription": "eight flops operation (DP vector versions of fdiv,fsqrt and SP vector versions of fmadd,fnmadd,fmsub,fnmsub),"
+  },
+  {
+    "EventCode": "0xb0a6",
+    "EventName": "PM_VSU1_COMPLEX_ISSUED",
+    "BriefDescription": "Complex VMX instruction issued,",
+    "PublicDescription": "Complex VMX instruction issued,"
+  },
+  {
+    "EventCode": "0xb0b6",
+    "EventName": "PM_VSU1_CY_ISSUED",
+    "BriefDescription": "Cryptographic instruction RFC02196 Issued,",
+    "PublicDescription": "Cryptographic instruction RFC02196 Issued,"
+  },
+  {
+    "EventCode": "0xb0aa",
+    "EventName": "PM_VSU1_DD_ISSUED",
+    "BriefDescription": "64BIT Decimal Issued,",
+    "PublicDescription": "64BIT Decimal Issued,"
+  },
+  {
+    "EventCode": "0xa08e",
+    "EventName": "PM_VSU1_DP_2FLOP",
+    "BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg,",
+    "PublicDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg,"
+  },
+  {
+    "EventCode": "0xa092",
+    "EventName": "PM_VSU1_DP_FMA",
+    "BriefDescription": "DP vector version of fmadd,fnmadd,fmsub,fnmsub,",
+    "PublicDescription": "DP vector version of fmadd,fnmadd,fmsub,fnmsub,"
+  },
+  {
+    "EventCode": "0xa096",
+    "EventName": "PM_VSU1_DP_FSQRT_FDIV",
+    "BriefDescription": "DP vector versions of fdiv,fsqrt,",
+    "PublicDescription": "DP vector versions of fdiv,fsqrt,"
+  },
+  {
+    "EventCode": "0xb0ae",
+    "EventName": "PM_VSU1_DQ_ISSUED",
+    "BriefDescription": "128BIT Decimal Issued,",
+    "PublicDescription": "128BIT Decimal Issued,"
+  },
+  {
+    "EventCode": "0xb0b2",
+    "EventName": "PM_VSU1_EX_ISSUED",
+    "BriefDescription": "Direct move 32/64b VRFtoGPR RFC02206 Issued,",
+    "PublicDescription": "Direct move 32/64b VRFtoGPR RFC02206 Issued,"
+  },
+  {
+    "EventCode": "0xa0be",
+    "EventName": "PM_VSU1_FIN",
+    "BriefDescription": "VSU1 Finished an instruction,",
+    "PublicDescription": "VSU1 Finished an instruction,"
+  },
+  {
+    "EventCode": "0xa086",
+    "EventName": "PM_VSU1_FMA",
+    "BriefDescription": "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!,",
+    "PublicDescription": "two flops operation (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only!,"
+  },
+  {
+    "EventCode": "0xb09a",
+    "EventName": "PM_VSU1_FPSCR",
+    "BriefDescription": "Move to/from FPSCR type instruction issued on Pipe 0,",
+    "PublicDescription": "Move to/from FPSCR type instruction issued on Pipe 0,"
+  },
+  {
+    "EventCode": "0xa08a",
+    "EventName": "PM_VSU1_FSQRT_FDIV",
+    "BriefDescription": "four flops operation (fdiv,fsqrt) Scalar Instructions only!,",
+    "PublicDescription": "four flops operation (fdiv,fsqrt) Scalar Instructions only!,"
+  },
+  {
+    "EventCode": "0xb092",
+    "EventName": "PM_VSU1_PERMUTE_ISSUED",
+    "BriefDescription": "Permute VMX Instruction Issued,",
+    "PublicDescription": "Permute VMX Instruction Issued,"
+  },
+  {
+    "EventCode": "0xb08a",
+    "EventName": "PM_VSU1_SCALAR_DP_ISSUED",
+    "BriefDescription": "Double Precision scalar instruction issued on Pipe1,",
+    "PublicDescription": "Double Precision scalar instruction issued on Pipe1,"
+  },
+  {
+    "EventCode": "0xb096",
+    "EventName": "PM_VSU1_SIMPLE_ISSUED",
+    "BriefDescription": "Simple VMX instruction issued,",
+    "PublicDescription": "Simple VMX instruction issued,"
+  },
+  {
+    "EventCode": "0xa0aa",
+    "EventName": "PM_VSU1_SINGLE",
+    "BriefDescription": "FPU single precision,",
+    "PublicDescription": "FPU single precision,"
+  },
+  {
+    "EventCode": "0xb09e",
+    "EventName": "PM_VSU1_SQ",
+    "BriefDescription": "Store Vector Issued,",
+    "PublicDescription": "Store Vector Issued,"
+  },
+  {
+    "EventCode": "0xb08e",
+    "EventName": "PM_VSU1_STF",
+    "BriefDescription": "FPU store (SP or DP) issued on Pipe1,",
+    "PublicDescription": "FPU store (SP or DP) issued on Pipe1,"
+  },
+  {
+    "EventCode": "0xb082",
+    "EventName": "PM_VSU1_VECTOR_DP_ISSUED",
+    "BriefDescription": "Double Precision vector instruction issued on Pipe1,",
+    "PublicDescription": "Double Precision vector instruction issued on Pipe1,"
+  },
+  {
+    "EventCode": "0xb086",
+    "EventName": "PM_VSU1_VECTOR_SP_ISSUED",
+    "BriefDescription": "Single Precision vector instruction issued (executed),",
+    "PublicDescription": "Single Precision vector instruction issued (executed),"
+  }
+]
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/4] perf: Use pmu_events_map table to create event aliases
  2015-05-20  0:02 ` [PATCH 3/4] perf: Use pmu_events_map table to create event aliases Sukadev Bhattiprolu
@ 2015-05-20 23:58   ` Andi Kleen
  2015-05-21  0:19     ` Sukadev Bhattiprolu
  0 siblings, 1 reply; 32+ messages in thread
From: Andi Kleen @ 2015-05-20 23:58 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: mingo, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras, namhyung, linuxppc-dev, linux-kernel

> +/*
> + * Return TRUE if the CPU identified by @vfm, @version, and @type
> + * matches the current CPU.  vfm refers to [Vendor, Family, Model],
> + *
> + * Return FALSE otherwise.
> + *
> + * For Powerpc, we only compare @version to the processor PVR.
> + */
> +bool arch_pmu_events_match_cpu(const char *vfm __maybe_unused,
> +				const char *version,
> +				const char *type __maybe_unused)
> +{
> +	char *cpustr;
> +	bool rc;
> +
> +	cpustr = get_cpu_str();
> +	rc = !strcmp(version, cpustr);


Surely against vfm not version
I think your mapfile is wrong if that works?

That's the Intel format:

.vfm = "GenuineIntel-6-3E",
        .version = "V16",
        .type = "core",
        .table = pme_IvyTown_core


-Andi


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/4] perf: Use pmu_events_map table to create event aliases
  2015-05-20 23:58   ` Andi Kleen
@ 2015-05-21  0:19     ` Sukadev Bhattiprolu
  2015-05-21  2:56       ` Andi Kleen
  0 siblings, 1 reply; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-21  0:19 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras, namhyung, linuxppc-dev, linux-kernel

Andi Kleen [ak@linux.intel.com] wrote:
| > +/*
| > + * Return TRUE if the CPU identified by @vfm, @version, and @type
| > + * matches the current CPU.  vfm refers to [Vendor, Family, Model],
| > + *
| > + * Return FALSE otherwise.
| > + *
| > + * For Powerpc, we only compare @version to the processor PVR.
| > + */
| > +bool arch_pmu_events_match_cpu(const char *vfm __maybe_unused,
| > +				const char *version,
| > +				const char *type __maybe_unused)
| > +{
| > +	char *cpustr;
| > +	bool rc;
| > +
| > +	cpustr = get_cpu_str();
| > +	rc = !strcmp(version, cpustr);
| 
| 
| Surely against vfm not version
| I think your mapfile is wrong if that works?

Like I say in the comment, and elsewhere, each architecture
could use a subset of [vfm, version, type] to match the CPU.

On Power, we use the PVR, which is a string like "004d0100",
to uniquely identify the CPU.

Obviously, that does not fit into the VFM field. We could either
add a new PVR field to the mapfile:

	[vfm, version, type, pvr]

or, as the patch currently does, let architectures interpret the
"version" field as they see fit?

IOW, leave it to architectures to keep arch_pmu_events_match_cpu()
consistent with _their_ mapfile?

| 
| That's the Intel format:
| 
| .vfm = "GenuineIntel-6-3E",
|         .version = "V16",
|         .type = "core",
|         .table = pme_IvyTown_core
| 
| 
| -Andi


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/4] perf: Use pmu_events_map table to create event aliases
  2015-05-21  0:19     ` Sukadev Bhattiprolu
@ 2015-05-21  2:56       ` Andi Kleen
  2015-05-21  5:02         ` Sukadev Bhattiprolu
  0 siblings, 1 reply; 32+ messages in thread
From: Andi Kleen @ 2015-05-21  2:56 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: mingo, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras, namhyung, linuxppc-dev, linux-kernel

> Obviously, that does not fit into the VFM field. We could either
> add a new PVR field to the mapfile:
> 
> 	[vfm, version, type, pvr]
> 
> or, as the patch currently does, let architectures interpret the
> "version" field as they see fit?
> 
> IOW, leave it to architectures to keep arch_pmu_events_match_cpu()
> consistent with _their_ mapfile?

version is the version number of the event file. If you use it to
identify the CPU, you can no longer indicate the file's version when
something changes.

If you need something else in vfm to identify the CPU,
can't you just add it there? I wouldn't really call it vfm; it's
really an "abstract cpu identifier per architecture". So if you
need pvr, just add it there.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/4] perf: Use pmu_events_map table to create event aliases
  2015-05-21  2:56       ` Andi Kleen
@ 2015-05-21  5:02         ` Sukadev Bhattiprolu
  2015-05-21 18:50           ` Andi Kleen
  0 siblings, 1 reply; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-21  5:02 UTC (permalink / raw)
  To: Andi Kleen
  Cc: mingo, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras, namhyung, linuxppc-dev, linux-kernel

Andi Kleen [ak@linux.intel.com] wrote:
| If you need something else in vfm to identify the CPU,
| can't you just add it there? I wouldn't really call it vfm; it's
| really an "abstract cpu identifier per architecture". So if you
| need pvr, just add it there.

Ok. I will change vfm to cpuid_str and include pvr in it.

Sukadev


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/4] perf: Use pmu_events_map table to create event aliases
  2015-05-21  5:02         ` Sukadev Bhattiprolu
@ 2015-05-21 18:50           ` Andi Kleen
  0 siblings, 0 replies; 32+ messages in thread
From: Andi Kleen @ 2015-05-21 18:50 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: mingo, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras, namhyung, linuxppc-dev, linux-kernel

On Wed, May 20, 2015 at 10:02:04PM -0700, Sukadev Bhattiprolu wrote:
> Andi Kleen [ak@linux.intel.com] wrote:
> | If you need something else in vfm to identify the CPU,
> | can't you just add it there? I wouldn't really call it vfm; it's
> | really an "abstract cpu identifier per architecture". So if you
> | need pvr, just add it there.
> 
> Ok. I will change vfm to cpuid_str and include pvr in it.

Thanks.

With that change it would also be cleaner to have the architecture code
provide a get_cpuid_str() function, and then just strcmp() in the matching
code, instead of having architecture-specific compare code.
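
A minimal sketch of that shape; the struct layout, the cpuid field name and
the get_cpuid_str() signature here are assumptions drawn from this thread,
not the merged code:

/*
 * Sketch only: arch-provided get_cpuid_str() plus a plain strcmp()
 * in generic code.  The "cpuid" field follows the rename discussed
 * above (vfm -> cpuid_str); the sentinel convention is assumed.
 */
#include <stdlib.h>
#include <string.h>

struct pmu_event;			/* entries generated by jevents */

struct pmu_events_map {
	const char *cpuid;		/* e.g. "004d0100" or "GenuineIntel-6-3E" */
	const char *version;
	const char *type;
	const struct pmu_event *table;
};

/* Generated by jevents; assumed to end with an all-NULL sentinel entry. */
extern const struct pmu_events_map pmu_events_map[];

/* Each architecture implements this; powerpc would format the PVR here. */
extern char *get_cpuid_str(void);

/* Generic matching: no architecture-specific compare code needed. */
static const struct pmu_event *find_events_table(void)
{
	const struct pmu_event *table = NULL;
	char *cpuid = get_cpuid_str();
	int i;

	if (!cpuid)
		return NULL;

	for (i = 0; pmu_events_map[i].cpuid; i++) {
		if (!strcmp(pmu_events_map[i].cpuid, cpuid)) {
			table = pmu_events_map[i].table;
			break;
		}
	}
	free(cpuid);
	return table;
}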

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-20  0:02 ` [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file Sukadev Bhattiprolu
@ 2015-05-22 14:56   ` Jiri Olsa
  2015-05-22 15:58     ` Sukadev Bhattiprolu
  2015-05-22 14:56   ` Jiri Olsa
  2015-05-27 13:54   ` Namhyung Kim
  2 siblings, 1 reply; 32+ messages in thread
From: Jiri Olsa @ 2015-05-22 14:56 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: mingo, ak, Michael Ellerman, Arnaldo Carvalho de Melo,
	Paul Mackerras, namhyung, linuxppc-dev, linux-kernel

On Tue, May 19, 2015 at 05:02:08PM -0700, Sukadev Bhattiprolu wrote:

SNIP

> ---
>  tools/perf/Build                   |    1 +
>  tools/perf/Makefile.perf           |    4 +-
>  tools/perf/pmu-events/Build        |   38 ++
>  tools/perf/pmu-events/README       |   67 ++++
>  tools/perf/pmu-events/jevents.c    |  700 ++++++++++++++++++++++++++++++++++++
>  tools/perf/pmu-events/jevents.h    |   17 +
>  tools/perf/pmu-events/pmu-events.h |   39 ++
>  7 files changed, 865 insertions(+), 1 deletion(-)
>  create mode 100644 tools/perf/pmu-events/Build
>  create mode 100644 tools/perf/pmu-events/README
>  create mode 100644 tools/perf/pmu-events/jevents.c
>  create mode 100644 tools/perf/pmu-events/jevents.h
>  create mode 100644 tools/perf/pmu-events/pmu-events.h
> 
> diff --git a/tools/perf/Build b/tools/perf/Build
> index b77370e..40bffa0 100644
> --- a/tools/perf/Build
> +++ b/tools/perf/Build
> @@ -36,6 +36,7 @@ CFLAGS_builtin-help.o      += $(paths)
>  CFLAGS_builtin-timechart.o += $(paths)
>  CFLAGS_perf.o              += -DPERF_HTML_PATH="BUILD_STR($(htmldir_SQ))" -include $(OUTPUT)PERF-VERSION-FILE
>  
> +libperf-y += pmu-events/

there's no concept (yet) in the new build system to trigger
another binary build as a dependency for an object file.. I'd
rather do this the framework way, please check the attached patch

also currently the pmu-events.c is generated every time,
so we need to add the event json data files as a dependency

jirka


---
diff --git a/tools/build/Makefile.build b/tools/build/Makefile.build
index 10df57237a66..f6e7fd868892 100644
--- a/tools/build/Makefile.build
+++ b/tools/build/Makefile.build
@@ -41,6 +41,7 @@ include $(build-file)
 
 quiet_cmd_flex  = FLEX     $@
 quiet_cmd_bison = BISON    $@
+quiet_cmd_gen   = GEN      $@
 
 # Create directory unless it exists
 quiet_cmd_mkdir = MKDIR    $(dir $@)
diff --git a/tools/perf/Build b/tools/perf/Build
index 40bffa0b6ee1..b77370ef7005 100644
--- a/tools/perf/Build
+++ b/tools/perf/Build
@@ -36,7 +36,6 @@ CFLAGS_builtin-help.o      += $(paths)
 CFLAGS_builtin-timechart.o += $(paths)
 CFLAGS_perf.o              += -DPERF_HTML_PATH="BUILD_STR($(htmldir_SQ))" -include $(OUTPUT)PERF-VERSION-FILE
 
-libperf-y += pmu-events/
 libperf-y += util/
 libperf-y += arch/
 libperf-y += ui/
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index 57e46a541686..a4ba451cffa2 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -272,14 +272,29 @@ strip: $(PROGRAMS) $(OUTPUT)perf
 
 PERF_IN := $(OUTPUT)perf-in.o
 
+JEVENTS       := $(OUTPUT)pmu-events/jevents
+JEVENTS_IN    := $(OUTPUT)pmu-events/jevents-in.o
+PMU_EVENTS_IN := $(OUTPUT)pmu-events/pmu-events-in.o
+
+export JEVENTS
+
 export srctree OUTPUT RM CC LD AR CFLAGS V BISON FLEX
 build := -f $(srctree)/tools/build/Makefile.build dir=. obj
 
 $(PERF_IN): $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h FORCE
 	$(Q)$(MAKE) $(build)=perf
 
-$(OUTPUT)perf: $(PERFLIBS) $(PERF_IN)
-	$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $(PERF_IN) $(LIBS) -o $@
+$(OUTPUT)perf: $(PERFLIBS) $(PERF_IN) $(PMU_EVENTS_IN)
+	$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $(PERF_IN) $(PMU_EVENTS_IN) $(LIBS) -o $@
+
+$(JEVENTS_IN): FORCE
+	$(Q)$(MAKE) -f $(srctree)/tools/build/Makefile.build dir=$(OUTPUT)pmu-events obj=jevents
+
+$(JEVENTS): $(JEVENTS_IN)
+	$(QUIET_LINK)$(CC) $(JEVENTS_IN) -o $@
+
+$(PMU_EVENTS_IN): $(JEVENTS) FORCE
+	$(Q)$(MAKE) -f $(srctree)/tools/build/Makefile.build dir=$(OUTPUT)pmu-events obj=pmu-events
 
 $(GTK_IN): FORCE
 	$(Q)$(MAKE) $(build)=gtk
@@ -538,7 +553,7 @@ clean: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean config-clean
 	$(Q)find . -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete
 	$(Q)$(RM) .config-detected
 	$(call QUIET_CLEAN, core-progs) $(RM) $(ALL_PROGRAMS) perf perf-read-vdso32 perf-read-vdsox32 $(OUTPUT)pmu-events/jevents
-	$(call QUIET_CLEAN, core-gen)   $(RM)  *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)FEATURE-DUMP $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex*
+	$(call QUIET_CLEAN, core-gen)   $(RM)  *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)FEATURE-DUMP $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex* $(OUTPUT)pmu-events/pmu-events.c
 	$(QUIET_SUBDIR0)Documentation $(QUIET_SUBDIR1) clean
 	$(python-clean)
 
diff --git a/tools/perf/pmu-events/Build b/tools/perf/pmu-events/Build
index 7a2aaafa05e5..c35eeec2674c 100644
--- a/tools/perf/pmu-events/Build
+++ b/tools/perf/pmu-events/Build
@@ -1,26 +1,13 @@
-.SUFFIXES:
-
-libperf-y += pmu-events.o
-
-JEVENTS =	$(OUTPUT)pmu-events/jevents
-JEVENTS_OBJS =	$(OUTPUT)pmu-events/json.o $(OUTPUT)pmu-events/jsmn.o \
-		$(OUTPUT)pmu-events/jevents.o
-
-PMU_EVENTS =	$(srctree)/tools/perf/pmu-events/
-
-all: $(OUTPUT)pmu-events.o
-
-$(OUTPUT)pmu-events/jevents: $(JEVENTS_OBJS)
-	$(call rule_mkdir)
-	$(CC) -o $@ $(JEVENTS_OBJS)
+jevents-y    += json.o jsmn.o jevents.o
+pmu-events-y += pmu-events.o
 
 #
-# Look for JSON files in $(PMU_EVENTS)/arch directory,
-# process them and create tables in $(PMU_EVENTS)/pmu-events.c
+# Look for JSON files in arch directory,
+# process them and create tables in pmu-events.c
 #
-pmu-events/pmu-events.c: $(JEVENTS) FORCE
-	$(JEVENTS) $(PMU_EVENTS)/arch $(PMU_EVENTS)/pmu-events.c
- 
+# TODO put event data files as dependencies instead of FORCE
+pmu-events/pmu-events.c: FORCE
+	$(Q)$(call echo-cmd,gen)$(JEVENTS) pmu-events/arch $(OUTPUT)pmu-events/pmu-events.c
 
 #
 # If we fail to build pmu-events.o, it could very well be due to
@@ -30,9 +17,3 @@ pmu-events/pmu-events.c: $(JEVENTS) FORCE
 # so the build of perf can succeed even if we are not able to use
 # the PMU event aliases.
 #
-
-clean:
-	rm -f $(JEVENTS_OBJS) $(JEVENTS) $(OUTPUT)pmu-events.o \
-		$(PMU_EVENTS)pmu-events.c
-
-FORCE:

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-20  0:02 ` [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file Sukadev Bhattiprolu
  2015-05-22 14:56   ` Jiri Olsa
@ 2015-05-22 14:56   ` Jiri Olsa
  2015-05-22 17:25     ` Sukadev Bhattiprolu
  2015-05-27 13:54   ` Namhyung Kim
  2 siblings, 1 reply; 32+ messages in thread
From: Jiri Olsa @ 2015-05-22 14:56 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: mingo, ak, Michael Ellerman, Arnaldo Carvalho de Melo,
	Paul Mackerras, namhyung, linuxppc-dev, linux-kernel

On Tue, May 19, 2015 at 05:02:08PM -0700, Sukadev Bhattiprolu wrote:

SNIP

> +int main(int argc, char *argv[])
> +{
> +	int rc;
> +	int flags;

SNIP

> +
> +	rc = uname(&uts);
> +	if (rc < 0) {
> +		printf("%s: uname() failed: %s\n", argv[0], strerror(errno));
> +		goto empty_map;
> +	}
> +
> +	/* TODO: Add other flavors of machine type here */
> +	if (!strcmp(uts.machine, "ppc64"))
> +		arch = "powerpc";
> +	else if (!strcmp(uts.machine, "i686"))
> +		arch = "x86";
> +	else if (!strcmp(uts.machine, "x86_64"))
> +		arch = "x86";
> +	else {
> +		printf("%s: Unknown architecture %s\n", argv[0], uts.machine);
> +		goto empty_map;
> +	}

hum, wouldn't it be easier to pass the arch directly from the Makefile?
we should have it ready in the $(ARCH) variable..

jirka

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-22 14:56   ` Jiri Olsa
@ 2015-05-22 15:58     ` Sukadev Bhattiprolu
  2015-05-22 17:33       ` Jiri Olsa
  2015-05-22 18:01       ` Andi Kleen
  0 siblings, 2 replies; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-22 15:58 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: ak, linux-kernel, Arnaldo Carvalho de Melo, mingo,
	Paul Mackerras, namhyung, linuxppc-dev

Jiri Olsa [jolsa@redhat.com] wrote:
| On Tue, May 19, 2015 at 05:02:08PM -0700, Sukadev Bhattiprolu wrote:
| 
| SNIP
| 
| > ---
| >  tools/perf/Build                   |    1 +
| >  tools/perf/Makefile.perf           |    4 +-
| >  tools/perf/pmu-events/Build        |   38 ++
| >  tools/perf/pmu-events/README       |   67 ++++
| >  tools/perf/pmu-events/jevents.c    |  700 ++++++++++++++++++++++++++++++++++++
| >  tools/perf/pmu-events/jevents.h    |   17 +
| >  tools/perf/pmu-events/pmu-events.h |   39 ++
| >  7 files changed, 865 insertions(+), 1 deletion(-)
| >  create mode 100644 tools/perf/pmu-events/Build
| >  create mode 100644 tools/perf/pmu-events/README
| >  create mode 100644 tools/perf/pmu-events/jevents.c
| >  create mode 100644 tools/perf/pmu-events/jevents.h
| >  create mode 100644 tools/perf/pmu-events/pmu-events.h
| > 
| > diff --git a/tools/perf/Build b/tools/perf/Build
| > index b77370e..40bffa0 100644
| > --- a/tools/perf/Build
| > +++ b/tools/perf/Build
| > @@ -36,6 +36,7 @@ CFLAGS_builtin-help.o      += $(paths)
| >  CFLAGS_builtin-timechart.o += $(paths)
| >  CFLAGS_perf.o              += -DPERF_HTML_PATH="BUILD_STR($(htmldir_SQ))" -include $(OUTPUT)PERF-VERSION-FILE
| >  
| > +libperf-y += pmu-events/
| 
| there's no concept (yet) in the new build system to trigger
| another binary build as a dependency for an object file.. I'd
| rather do this the framework way, please check the attached patch
| 
| also currently the pmu-events.c is generated every time,
| so we need to add the event json data files as a dependency

pmu-events.c depends only on JSON files relevant to the arch perf is
being built on and there could be several JSON files per arch. So it
would complicate the Makefiles.

Besides, didn't we conclude that the cost of generating pmu-events.c
during build is negligible ?

| 
| jirka
| 
| 
| ---
| diff --git a/tools/build/Makefile.build b/tools/build/Makefile.build
| index 10df57237a66..f6e7fd868892 100644
| --- a/tools/build/Makefile.build
| +++ b/tools/build/Makefile.build
| @@ -41,6 +41,7 @@ include $(build-file)
|  
|  quiet_cmd_flex  = FLEX     $@
|  quiet_cmd_bison = BISON    $@
| +quiet_cmd_gen   = GEN      $@
|  
|  # Create directory unless it exists
|  quiet_cmd_mkdir = MKDIR    $(dir $@)
| diff --git a/tools/perf/Build b/tools/perf/Build
| index 40bffa0b6ee1..b77370ef7005 100644
| --- a/tools/perf/Build
| +++ b/tools/perf/Build
| @@ -36,7 +36,6 @@ CFLAGS_builtin-help.o      += $(paths)
|  CFLAGS_builtin-timechart.o += $(paths)
|  CFLAGS_perf.o              += -DPERF_HTML_PATH="BUILD_STR($(htmldir_SQ))" -include $(OUTPUT)PERF-VERSION-FILE
|  
| -libperf-y += pmu-events/
|  libperf-y += util/
|  libperf-y += arch/
|  libperf-y += ui/
| diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
| index 57e46a541686..a4ba451cffa2 100644
| --- a/tools/perf/Makefile.perf
| +++ b/tools/perf/Makefile.perf
| @@ -272,14 +272,29 @@ strip: $(PROGRAMS) $(OUTPUT)perf
|  
|  PERF_IN := $(OUTPUT)perf-in.o
|  
| +JEVENTS       := $(OUTPUT)pmu-events/jevents
| +JEVENTS_IN    := $(OUTPUT)pmu-events/jevents-in.o
| +PMU_EVENTS_IN := $(OUTPUT)pmu-events/pmu-events-in.o

I will try this out, but why not just add pmu-events.o to libperf?

| +
| +export JEVENTS
| +
|  export srctree OUTPUT RM CC LD AR CFLAGS V BISON FLEX
|  build := -f $(srctree)/tools/build/Makefile.build dir=. obj
|  
|  $(PERF_IN): $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h FORCE
|  	$(Q)$(MAKE) $(build)=perf
|  
| -$(OUTPUT)perf: $(PERFLIBS) $(PERF_IN)
| -	$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $(PERF_IN) $(LIBS) -o $@
| +$(OUTPUT)perf: $(PERFLIBS) $(PERF_IN) $(PMU_EVENTS_IN)
| +	$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $(PERF_IN) $(PMU_EVENTS_IN) $(LIBS) -o $@
| +
| +$(JEVENTS_IN): FORCE
| +	$(Q)$(MAKE) -f $(srctree)/tools/build/Makefile.build dir=$(OUTPUT)pmu-events obj=jevents
| +
| +$(JEVENTS): $(JEVENTS_IN)
| +	$(QUIET_LINK)$(CC) $(JEVENTS_IN) -o $@
| +
| +$(PMU_EVENTS_IN): $(JEVENTS) FORCE
| +	$(Q)$(MAKE) -f $(srctree)/tools/build/Makefile.build dir=$(OUTPUT)pmu-events obj=pmu-events
|  
|  $(GTK_IN): FORCE
|  	$(Q)$(MAKE) $(build)=gtk
| @@ -538,7 +553,7 @@ clean: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean config-clean
|  	$(Q)find . -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete
|  	$(Q)$(RM) .config-detected
|  	$(call QUIET_CLEAN, core-progs) $(RM) $(ALL_PROGRAMS) perf perf-read-vdso32 perf-read-vdsox32 $(OUTPUT)pmu-events/jevents
| -	$(call QUIET_CLEAN, core-gen)   $(RM)  *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)FEATURE-DUMP $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex*
| +	$(call QUIET_CLEAN, core-gen)   $(RM)  *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)FEATURE-DUMP $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex* $(OUTPUT)pmu-events/pmu-events.c
|  	$(QUIET_SUBDIR0)Documentation $(QUIET_SUBDIR1) clean
|  	$(python-clean)
|  
| diff --git a/tools/perf/pmu-events/Build b/tools/perf/pmu-events/Build
| index 7a2aaafa05e5..c35eeec2674c 100644
| --- a/tools/perf/pmu-events/Build
| +++ b/tools/perf/pmu-events/Build
| @@ -1,26 +1,13 @@
| -.SUFFIXES:
| -
| -libperf-y += pmu-events.o
| -
| -JEVENTS =	$(OUTPUT)pmu-events/jevents
| -JEVENTS_OBJS =	$(OUTPUT)pmu-events/json.o $(OUTPUT)pmu-events/jsmn.o \
| -		$(OUTPUT)pmu-events/jevents.o
| -
| -PMU_EVENTS =	$(srctree)/tools/perf/pmu-events/
| -
| -all: $(OUTPUT)pmu-events.o
| -
| -$(OUTPUT)pmu-events/jevents: $(JEVENTS_OBJS)
| -	$(call rule_mkdir)
| -	$(CC) -o $@ $(JEVENTS_OBJS)
| +jevents-y    += json.o jsmn.o jevents.o
| +pmu-events-y += pmu-events.o
|  
|  #
| -# Look for JSON files in $(PMU_EVENTS)/arch directory,
| -# process them and create tables in $(PMU_EVENTS)/pmu-events.c
| +# Look for JSON files in arch directory,
| +# process them and create tables in pmu-events.c
|  #
| -pmu-events/pmu-events.c: $(JEVENTS) FORCE
| -	$(JEVENTS) $(PMU_EVENTS)/arch $(PMU_EVENTS)/pmu-events.c
| - 
| +# TODO put event data files as dependencies instead of FORCE
| +pmu-events/pmu-events.c: FORCE
| +	$(Q)$(call echo-cmd,gen)$(JEVENTS) pmu-events/arch $(OUTPUT)pmu-events/pmu-events.c
|  
|  #
|  # If we fail to build pmu-events.o, it could very well be due to
| @@ -30,9 +17,3 @@ pmu-events/pmu-events.c: $(JEVENTS) FORCE
|  # so the build of perf can succeed even if we are not able to use
|  # the PMU event aliases.
|  #
| -
| -clean:
| -	rm -f $(JEVENTS_OBJS) $(JEVENTS) $(OUTPUT)pmu-events.o \
| -		$(PMU_EVENTS)pmu-events.c
| -
| -FORCE:
| _______________________________________________
| Linuxppc-dev mailing list
| Linuxppc-dev@lists.ozlabs.org
| https://lists.ozlabs.org/listinfo/linuxppc-dev


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-22 14:56   ` Jiri Olsa
@ 2015-05-22 17:25     ` Sukadev Bhattiprolu
  0 siblings, 0 replies; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-22 17:25 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: mingo, ak, Michael Ellerman, Arnaldo Carvalho de Melo,
	Paul Mackerras, namhyung, linuxppc-dev, linux-kernel

Jiri Olsa [jolsa@redhat.com] wrote:
| On Tue, May 19, 2015 at 05:02:08PM -0700, Sukadev Bhattiprolu wrote:
| 
| SNIP
| 
| > +int main(int argc, char *argv[])
| > +{
| > +	int rc;
| > +	int flags;
| 
| SNIP
| 
| > +
| > +	rc = uname(&uts);
| > +	if (rc < 0) {
| > +		printf("%s: uname() failed: %s\n", argv[0], strerror(errno));
| > +		goto empty_map;
| > +	}
| > +
| > +	/* TODO: Add other flavors of machine type here */
| > +	if (!strcmp(uts.machine, "ppc64"))
| > +		arch = "powerpc";
| > +	else if (!strcmp(uts.machine, "i686"))
| > +		arch = "x86";
| > +	else if (!strcmp(uts.machine, "x86_64"))
| > +		arch = "x86";
| > +	else {
| > +		printf("%s: Unknown architecture %s\n", argv[0], uts.machine);
| > +		goto empty_map;
| > +	}
| 
| hum, wouldn't it be easier to pass the arch directly from the Makefile?
| we should have it ready in the $(ARCH) variable..

Yes, I will do that and make all three args (arch, start_dir, output_file)
mandatory (jevents won't be run from command line often, it doesn't need
default args).
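
A rough sketch of that interface, assuming the three mandatory arguments;
process_all_json() is a hypothetical stand-in for the existing JSON and
mapfile processing:

/*
 * Sketch only: jevents with all three arguments mandatory, so the arch
 * comes from the Makefile's $(ARCH) instead of uname().
 */
#include <stdio.h>

int process_all_json(const char *arch, const char *start_dir,
		     const char *output_file);	/* hypothetical stand-in */

int main(int argc, char *argv[])
{
	if (argc != 4) {
		fprintf(stderr, "Usage: %s <arch> <json-dir> <output-file>\n",
			argv[0]);
		return 1;
	}

	/* argv[1] is e.g. "powerpc" or "x86", passed as $(ARCH) from make */
	return process_all_json(argv[1], argv[2], argv[3]);
}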


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-22 15:58     ` Sukadev Bhattiprolu
@ 2015-05-22 17:33       ` Jiri Olsa
  2015-05-22 18:01       ` Andi Kleen
  1 sibling, 0 replies; 32+ messages in thread
From: Jiri Olsa @ 2015-05-22 17:33 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: ak, linux-kernel, Arnaldo Carvalho de Melo, mingo,
	Paul Mackerras, namhyung, linuxppc-dev

On Fri, May 22, 2015 at 08:58:22AM -0700, Sukadev Bhattiprolu wrote:

SNIP

> | 
> | there's no concept (yet) in the new build system to trigger
> | another binary build as a dependency for an object file.. I'd
> | rather do this the framework way, please check the attached patch
> | 
> | also currently the pmu-events.c is generated every time,
> | so we need to add the event json data files as a dependency
> 
> pmu-events.c depends only on JSON files relevant to the arch perf is
> being built on and there could be several JSON files per arch. So it
> would complicate the Makefiles.
> 
> Besides, didn't we conclude that the cost of generating pmu-events.c
> during build is negligible ?

yes, but only when it's necessary.. if there's no change in definitions
and we already have pmu-events.o built.. why rebuild?

> |  
> | -libperf-y += pmu-events/
> |  libperf-y += util/
> |  libperf-y += arch/
> |  libperf-y += ui/
> | diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
> | index 57e46a541686..a4ba451cffa2 100644
> | --- a/tools/perf/Makefile.perf
> | +++ b/tools/perf/Makefile.perf
> | @@ -272,14 +272,29 @@ strip: $(PROGRAMS) $(OUTPUT)perf
> |  
> |  PERF_IN := $(OUTPUT)perf-in.o
> |  
> | +JEVENTS       := $(OUTPUT)pmu-events/jevents
> | +JEVENTS_IN    := $(OUTPUT)pmu-events/jevents-in.o
> | +PMU_EVENTS_IN := $(OUTPUT)pmu-events/pmu-events-in.o
> 
> I will try this out, but why not just add pmu-events.o to libperf?

this is related to my first comment:

	> | there's no concept (yet) in the new build system to trigger
	> | another binary build as a dependency for an object file.. I'd
	> | rather do this the framework way, please check the attached patch

it's not possible to trigger the application build within the Build file
in the way the framework was designed.. so it cannot easily display commands,
handle dependencies etc.. it just allows the simple/hacky solution you did ;-)

so I separated out pmu-events.o so that libperf does not have a dependency
on the jevents application, and treated it as a separate object

jirka

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-22 15:58     ` Sukadev Bhattiprolu
  2015-05-22 17:33       ` Jiri Olsa
@ 2015-05-22 18:01       ` Andi Kleen
  2015-05-22 18:09         ` Sukadev Bhattiprolu
  1 sibling, 1 reply; 32+ messages in thread
From: Andi Kleen @ 2015-05-22 18:01 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: Jiri Olsa, linux-kernel, Arnaldo Carvalho de Melo, mingo,
	Paul Mackerras, namhyung, linuxppc-dev

> pmu-events.c depends only on JSON files relevant to the arch perf is
> being built on and there could be several JSON files per arch. So it
> would complicate the Makefiles.

Could just use a wildcard dependency on */$(ARCH)/*.json 

Also it would be good to move the generated file into the object
directory. I tried it but it needs some more changes to the Makefiles.

-Andi


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-22 18:01       ` Andi Kleen
@ 2015-05-22 18:09         ` Sukadev Bhattiprolu
  2015-05-22 21:28           ` Andi Kleen
  0 siblings, 1 reply; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-22 18:09 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Jiri Olsa, linux-kernel, Arnaldo Carvalho de Melo, mingo,
	Paul Mackerras, namhyung, linuxppc-dev

Andi Kleen [ak@linux.intel.com] wrote:
| > pmu-events.c depends only on JSON files relevant to the arch perf is
| > being built on and there could be several JSON files per arch. So it
| > would complicate the Makefiles.
| 
| Could just use a wildcard dependency on */$(ARCH)/*.json 

Sure, but shouldn't we allow JSON files to be in subdirs

	pmu-events/arch/x86/HSX/Haswell_core.json

and this could go to arbitrary levels?

| 
| Also it would be good to move the generated file into the object
| directory. I tried it but it needs some more changes to the Makefiles.
| 
| -Andi


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-22 18:09         ` Sukadev Bhattiprolu
@ 2015-05-22 21:28           ` Andi Kleen
  0 siblings, 0 replies; 32+ messages in thread
From: Andi Kleen @ 2015-05-22 21:28 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: Jiri Olsa, linux-kernel, Arnaldo Carvalho de Melo, mingo,
	Paul Mackerras, namhyung, linuxppc-dev

> Sure, but shouldn't we allow JSON files to be in subdirs
> 
> 	pmu-events/arch/x86/HSX/Haswell_core.json
> 
> and this could go to arbitrary levels?

I used a flat hierarchy. Should be good enough.
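
With a flat per-arch directory the scan stays simple; a sketch under that
assumption (process_one_file() is a hypothetical stand-in for the JSON
conversion step):

/*
 * Sketch only: scan a single pmu-events/arch/<arch>/ directory for
 * *.json files, no recursion into subdirectories.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int process_one_file(const char *path);	/* hypothetical stand-in */

static int scan_arch_dir(const char *arch_dir)
{
	struct dirent *de;
	DIR *dir = opendir(arch_dir);
	char path[4096];

	if (!dir)
		return -1;

	while ((de = readdir(dir)) != NULL) {
		const char *dot = strrchr(de->d_name, '.');

		if (!dot || strcmp(dot, ".json"))
			continue;	/* skip mapfile.csv, ".", "..", etc. */
		snprintf(path, sizeof(path), "%s/%s", arch_dir, de->d_name);
		process_one_file(path);
	}
	closedir(dir);
	return 0;
}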

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-20  0:02 ` [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file Sukadev Bhattiprolu
  2015-05-22 14:56   ` Jiri Olsa
  2015-05-22 14:56   ` Jiri Olsa
@ 2015-05-27 13:54   ` Namhyung Kim
  2015-05-27 14:40     ` Andi Kleen
  2 siblings, 1 reply; 32+ messages in thread
From: Namhyung Kim @ 2015-05-27 13:54 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras, linuxppc-dev, linux-kernel

Hi Sukadev,

On Tue, May 19, 2015 at 05:02:08PM -0700, Sukadev Bhattiprolu wrote:
> From: Andi Kleen <ak@linux.intel.com>
> 
> This is a modified version of an earlier patch by Andi Kleen.
> 
> We expect architectures to describe the performance monitoring events
> for each CPU in a corresponding JSON file, which look like:
> 
> 	[
> 	{
> 	"EventCode": "0x00",
> 	"UMask": "0x01",
> 	"EventName": "INST_RETIRED.ANY",
> 	"BriefDescription": "Instructions retired from execution.",
> 	"PublicDescription": "Instructions retired from execution.",
> 	"Counter": "Fixed counter 1",
> 	"CounterHTOff": "Fixed counter 1",
> 	"SampleAfterValue": "2000003",
> 	"SampleAfterValue": "2000003",
> 	"MSRIndex": "0",
> 	"MSRValue": "0",
> 	"TakenAlone": "0",
> 	"CounterMask": "0",
> 	"Invert": "0",
> 	"AnyThread": "0",
> 	"EdgeDetect": "0",
> 	"PEBS": "0",
> 	"PRECISE_STORE": "0",
> 	"Errata": "null",
> 	"Offcore": "0"
> 	}
> 	]
> 
> We also expect the architectures to provide a mapping between individual
> CPUs to their JSON files. Eg:
> 
> 	GenuineIntel-6-1E,V1,/NHM-EP/NehalemEP_core_V1.json,core
> 
> which maps each CPU, identified by [vendor, family, model, version, type]
> to a JSON file.
> 
> Given these files, the program, jevents::
> 	- locates all JSON files for the architecture,
> 	- parses each JSON file and generates a C-style "PMU-events table"
> 	  (pmu-events.c)
> 	- locates a mapfile for the architecture
> 	- builds a global table, mapping each model of CPU to the
> 	  corresponding PMU-events table.

So we build tables of all models in the architecture, and choose
matching one when compiling perf, right?  Can't we do that when
building the tables?  IOW, why don't we check the VFM and discard
non-matching tables?  Those non-matching tables are also needed?

Sorry if I missed something..


> 
> The 'pmu-events.c' is generated when building perf and added to libperf.a.
> The global table pmu_events_map[] table in this pmu-events.c will be used
> in perf in a follow-on patch.
> 
> If the architecture does not have any JSON files or there is an error in
> processing them, an empty mapping file is created. This would allow the
> build of perf to proceed even if we are not able to provide aliases for
> events.
> 
> The parser for JSON files allows parsing Intel style JSON event files. This
> allows to use an Intel event list directly with perf. The Intel event lists
> can be quite large and are too big to store in unswappable kernel memory.
> 
> The conversion from JSON to C-style is straight forward.  The parser knows
> (very little) Intel specific information, and can be easily extended to
> handle fields for other CPUs.
> 
> The parser code is partially shared with an independent parsing library,
> which is 2-clause BSD licenced. To avoid any conflicts I marked those
> files as BSD licenced too. As part of perf they become GPLv2.
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> 
> v2: Address review feedback. Rename option to --event-files
> v3: Add JSON example
> v4: Update manpages.
> v5: Don't remove dot in fixname. Fix compile error. Add include
> 	protection. Comment realloc.
> v6: Include debug/util.h
> v7: (Sukadev Bhattiprolu)
> 	Rebase to 4.0 and fix some conflicts.
> v8: (Sukadev Bhattiprolu)
> 	Move jevents.[hc] to tools/perf/pmu-events/
> 	Rewrite to locate and process arch specific JSON and "map" files;
> 	and generate a C file.
> 	(Removed acked-by Namhyung Kim due to modest changes to patch)
> 	Compile the generated pmu-events.c and add the pmu-events.o to
> 	libperf.a
> ---

[SNIP]
> +/* Call func with each event in the json file */
> +int json_events(const char *fn,
> +	  int (*func)(void *data, char *name, char *event, char *desc),
> +	  void *data)
> +{
> +	int err = -EIO;
> +	size_t size;
> +	jsmntok_t *tokens, *tok;
> +	int i, j, len;
> +	char *map;
> +
> +	if (!fn)
> +		return -ENOENT;
> +
> +	tokens = parse_json(fn, &map, &size, &len);
> +	if (!tokens)
> +		return -EIO;
> +	EXPECT(tokens->type == JSMN_ARRAY, tokens, "expected top level array");
> +	tok = tokens + 1;
> +	for (i = 0; i < tokens->size; i++) {
> +		char *event = NULL, *desc = NULL, *name = NULL;
> +		struct msrmap *msr = NULL;
> +		jsmntok_t *msrval = NULL;
> +		jsmntok_t *precise = NULL;
> +		jsmntok_t *obj = tok++;
> +
> +		EXPECT(obj->type == JSMN_OBJECT, obj, "expected object");
> +		for (j = 0; j < obj->size; j += 2) {
> +			jsmntok_t *field, *val;
> +			int nz;
> +
> +			field = tok + j;
> +			EXPECT(field->type == JSMN_STRING, tok + j,
> +			       "Expected field name");
> +			val = tok + j + 1;
> +			EXPECT(val->type == JSMN_STRING, tok + j + 1,
> +			       "Expected string value");
> +
> +			nz = !json_streq(map, val, "0");
> +			if (match_field(map, field, nz, &event, val)) {
> +				/* ok */
> +			} else if (json_streq(map, field, "EventName")) {
> +				addfield(map, &name, "", "", val);
> +			} else if (json_streq(map, field, "BriefDescription")) {
> +				addfield(map, &desc, "", "", val);
> +				fixdesc(desc);
> +			} else if (json_streq(map, field, "PEBS") && nz) {
> +				precise = val;
> +			} else if (json_streq(map, field, "MSRIndex") && nz) {
> +				msr = lookup_msr(map, val);
> +			} else if (json_streq(map, field, "MSRValue")) {
> +				msrval = val;
> +			} else if (json_streq(map, field, "Errata") &&
> +				   !json_streq(map, val, "null")) {
> +				addfield(map, &desc, ". ",
> +					" Spec update: ", val);
> +			} else if (json_streq(map, field, "Data_LA") && nz) {
> +				addfield(map, &desc, ". ",
> +					" Supports address when precise",
> +					NULL);
> +			}

Wouldn't it be better to split out arch-specific fields and put them
somewhere in the arch directory?

> +			/* ignore unknown fields */
> +		}
> +		if (precise && !strstr(desc, "(Precise Event)")) {
> +			if (json_streq(map, precise, "2"))
> +				addfield(map, &desc, " ", "(Must be precise)",
> +						NULL);
> +			else
> +				addfield(map, &desc, " ",
> +						"(Precise event)", NULL);
> +		}
> +		if (msr != NULL)
> +			addfield(map, &event, ",", msr->pname, msrval);
> +		fixname(name);
> +		err = func(data, name, event, desc);
> +		free(event);
> +		free(desc);
> +		free(name);
> +		if (err)
> +			break;
> +		tok += j;
> +	}
> +	EXPECT(tok - tokens == len, tok, "unexpected objects at end");
> +	err = 0;
> +out_free:
> +	free_json(map, size, tokens);
> +	return err;
> +}

[SNIP]
> +static int process_mapfile(FILE *outfp, char *fpath)
> +{
> +	int n = 16384;
> +	FILE *mapfp;
> +	char *save;
> +	char *line, *p;
> +	int line_num;
> +	char *tblname;
> +
> +	printf("Processing mapfile %s\n", fpath);
> +
> +	line = malloc(n);
> +	if (!line)
> +		return -1;
> +
> +	mapfp = fopen(fpath, "r");
> +	if (!mapfp) {
> +		printf("Error %s opening %s\n", strerror(errno), fpath);
> +		return -1;
> +	}
> +
> +	print_mapping_table_prefix(outfp);
> +
> +	line_num = 0;
> +	while (1) {
> +		char *vfm, *version, *type, *fname;
> +
> +		line_num++;
> +		p = fgets(line, n, mapfp);
> +		if (!p)
> +			break;
> +
> +		if (line[0] == '#')
> +			continue;
> +
> +		if (line[strlen(line)-1] != '\n') {
> +			/* TODO Deal with lines longer than 16K */
> +			printf("Mapfile %s: line %d too long, aborting\n",
> +					fpath, line_num);
> +			return -1;
> +		}
> +		line[strlen(line)-1] = '\0';
> +
> +		vfm = strtok_r(p, ",", &save);
> +		version = strtok_r(NULL, ",", &save);
> +		fname = strtok_r(NULL, ",", &save);
> +		type = strtok_r(NULL, ",", &save);
> +
> +		tblname = file_name_to_table_name(fname);
> +		fprintf(outfp, "{\n");
> +		fprintf(outfp, "\t.vfm = \"%s\",\n", vfm);
> +		fprintf(outfp, "\t.version = \"%s\",\n", version);
> +		fprintf(outfp, "\t.type = \"%s\",\n", type);
> +
> +		/*
> +		 * CHECK: We can't use the type (eg "core") field in the
> +		 * table name. For us to do that, we need to somehow tweak
> +		 * the other caller of file_name_to_table(), process_json()
> +		 * to determine the type. process_json() file has no way
> +		 * of knowing these are "core" events unless file name has
> +		 * core in it. If filename has core in it, we can safely
> +		 * ignore the type field here also.
> +		 */
> +		fprintf(outfp, "\t.table = %s\n", tblname);
> +		fprintf(outfp, "},\n");
> +	}
> +
> +	print_mapping_table_suffix(outfp);
> +

You need to free 'line' for each return path..

Thanks,
Namhyung


> +	return 0;
> +}

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 4/4] perf: Add power8 PMU events in JSON format
  2015-05-20  0:02 ` [PATCH 4/4] perf: Add power8 PMU events in JSON format Sukadev Bhattiprolu
@ 2015-05-27 13:59   ` Namhyung Kim
  2015-05-27 14:41     ` Andi Kleen
  0 siblings, 1 reply; 32+ messages in thread
From: Namhyung Kim @ 2015-05-27 13:59 UTC (permalink / raw)
  To: Sukadev Bhattiprolu
  Cc: mingo, ak, Michael Ellerman, Jiri Olsa, Arnaldo Carvalho de Melo,
	Paul Mackerras, linuxppc-dev, linux-kernel

On Tue, May 19, 2015 at 05:02:10PM -0700, Sukadev Bhattiprolu wrote:
> The power8.json and 004d0100.json files describe the PMU events in the
> Power8 processor.
> 
> The jevents program from the prior patches will use these JSON files
> to create tables which will then be used in perf to build aliases for
> PMU events. This in turn would allow users to specify these PMU events
> by name:
> 
> 	$ perf stat -e pm_1plus_ppc_cmpl sleep 1
> 
> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
> ---

[SNIP]
> +  {
> +    "EventCode": "0x2505e",
> +    "EventName": "PM_BACK_BR_CMPL",
> +    "BriefDescription": "Branch instruction completed with a target address less than current instruction address,",
> +    "PublicDescription": "Branch instruction completed with a target address less than current instruction address.,"

Can't we remove PublicDescription field if it's identical to
BriefDescription?  It seems just wasting spaces..

Thanks,
Namhyung


> +  },
> +  {
> +    "EventCode": "0x4082",
> +    "EventName": "PM_BANK_CONFLICT",
> +    "BriefDescription": "Read blocked due to interleave conflict. The ifar logic will detect an interleave conflict and kill the data that was read that cycle.,",
> +    "PublicDescription": "Read blocked due to interleave conflict. The ifar logic will detect an interleave conflict and kill the data that was read that cycle.,"
> +  },
> +  {
> +    "EventCode": "0x10068",
> +    "EventName": "PM_BRU_FIN",
> +    "BriefDescription": "Branch Instruction Finished,",
> +    "PublicDescription": "Branch Instruction Finished .,"
> +  },
> +  {
> +    "EventCode": "0x20036",
> +    "EventName": "PM_BR_2PATH",
> +    "BriefDescription": "two path branch,",
> +    "PublicDescription": "two path branch.,"
> +  },
> +  {
> +    "EventCode": "0x5086",
> +    "EventName": "PM_BR_BC_8",
> +    "BriefDescription": "Pairable BC+8 branch that has not been converted to a Resolve Finished in the BRU pipeline,",
> +    "PublicDescription": "Pairable BC+8 branch that has not been converted to a Resolve Finished in the BRU pipeline,"
> +  },
> +  {
> +    "EventCode": "0x5084",
> +    "EventName": "PM_BR_BC_8_CONV",
> +    "BriefDescription": "Pairable BC+8 branch that was converted to a Resolve Finished in the BRU pipeline.,",
> +    "PublicDescription": "Pairable BC+8 branch that was converted to a Resolve Finished in the BRU pipeline.,"
> +  },

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-27 13:54   ` Namhyung Kim
@ 2015-05-27 14:40     ` Andi Kleen
  2015-05-27 14:59       ` Namhyung Kim
  0 siblings, 1 reply; 32+ messages in thread
From: Andi Kleen @ 2015-05-27 14:40 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Sukadev Bhattiprolu, mingo, Michael Ellerman, Jiri Olsa,
	Arnaldo Carvalho de Melo, Paul Mackerras, linuxppc-dev,
	linux-kernel

> So we build tables of all models in the architecture, and choose
> matching one when compiling perf, right?  Can't we do that when
> building the tables?  IOW, why don't we check the VFM and discard
> non-matching tables?  Those non-matching tables are also needed?

We build it for all cpus in an architecture, not all architectures.
So e.g. for an x86 binary power is not included, and vice versa.
It always includes all CPUs for a given architecture, so it's possible
to use the perf binary on other systems than just the one it was 
built on.

-andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 4/4] perf: Add power8 PMU events in JSON format
  2015-05-27 13:59   ` Namhyung Kim
@ 2015-05-27 14:41     ` Andi Kleen
  2015-05-27 15:01       ` Namhyung Kim
  0 siblings, 1 reply; 32+ messages in thread
From: Andi Kleen @ 2015-05-27 14:41 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Sukadev Bhattiprolu, mingo, Michael Ellerman, Jiri Olsa,
	Arnaldo Carvalho de Melo, Paul Mackerras, linuxppc-dev,
	linux-kernel

> > +  {
> > +    "EventCode": "0x2505e",
> > +    "EventName": "PM_BACK_BR_CMPL",
> > +    "BriefDescription": "Branch instruction completed with a target address less than current instruction address,",
> > +    "PublicDescription": "Branch instruction completed with a target address less than current instruction address.,"
> 
> Can't we remove PublicDescription field if it's identical to
> BriefDescription?  It seems just wasting spaces..

It's not always identical. There are events where PublicDescription is much longer (several paragraphs)

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-27 14:40     ` Andi Kleen
@ 2015-05-27 14:59       ` Namhyung Kim
  2015-05-28 11:52         ` Jiri Olsa
  0 siblings, 1 reply; 32+ messages in thread
From: Namhyung Kim @ 2015-05-27 14:59 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Sukadev Bhattiprolu, Ingo Molnar, Michael Ellerman, Jiri Olsa,
	Arnaldo Carvalho de Melo, Paul Mackerras, linuxppc-dev,
	linux-kernel

Hi Andi,

On Wed, May 27, 2015 at 11:40 PM, Andi Kleen <ak@linux.intel.com> wrote:
>> So we build tables of all models in the architecture, and choose
>> matching one when compiling perf, right?  Can't we do that when
>> building the tables?  IOW, why don't we check the VFM and discard
>> non-matching tables?  Those non-matching tables are also needed?
>
> We build it for all cpus in an architecture, not all architectures.
> So e.g. for an x86 binary power is not included, and vice versa.

OK.

> It always includes all CPUs for a given architecture, so it's possible
> to use the perf binary on other systems than just the one it was
> built on.

So it selects one at run-time not build-time, good.  But I worry about
the size of the intel tables.  How large are they?  Maybe we can make
it dynamic-loadable if needed..

Thanks,
Namhyung

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 4/4] perf: Add power8 PMU events in JSON format
  2015-05-27 14:41     ` Andi Kleen
@ 2015-05-27 15:01       ` Namhyung Kim
  2015-05-27 16:24         ` Andi Kleen
  0 siblings, 1 reply; 32+ messages in thread
From: Namhyung Kim @ 2015-05-27 15:01 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Sukadev Bhattiprolu, Ingo Molnar, Michael Ellerman, Jiri Olsa,
	Arnaldo Carvalho de Melo, Paul Mackerras, linuxppc-dev,
	linux-kernel

On Wed, May 27, 2015 at 11:41 PM, Andi Kleen <ak@linux.intel.com> wrote:
>> > +  {
>> > +    "EventCode": "0x2505e",
>> > +    "EventName": "PM_BACK_BR_CMPL",
>> > +    "BriefDescription": "Branch instruction completed with a target address less than current instruction address,",
>> > +    "PublicDescription": "Branch instruction completed with a target address less than current instruction address.,"
>>
>> Can't we remove PublicDescription field if it's identical to
>> BriefDescription?  It seems just wasting spaces..
>
> It's not always identical. There are events where PublicDescription is much longer (several paragraphs)

I know.  What I said is make it optional so that we can drop if it's identical.

Thanks,
Namhyung

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 4/4] perf: Add power8 PMU events in JSON format
  2015-05-27 15:01       ` Namhyung Kim
@ 2015-05-27 16:24         ` Andi Kleen
  2015-05-27 20:24           ` Sukadev Bhattiprolu
  0 siblings, 1 reply; 32+ messages in thread
From: Andi Kleen @ 2015-05-27 16:24 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Sukadev Bhattiprolu, Ingo Molnar, Michael Ellerman, Jiri Olsa,
	Arnaldo Carvalho de Melo, Paul Mackerras, linuxppc-dev,
	linux-kernel

On Thu, May 28, 2015 at 12:01:31AM +0900, Namhyung Kim wrote:
> On Wed, May 27, 2015 at 11:41 PM, Andi Kleen <ak@linux.intel.com> wrote:
> >> > +  {
> >> > +    "EventCode": "0x2505e",
> >> > +    "EventName": "PM_BACK_BR_CMPL",
> >> > +    "BriefDescription": "Branch instruction completed with a target address less than current instruction address,",
> >> > +    "PublicDescription": "Branch instruction completed with a target address less than current instruction address.,"
> >>
> >> Can't we remove PublicDescription field if it's identical to
> >> BriefDescription?  It seems just wasting spaces..
> >
> > It's not always identical. There are events where PublicDescription is much longer (several paragraphs)
> 
> I know.  What I said is make it optional so that we can drop if it's identical.

Should be easy enough. It's already optional in the jevents parser.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 4/4] perf: Add power8 PMU events in JSON format
  2015-05-27 16:24         ` Andi Kleen
@ 2015-05-27 20:24           ` Sukadev Bhattiprolu
  0 siblings, 0 replies; 32+ messages in thread
From: Sukadev Bhattiprolu @ 2015-05-27 20:24 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Namhyung Kim, Ingo Molnar, Michael Ellerman, Jiri Olsa,
	Arnaldo Carvalho de Melo, Paul Mackerras, linuxppc-dev,
	linux-kernel

Andi Kleen [ak@linux.intel.com] wrote:
| > I know.  What I said is make it optional so that we can drop if it's identical.
| 
| Should be easy enough. It's already optional in the jevents parser.

I have removed the duplicated entries in power8.json.
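
For the parser-side option mentioned above, a minimal sketch of how an
identical PublicDescription could be collapsed before it reaches the
generated table; illustrative only, since the fix applied here was simply
removing the duplicates from the JSON data:

/*
 * Sketch only: keep the long description only when it adds something
 * over the brief one.  Helper name and signature are illustrative.
 */
#include <stdlib.h>
#include <string.h>

static char *choose_long_desc(const char *brief, char *pub)
{
	if (pub && brief && !strcmp(pub, brief)) {
		free(pub);	/* identical to the brief text: drop it */
		return NULL;
	}
	return pub;		/* NULL or a genuinely longer description */
}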


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-27 14:59       ` Namhyung Kim
@ 2015-05-28 11:52         ` Jiri Olsa
  2015-05-28 12:09           ` Ingo Molnar
  0 siblings, 1 reply; 32+ messages in thread
From: Jiri Olsa @ 2015-05-28 11:52 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Andi Kleen, Sukadev Bhattiprolu, Ingo Molnar, Michael Ellerman,
	Arnaldo Carvalho de Melo, Paul Mackerras, linuxppc-dev,
	linux-kernel

On Wed, May 27, 2015 at 11:59:04PM +0900, Namhyung Kim wrote:
> Hi Andi,
> 
> On Wed, May 27, 2015 at 11:40 PM, Andi Kleen <ak@linux.intel.com> wrote:
> >> So we build tables of all models in the architecture, and choose
> >> matching one when compiling perf, right?  Can't we do that when
> >> building the tables?  IOW, why don't we check the VFM and discard
> >> non-matching tables?  Those non-matching tables are also needed?
> >
> > We build it for all cpus in an architecture, not all architectures.
> > So e.g. for an x86 binary power is not included, and vice versa.
> 
> OK.
> 
> > It always includes all CPUs for a given architecture, so it's possible
> > to use the perf binary on other systems than just the one it was
> > built on.
> 
> So it selects one at run-time not build-time, good.  But I worry about
> the size of the intel tables.  How large are they?  Maybe we can make
> it dynamic-loadable if needed..

just compiled Sukadev's new version with Andi's events list
and the stripped binary size is:

[jolsa@krava perf]$ ls -l perf
-rwxrwxr-x 1 jolsa jolsa 2772640 May 28 13:49 perf


while perf on Arnaldo's perf/core is:

[jolsa@krava perf]$ ls -l perf
-rwxrwxr-x 1 jolsa jolsa 2334816 May 28 13:49 perf


seems not that bad

jirka

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-28 11:52         ` Jiri Olsa
@ 2015-05-28 12:09           ` Ingo Molnar
  2015-05-28 13:07             ` Ingo Molnar
  0 siblings, 1 reply; 32+ messages in thread
From: Ingo Molnar @ 2015-05-28 12:09 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Namhyung Kim, Andi Kleen, Sukadev Bhattiprolu, Ingo Molnar,
	Michael Ellerman, Arnaldo Carvalho de Melo, Paul Mackerras,
	linuxppc-dev, linux-kernel


* Jiri Olsa <jolsa@redhat.com> wrote:

> On Wed, May 27, 2015 at 11:59:04PM +0900, Namhyung Kim wrote:
> > Hi Andi,
> > 
> > On Wed, May 27, 2015 at 11:40 PM, Andi Kleen <ak@linux.intel.com> wrote:
> > >> So we build tables of all models in the architecture, and choose
> > >> the matching one when compiling perf, right?  Can't we do that when
> > >> building the tables?  IOW, why don't we check the VFM and discard
> > >> non-matching tables?  Are those non-matching tables also needed?
> > >
> > > We build it for all CPUs in an architecture, not all architectures.
> > > So e.g. for an x86 binary Power is not included, and vice versa.
> > 
> > OK.
> > 
> > > It always includes all CPUs for a given architecture, so it's possible
> > > to use the perf binary on systems other than just the one it was
> > > built on.
> > 
> > So it selects one at run-time, not build-time; good.  But I worry about
> > the size of the Intel tables.  How large are they?  Maybe we can make
> > it dynamically loadable if needed.
> 
> I just compiled Sukadev's new version with Andi's events list,
> and the stripped binary size is:
> 
> [jolsa@krava perf]$ ls -l perf
> -rwxrwxr-x 1 jolsa jolsa 2772640 May 28 13:49 perf
> 
> 
> while perf on Arnaldo's perf/core is:
> 
> [jolsa@krava perf]$ ls -l perf
> -rwxrwxr-x 1 jolsa jolsa 2334816 May 28 13:49 perf
> 
> seems not that bad

It's not bad at all.

Do you have a Git tree URI where I could take a look at its current state? A tree 
that has as many of these patches integrated as possible would be nice.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-28 12:09           ` Ingo Molnar
@ 2015-05-28 13:07             ` Ingo Molnar
  2015-05-28 15:39               ` Andi Kleen
  0 siblings, 1 reply; 32+ messages in thread
From: Ingo Molnar @ 2015-05-28 13:07 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Namhyung Kim, Andi Kleen, Sukadev Bhattiprolu, Ingo Molnar,
	Michael Ellerman, Arnaldo Carvalho de Melo, Paul Mackerras,
	linuxppc-dev, linux-kernel


* Ingo Molnar <mingo@kernel.org> wrote:

> 
> * Jiri Olsa <jolsa@redhat.com> wrote:
> 
> > On Wed, May 27, 2015 at 11:59:04PM +0900, Namhyung Kim wrote:
> > > Hi Andi,
> > > 
> > > On Wed, May 27, 2015 at 11:40 PM, Andi Kleen <ak@linux.intel.com> wrote:
> > > >> So we build tables of all models in the architecture, and choose
> > > >> the matching one when compiling perf, right?  Can't we do that when
> > > >> building the tables?  IOW, why don't we check the VFM and discard
> > > >> non-matching tables?  Are those non-matching tables also needed?
> > > >
> > > > We build it for all CPUs in an architecture, not all architectures.
> > > > So e.g. for an x86 binary Power is not included, and vice versa.
> > > 
> > > OK.
> > > 
> > > > It always includes all CPUs for a given architecture, so it's possible
> > > > to use the perf binary on systems other than just the one it was
> > > > built on.
> > > 
> > > So it selects one at run-time, not build-time; good.  But I worry about
> > > the size of the Intel tables.  How large are they?  Maybe we can make
> > > it dynamically loadable if needed.
> > 
> > I just compiled Sukadev's new version with Andi's events list,
> > and the stripped binary size is:
> > 
> > [jolsa@krava perf]$ ls -l perf
> > -rwxrwxr-x 1 jolsa jolsa 2772640 May 28 13:49 perf
> > 
> > 
> > while perf on Arnaldo's perf/core is:
> > 
> > [jolsa@krava perf]$ ls -l perf
> > -rwxrwxr-x 1 jolsa jolsa 2334816 May 28 13:49 perf
> > 
> > seems not that bad
> 
> It's not bad at all.
> 
> Do you have a Git tree URI where I could take a look at its current state? A 
> tree that has as many of these patches integrated as possible would be nice.

A couple of observations:

1)

The x86 JSON files are unnecessarily large, for no good reason. For example:

 triton:~/tip/tools/perf/pmu-events/arch/x86> grep -h EdgeDetect * | sort | uniq -c
   5534         "EdgeDetect": "0",
     57         "EdgeDetect": "1",

it's ridiculous to repeat "EdgeDetect": "0" more than 5 thousand times, just so 
that in 57 cases we can say '1'. Those lines should be omitted, and the default 
value should be 0.

This would reduce the source code line count of the JSON files by 40% already:

 triton:~/tip/tools/perf/pmu-events/arch/x86> grep ': "0",' * | wc -l
 42127
 triton:~/tip/tools/perf/pmu-events/arch/x86> cat * | wc -l
 103702

And no, I don't care if manufacturers release crappy JSON files - they need to be 
fixed/stripped before being applied to our source tree.
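
As a sketch of the 'omit the default' idea (not the actual jevents code; the
event string used below is just a placeholder), the converter would only emit
an edge term when the field is present and set:

/*
 * Sketch only: append ",edge=1" to the generated perf event string only
 * when EdgeDetect is present and set to "1", so the JSON files can drop
 * the thousands of "EdgeDetect": "0" lines entirely.
 */
#include <stdio.h>
#include <string.h>

static void append_edge(char *buf, size_t size, const char *edge_detect)
{
	/* Absent or "0" means the default; emit nothing in that case. */
	if (edge_detect && strcmp(edge_detect, "1") == 0) {
		size_t len = strlen(buf);

		snprintf(buf + len, size - len, ",edge=1");
	}
}

int main(void)
{
	char ev[64] = "event=0x3c,umask=0x1";	/* placeholder encoding */

	append_edge(ev, sizeof(ev), NULL);	/* field omitted in JSON */
	printf("%s\n", ev);			/* unchanged */

	append_edge(ev, sizeof(ev), "1");
	printf("%s\n", ev);			/* ...,edge=1 */
	return 0;
}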

2)

Also, the JSON files should carry more high level structure than they do today. 
Let's take SandyBridge_core.json as an example: it defines 386 events, but they 
are all in a 'flat' hierarchy, which is almost impossible for all but the most 
expert users to get an overview of.

So instead of this flat structure, there should at minimum be broad categorization 
of the various parts of the hardware they relate to: whether they relate to the 
branch predictor, memory caches, TLB caches, memory ops, offcore, decoders, 
execution units, FPU ops, etc., etc. - so that they can be queried via 'perf 
list'.

We don't just want to import the unstructured mess that these event files are - 
we want to turn them into real structure. We can still keep the messy vendor 
names, like IDQ.DSB_CYCLES, but we want to impose structure as well.

3)

There should be good 'perf list' visualization for these events: grouping, 
individual names, with a good interface to query details if needed. I.e. it should 
be possible to browse and discover events relevant to the CPU the tool is 
executing on.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-28 13:07             ` Ingo Molnar
@ 2015-05-28 15:39               ` Andi Kleen
  2015-05-29  7:27                 ` Ingo Molnar
  0 siblings, 1 reply; 32+ messages in thread
From: Andi Kleen @ 2015-05-28 15:39 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Jiri Olsa, Namhyung Kim, Sukadev Bhattiprolu, Ingo Molnar,
	Michael Ellerman, Arnaldo Carvalho de Melo, Paul Mackerras,
	linuxppc-dev, linux-kernel

> So instead of this flat structure, there should at minimum be broad categorization 
> of the various parts of the hardware they relate to: whether they relate to the 
> branch predictor, memory caches, TLB caches, memory ops, offcore, decoders, 
> execution units, FPU ops, etc., etc. - so that they can be queried via 'perf 
> list'.

The categorization is generally on the stem name, which already works fine with
the existing perf list wildcard support. So, for example, if you only want
branches:

perf list br*
...
  br_inst_exec.all_branches                         
       [Speculative and retired branches]
  br_inst_exec.all_conditional                      
       [Speculative and retired macro-conditional branches]
  br_inst_exec.all_direct_jmp                       
       [Speculative and retired macro-unconditional branches excluding calls and indirects]
  br_inst_exec.all_direct_near_call                 
       [Speculative and retired direct near calls]
  br_inst_exec.all_indirect_jump_non_call_ret       
       [Speculative and retired indirect branches excluding calls and returns]
  br_inst_exec.all_indirect_near_return             
       [Speculative and retired indirect return branches]
...

Or mid level cache events:

perf list l2*
...
  l2_l1d_wb_rqsts.all                               
       [Not rejected writebacks from L1D to L2 cache lines in any state]
  l2_l1d_wb_rqsts.hit_e                             
       [Not rejected writebacks from L1D to L2 cache lines in E state]
  l2_l1d_wb_rqsts.hit_m                             
       [Not rejected writebacks from L1D to L2 cache lines in M state]
  l2_l1d_wb_rqsts.miss                              
       [Count the number of modified Lines evicted from L1 and missed L2. (Non-rejected WBs from the DCU.)]
  l2_lines_in.all                                   
       [L2 cache lines filling L2]
...

There are some exceptions, but generally it works this way.
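
As a stand-alone illustration of that wildcard filtering (perf has its own glob
helper; this sketch just uses POSIX fnmatch(3) on the handful of names quoted
above):

#include <fnmatch.h>
#include <stdio.h>

int main(void)
{
	const char *events[] = {
		"br_inst_exec.all_branches",
		"l2_lines_in.all",
		"uops_issued.any",
	};
	const char *pattern = "br*";		/* as in: perf list br* */
	unsigned int i;

	/* Print only the event names matching the glob pattern. */
	for (i = 0; i < sizeof(events) / sizeof(events[0]); i++)
		if (fnmatch(pattern, events[i], 0) == 0)
			printf("%s\n", events[i]);
	return 0;
}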

The stem could be put into a separate header, but it would seem redundant to me. 

> We don't just want to import the unstructured mess that these event files are - 
> we want to turn them into real structure. We can still keep the messy vendor 
> names, like IDQ.DSB_CYCLES, but we want to impose structure as well.

The vendor names directly map to the micro architecture, which is the whole
point of the events. IDQ is a part of the CPU, and is described in the
CPU manuals. One of the main motivations for adding event lists is to make
perf match that documentation.

> 
> 3)
> 
> There should be good 'perf list' visualization for these events: grouping, 
> individual names, with a good interface to query details if needed. I.e. it should 
> be possible to browse and discover events relevant to the CPU the tool is 
> executing on.

I suppose we could change perf list to give the stem names as section headers
to make the long list a bit more readable.

Generally you need to have some knowledge of the micro architecture to use
these events. There is no way around that.

-Andi
-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-28 15:39               ` Andi Kleen
@ 2015-05-29  7:27                 ` Ingo Molnar
  2015-05-31 16:07                   ` Andi Kleen
  0 siblings, 1 reply; 32+ messages in thread
From: Ingo Molnar @ 2015-05-29  7:27 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Jiri Olsa, Namhyung Kim, Sukadev Bhattiprolu, Ingo Molnar,
	Michael Ellerman, Arnaldo Carvalho de Melo, Paul Mackerras,
	linuxppc-dev, linux-kernel


* Andi Kleen <ak@linux.intel.com> wrote:

> > So instead of this flat structure, there should at minimum be broad categorization 
> > of the various parts of the hardware they relate to: whether they relate to the 
> > branch predictor, memory caches, TLB caches, memory ops, offcore, decoders, 
> > execution units, FPU ops, etc., etc. - so that they can be queried via 'perf 
> > list'.
> 
> The categorization is generally on the stem name, which already works fine with 
> the existing perf list wildcard support. So, for example, if you only want branches:
>
> perf list br*
> ...
>   br_inst_exec.all_branches                         
>        [Speculative and retired branches]
>   br_inst_exec.all_conditional                      
>        [Speculative and retired macro-conditional branches]
>   br_inst_exec.all_direct_jmp                       
>        [Speculative and retired macro-unconditional branches excluding calls and indirects]
>   br_inst_exec.all_direct_near_call                 
>        [Speculative and retired direct near calls]
>   br_inst_exec.all_indirect_jump_non_call_ret       
>        [Speculative and retired indirect branches excluding calls and returns]
>   br_inst_exec.all_indirect_near_return             
>        [Speculative and retired indirect return branches]
> ...
> 
> Or mid level cache events:
> 
> perf list l2*
> ...
>   l2_l1d_wb_rqsts.all                               
>        [Not rejected writebacks from L1D to L2 cache lines in any state]
>   l2_l1d_wb_rqsts.hit_e                             
>        [Not rejected writebacks from L1D to L2 cache lines in E state]
>   l2_l1d_wb_rqsts.hit_m                             
>        [Not rejected writebacks from L1D to L2 cache lines in M state]
>   l2_l1d_wb_rqsts.miss                              
>        [Count the number of modified Lines evicted from L1 and missed L2. (Non-rejected WBs from the DCU.)]
>   l2_lines_in.all                                   
>        [L2 cache lines filling L2]
> ...
> 
> There are some exceptions, but generally it works this way.

You are missing my point in several ways:

1)

Firstly, there are _tons_ of 'exceptions' to the 'stem name' grouping, to a 
level that makes it unusable for high level grouping of events.

Here's the 'stem name' histogram on the SandyBridge event list:

  $ grep EventName pmu-events/arch/x86/SandyBridge_core.json  | cut -d\. -f1 | cut -d\" -f4 | cut -d\_ -f1 | sort | uniq -c | sort -n

      1 AGU
      1 BACLEARS
      1 EPT
      1 HW
      1 ICACHE
      1 INSTS
      1 PAGE
      1 ROB
      1 RS
      1 SQ
      2 ARITH
      2 DSB2MITE
      2 ILD
      2 LOAD
      2 LOCK
      2 LONGEST
      2 MISALIGN
      2 SIMD
      2 TLB
      3 CPL
      3 DSB
      3 INST
      3 INT
      3 LSD
      3 MACHINE
      4 CPU
      4 OTHER
      4 PARTIAL
      5 CYCLE
      5 ITLB
      6 LD
      7 L1D
      8 DTLB
     10 FP
     12 RESOURCE
     21 UOPS
     24 IDQ
     25 MEM
     37 BR
     37 L2
    131 OFFCORE

Out of 386 events. This grouping has the following severe problems:

  - that's 41 'stem name' groups, way too many as a first-hop high level 
    structure. We want the kind of high level categorization I suggested:
    cache, decoding, branches, execution pipeline, memory events, vector unit 
    events - broad categories that exist in all CPUs and are microarchitecture 
    independent.

  - even these 'stem names' are mostly unstructured and unreadable. The two 
    examples you cited are the best cases, and even those are only borderline 
    readable; they cover less than 20% of all events.

  - the 'stem name' concept is not even used consistently: the names are 
    essentially a random collection of Intel-internal acronyms, which occasionally 
    match up with high level concepts. These vendor-defined names have very poor 
    high level structure.

  - the 'stem names' are totally imbalanced: there's one 'super' category 'stem 
    name', OFFCORE_RESPONSE, with 131 events in it, and then there are tiny 
    groups in the list above. Not well suited to getting a good overview of what 
    measurement capabilities the hardware has.

So forget about using 'stem names' as the high level structure. These events have 
no high level structure and we should provide that, instead of dumping 380+ events 
on the unsuspecting user.

2)

Secondly, categorization and a higher level hierarchy should be used to keep the 
list manageable. The fact that you can list just a subset if _you_ know what to 
search for means nothing to a new user trying to discover events.

A simple 'perf list' should list the high level categories by default, with a 
count displayed that shows how many further events are within that category. 
(compacted tree output would be usable as well.)
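
A small sketch of that kind of default summary, assuming each event carried a
high level category string (which, at this point in the thread, the JSON files
do not yet have):

/*
 * Sketch only: count how many events fall into each category so that a
 * default listing could print "<category> [N events]" instead of
 * hundreds of raw event names.  The data below is made up.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Category assigned to each of six imaginary events. */
	static const char * const categories[] = {
		"Cache", "Cache", "Pipeline", "Memory", "Pipeline", "Cache",
	};
	struct { const char *name; int count; } counts[8] = { { NULL, 0 } };
	unsigned int i, j, ncat = 0;

	for (i = 0; i < sizeof(categories) / sizeof(categories[0]); i++) {
		for (j = 0; j < ncat; j++)
			if (strcmp(counts[j].name, categories[i]) == 0)
				break;
		if (j == ncat)
			counts[ncat++].name = categories[i];
		counts[j].count++;
	}

	for (j = 0; j < ncat; j++)
		printf("%-12s [%d events]\n", counts[j].name, counts[j].count);
	return 0;
}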

> The stem could be put into a separate header, but it would seem redundant to me.

Higher level categories simply don't exist in these names in any usable form, so 
they have to be created. Just redundantly repeating the 'stem name' would be silly, 
as stem names are unusable for the purposes of high level categorization.

> > We don't just want to import the unstructured mess that these event files are 
> > - we want to turn them into real structure. We can still keep the messy vendor 
> > names, like IDQ.DSB_CYCLES, but we want to impose structure as well.
> 
> The vendor names directly map to the micro architecture, which is the whole point 
> of the events. IDQ is a part of the CPU, and is described in the CPU manuals. One 
> of the main motivations for adding event lists is to make perf match that 
> documentation.

Your argument is a logical fallacy: there is absolutely no conflict between 
supporting quirky vendor names and also having good high level structure and 
naming that makes it all accessible to the first-time user.

> > 3)
> > 
> > There should be good 'perf list' visualization for these events: grouping, 
> > individual names, with a good interface to query details if needed. I.e. it 
> > should be possible to browse and discover events relevant to the CPU the tool 
> > is executing on.
> 
> I suppose we could change perf list to give the stem names as section headers to 
> make the long list a bit more readable.

No, the 'stem names' are crap - instead we want to create sensible high level 
categories and categorize the events accordingly; I gave you a few ideas above 
and in the previous mail.

> Generally you need to have some knowledge of the micro architecture to use these 
> events. There is no way around that.

Here your argument again relies on a logical fallacy: there is absolutely no 
conflict between good high level structure and the idea that you need to know 
about CPUs to make sense of hardware events that deal with fine internal details.

Also, you are denying the plain fact that the highest level categories _are_ 
largely microarchitecture independent: can you show me a single modern mainstream 
x86 CPU that doesn't have these broad high level categories:

  - CPU cache
  - memory accesses
  - decoding, branch execution
  - execution pipeline
  - FPU, vector units

?

There's none, and the reason is simple: the high level structure of CPUs is still 
dictated by basic physics, and physics is microarchitecture independent.

Lower level structure will inevitably be microarchitecture and sometimes even 
model specific - but that's absolutely no excuse to not have good high level 
structure.

So these are not difficult concepts at all; please make an honest effort at 
understanding them and responding to them, as properly addressing them is a 
must-have for this patch submission.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file
  2015-05-29  7:27                 ` Ingo Molnar
@ 2015-05-31 16:07                   ` Andi Kleen
  0 siblings, 0 replies; 32+ messages in thread
From: Andi Kleen @ 2015-05-31 16:07 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Jiri Olsa, Namhyung Kim, Sukadev Bhattiprolu, Ingo Molnar,
	Michael Ellerman, Arnaldo Carvalho de Melo, Paul Mackerras,
	linuxppc-dev, linux-kernel


OK, I did some scripting to add these topics you requested to the Intel JSON files,
and changed perf list to group events by them.

I'll redirect any questions on their value to you.  
And I certainly hope this is the last of your "improvements" for now.

The updated event lists are available in

git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc perf/intel-json-files-3

The updated patches are available in 

git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc perf/builtin-json-6

Also posted separately.

The output looks like this:

% perf list
...
Cache:
  l1d.replacement                                   
       [L1D data line replacements]
  l1d_pend_miss.pending                             
       [L1D miss oustandings duration in cycles]
  l1d_pend_miss.pending_cycles                      
       [Cycles with L1D load Misses outstanding]
...
Floating point:
  fp_assist.any                                     
       [Cycles with any input/output SSE or FP assist]
  fp_assist.simd_input                              
       [Number of SIMD FP assists due to input values]
  fp_assist.simd_output                             
       [Number of SIMD FP assists due to Output values]
...
Memory:
  machine_clears.memory_ordering                    
       [Counts the number of machine clears due to memory order conflicts]
  mem_trans_retired.load_latency_gt_128             
       [Loads with latency value being above 128 (Must be precise)]
  mem_trans_retired.load_latency_gt_16              
       [Loads with latency value being above 16 (Must be precise)]
...
Pipeline:
  arith.fpu_div                                     
       [Divide operations executed]
  arith.fpu_div_active                              
       [Cycles when divider is busy executing divide operations]
  baclears.any                                      
       [Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct
        prediction and this is corrected by other branch handling mechanisms at the front end]
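
As a stand-alone sketch of how such grouping can be done (not the actual perf
patch): carry a topic string per event, sort on it, and print a header line
whenever the topic changes. The sample entries below are taken from the output
above.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ev {
	const char *topic;
	const char *name;
	const char *desc;
};

/* Sort by topic first, then by event name within a topic. */
static int cmp_topic(const void *a, const void *b)
{
	const struct ev *ea = a, *eb = b;
	int ret = strcmp(ea->topic, eb->topic);

	return ret ? ret : strcmp(ea->name, eb->name);
}

int main(void)
{
	struct ev evs[] = {
		{ "Pipeline", "arith.fpu_div", "Divide operations executed" },
		{ "Cache", "l1d.replacement", "L1D data line replacements" },
		{ "Floating point", "fp_assist.any",
		  "Cycles with any input/output SSE or FP assist" },
	};
	const char *cur = "";
	unsigned int i;

	qsort(evs, sizeof(evs) / sizeof(evs[0]), sizeof(evs[0]), cmp_topic);

	for (i = 0; i < sizeof(evs) / sizeof(evs[0]); i++) {
		/* Emit a section header whenever the topic changes. */
		if (strcmp(cur, evs[i].topic)) {
			cur = evs[i].topic;
			printf("%s:\n", cur);
		}
		printf("  %-50s\n       [%s]\n", evs[i].name, evs[i].desc);
	}
	return 0;
}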


-Andi

P.S.: You may want to look up the definition of logical fallacy on Wikipedia.

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2015-05-31 16:07 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-20  0:02 [PATCH 0/4] perf: Add support for PMU events in JSON format Sukadev Bhattiprolu
2015-05-20  0:02 ` [PATCH 1/4] perf: Add jsmn `jasmine' JSON parser Sukadev Bhattiprolu
2015-05-20  0:02 ` [PATCH 2/4] perf: jevents: Program to convert JSON file to C style file Sukadev Bhattiprolu
2015-05-22 14:56   ` Jiri Olsa
2015-05-22 15:58     ` Sukadev Bhattiprolu
2015-05-22 17:33       ` Jiri Olsa
2015-05-22 18:01       ` Andi Kleen
2015-05-22 18:09         ` Sukadev Bhattiprolu
2015-05-22 21:28           ` Andi Kleen
2015-05-22 14:56   ` Jiri Olsa
2015-05-22 17:25     ` Sukadev Bhattiprolu
2015-05-27 13:54   ` Namhyung Kim
2015-05-27 14:40     ` Andi Kleen
2015-05-27 14:59       ` Namhyung Kim
2015-05-28 11:52         ` Jiri Olsa
2015-05-28 12:09           ` Ingo Molnar
2015-05-28 13:07             ` Ingo Molnar
2015-05-28 15:39               ` Andi Kleen
2015-05-29  7:27                 ` Ingo Molnar
2015-05-31 16:07                   ` Andi Kleen
2015-05-20  0:02 ` [PATCH 3/4] perf: Use pmu_events_map table to create event aliases Sukadev Bhattiprolu
2015-05-20 23:58   ` Andi Kleen
2015-05-21  0:19     ` Sukadev Bhattiprolu
2015-05-21  2:56       ` Andi Kleen
2015-05-21  5:02         ` Sukadev Bhattiprolu
2015-05-21 18:50           ` Andi Kleen
2015-05-20  0:02 ` [PATCH 4/4] perf: Add power8 PMU events in JSON format Sukadev Bhattiprolu
2015-05-27 13:59   ` Namhyung Kim
2015-05-27 14:41     ` Andi Kleen
2015-05-27 15:01       ` Namhyung Kim
2015-05-27 16:24         ` Andi Kleen
2015-05-27 20:24           ` Sukadev Bhattiprolu
