* [PATCH 00/10] coresight: Add new API to allocate trace source ID values
@ 2022-03-08 20:49 ` Mike Leach
  0 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

The current method for allocating trace source ID values is to use a fixed
algorithm for CPU-based sources of (cpu_num * 2 + 0x10). The STM is
allocated the fixed ID 0x1.

This fixed algorithm is used in both the CoreSight driver code, and by
perf when writing the trace metadata in the AUXTRACE_INFO record.

This method needs replacing because:
1. It uses the available IDs inefficiently.
2. It does not scale to larger systems with many cores; the algorithm has
no upper bound, so it will generate invalid trace IDs for cpu number > 44.

Additionally, a requirement to allocate extra system IDs has been seen on
some systems.

This patch set introduces an API that allocates trace IDs dynamically.

Architecturally reserved IDs are never allocated, and the system is
limited to allocating only valid IDs.

Each of the current trace sources - ETM3.x, ETM4.x and STM - is updated to
use the new API.

perf handling is changed so that the ID associated with the CPU is read
from sysfs. The ID allocator is notified when perf events start and stop,
so CPU-based IDs are kept constant throughout any perf session.
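
For illustration, the session start/stop notifications added later in the
series are expected to bracket a perf cs_etm session roughly as below; the
function shown is a simplified sketch, not the actual perf layer call site:

  #include "coresight-trace-id.h"

  /* illustrative only - the real calls sit in the cs_etm perf code */
  static void example_perf_cs_etm_session(void)
  {
          coresight_trace_id_perf_start();   /* mark a session as active */

          /*
           * sources get/put their trace IDs while the event runs; IDs put
           * here are only marked as 'pending release'
           */

          coresight_trace_id_perf_stop();    /* last stop frees pending IDs */
  }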

For the ETMx.x devices, IDs are allocated on the following events:
a) When using sysfs, an ID is allocated on hardware enable, and freed when
the sysfs reset is written.
b) When using perf, an ID is allocated on hardware enable, and freed on
hardware disable.

In both cases, the ID is also allocated when sysfs is read to get the
current trace ID. This ensures that consistent decode metadata can be
extracted from the system when this read occurs before device enable.

Note: This patchset breaks backward compatibility for perf record.
Because the method for generating the AUXTRACE_INFO metadata has changed,
using an older version of perf record will result in metadata that does
not match the trace IDs used in the recorded trace data. This mismatch
will cause subsequent decode to fail. Older versions of perf will still be
able to decode data generated by the updated system.


Applies to coresight/next [b54f53bc11a5]
Tested on DB410c

Mike Leach (10):
  coresight: trace-id: Add API to dynamically assign trace ID values
  coresight: trace-id: Set up source trace ID map for system
  coresight: stm: Update STM driver to use Trace ID api
  coresight: etm4x: Use trace ID API to dynamically allocate trace ID
  coresight: etm3x: Use trace ID API to allocate IDs
  coresight: perf: traceid: Add perf notifiers for trace ID
  perf: cs-etm: Update event to read trace ID from sysfs
  coresight: Remove legacy Trace ID allocation mechanism
  coresight: etmX.X: stm: Remove unused legacy source trace ID ops
  coresight: trace-id: Add debug & test macros to trace id allocation

 drivers/hwtracing/coresight/Makefile          |   2 +-
 drivers/hwtracing/coresight/coresight-core.c  |  64 ++---
 .../hwtracing/coresight/coresight-etm-perf.c  |  16 +-
 drivers/hwtracing/coresight/coresight-etm.h   |   3 +-
 .../coresight/coresight-etm3x-core.c          |  93 ++++---
 .../coresight/coresight-etm3x-sysfs.c         |  28 +-
 .../coresight/coresight-etm4x-core.c          |  63 ++++-
 .../coresight/coresight-etm4x-sysfs.c         |  32 ++-
 drivers/hwtracing/coresight/coresight-etm4x.h |   3 +
 drivers/hwtracing/coresight/coresight-priv.h  |   1 +
 drivers/hwtracing/coresight/coresight-stm.c   |  49 +---
 .../hwtracing/coresight/coresight-trace-id.c  | 255 ++++++++++++++++++
 .../hwtracing/coresight/coresight-trace-id.h  |  69 +++++
 include/linux/coresight-pmu.h                 |  12 -
 include/linux/coresight.h                     |   3 -
 tools/perf/arch/arm/util/cs-etm.c             |  12 +-
 16 files changed, 530 insertions(+), 175 deletions(-)
 create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
 create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h

-- 
2.17.1



* [PATCH 01/10] coresight: trace-id: Add API to dynamically assign trace ID values
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

The existing mechanism to assign Trace ID values to sources is limited
and does not scale for larger multicore / multi-trace-source systems.

The API introduces functions that reserve IDs based on availability,
represented by a coresight_trace_id_map structure. This records the
used and free IDs in a bitmap.

CPU-bound sources such as ETMs use the coresight_trace_id_get_cpu_id /
coresight_trace_id_put_cpu_id pair of functions. The API records the
ID associated with the CPU. This ensures that the same ID will be
re-used while perf events are active on the CPU. The put_cpu_id function
defers release of the ID until all perf cs_etm sessions are complete.

Non-CPU sources, such as the STM, can use coresight_trace_id_get_system_id /
coresight_trace_id_put_system_id.
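
As a rough usage sketch (not part of this patch), a hypothetical CPU-bound
source driver would pair the calls as below; the example_* names are
illustrative only:

  #include "coresight-trace-id.h"

  /* illustrative map - later patches add a shared system-wide map */
  static struct coresight_trace_id_map example_id_map;

  static void example_source_init(void)
  {
          /* mark reserved IDs as in use and register the map */
          coresight_trace_id_init_id_map(&example_id_map);
  }

  static int example_source_enable(int cpu, u8 *trace_id)
  {
          int id;

          /* allocate a new ID, or return the ID already bound to this cpu */
          id = coresight_trace_id_get_cpu_id(cpu, &example_id_map);
          if (id < 0)
                  return id;

          *trace_id = (u8)id;  /* program into the source's trace ID register */
          return 0;
  }

  static void example_source_disable(int cpu)
  {
          /* release is deferred while a perf cs_etm session is active */
          coresight_trace_id_put_cpu_id(cpu, &example_id_map);
  }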

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 drivers/hwtracing/coresight/Makefile          |   2 +-
 .../hwtracing/coresight/coresight-trace-id.c  | 222 ++++++++++++++++++
 .../hwtracing/coresight/coresight-trace-id.h  |  69 ++++++
 3 files changed, 292 insertions(+), 1 deletion(-)
 create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
 create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h

diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
index b6c4a48140ec..329a0c704b87 100644
--- a/drivers/hwtracing/coresight/Makefile
+++ b/drivers/hwtracing/coresight/Makefile
@@ -6,7 +6,7 @@ obj-$(CONFIG_CORESIGHT) += coresight.o
 coresight-y := coresight-core.o  coresight-etm-perf.o coresight-platform.o \
 		coresight-sysfs.o coresight-syscfg.o coresight-config.o \
 		coresight-cfg-preload.o coresight-cfg-afdo.o \
-		coresight-syscfg-configfs.o
+		coresight-syscfg-configfs.o coresight-trace-id.o
 obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
 coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
 		      coresight-tmc-etr.o
diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
new file mode 100644
index 000000000000..ce6c7d7b55d6
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-trace-id.c
@@ -0,0 +1,222 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022, Linaro Limited, All rights reserved.
+ * Author: Mike Leach <mike.leach@linaro.org>
+ */
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/spinlock.h>
+
+#include "coresight-trace-id.h"
+
+/* need to keep data on ids & association with cpus. */
+struct cpu_id_info {
+	int id;
+	bool pend_rel;
+};
+
+/* maintain a record of the current mapping of cpu IDs */
+static DEFINE_PER_CPU(struct cpu_id_info, cpu_ids);
+
+/* a list of currently used id_maps */
+static LIST_HEAD(id_map_list);
+
+/* perf session active flag */
+static int perf_cs_etm_session_active;
+
+/* lock to protect id_map list and cpu data  */
+static DEFINE_SPINLOCK(id_map_lock);
+
+/* ID 0 is reserved */
+#define CORESIGHT_TRACE_ID_RES_0 0
+
+/* ID 0x70 onwards are reserved */
+#define CORESIGHT_TRACE_ID_RES_RANGE_LO 0x70
+#define CORESIGHT_TRACE_ID_RES_RANGE_HI 0x7F
+
+#define IS_VALID_ID(id)	\
+	((id > CORESIGHT_TRACE_ID_RES_0) && (id < CORESIGHT_TRACE_ID_RES_RANGE_LO))
+
+static void coresight_trace_id_set_inuse(int id, struct coresight_trace_id_map *id_map)
+{
+	if (IS_VALID_ID(id))
+		set_bit(id, id_map->avail_ids);
+}
+
+static void coresight_trace_id_clear_inuse(int id, struct coresight_trace_id_map *id_map)
+{
+	if (IS_VALID_ID(id))
+		clear_bit(id, id_map->avail_ids);
+}
+
+static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_map *id_map)
+{
+	if (IS_VALID_ID(id))
+		set_bit(id, id_map->pend_rel_ids);
+}
+
+static void coresight_trace_id_clear_pend_rel(int id, struct coresight_trace_id_map *id_map)
+{
+	if (IS_VALID_ID(id))
+		clear_bit(id, id_map->pend_rel_ids);
+}
+
+static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
+{
+	int id;
+
+	id = find_first_zero_bit(id_map->avail_ids, CORESIGHT_TRACE_IDS_MAX);
+	if (id >= CORESIGHT_TRACE_IDS_MAX)
+		id = -EINVAL;
+	return id;
+}
+
+/* release all pending IDs for all current maps & clear CPU associations */
+static void coresight_trace_id_release_all_pending(void)
+{
+	struct coresight_trace_id_map *id_map;
+	int cpu, bit;
+
+	list_for_each_entry(id_map, &id_map_list, node) {
+		for_each_set_bit(bit, id_map->pend_rel_ids, CORESIGHT_TRACE_IDS_MAX) {
+			clear_bit(bit, id_map->avail_ids);
+			clear_bit(bit, id_map->pend_rel_ids);
+		}
+	}
+
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(cpu_ids, cpu).pend_rel) {
+			per_cpu(cpu_ids, cpu).pend_rel = false;
+			per_cpu(cpu_ids, cpu).id = 0;
+		}
+	}
+}
+
+int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
+{
+	unsigned long flags;
+	int id;
+
+	spin_lock_irqsave(&id_map_lock, flags);
+	id = per_cpu(cpu_ids, cpu).id;
+	if (!id) {
+		id = coresight_trace_id_find_new_id(id_map);
+		if (id < 0)
+			goto get_cpu_id_out;
+	}
+
+	per_cpu(cpu_ids, cpu).id = id;
+	per_cpu(cpu_ids, cpu).pend_rel = false;
+	coresight_trace_id_set_inuse(id, id_map);
+	coresight_trace_id_clear_pend_rel(id, id_map);
+
+get_cpu_id_out:
+	spin_unlock_irqrestore(&id_map_lock, flags);
+	return id;
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
+
+void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
+{
+	unsigned long flags;
+	int id;
+
+	spin_lock_irqsave(&id_map_lock, flags);
+	id = per_cpu(cpu_ids, cpu).id;
+	if (!id)
+		goto put_cpu_id_out;
+
+	if (perf_cs_etm_session_active) {
+		/* set release at pending if perf still active */
+		coresight_trace_id_set_pend_rel(id, id_map);
+		per_cpu(cpu_ids, cpu).pend_rel = true;
+	} else {
+		/* otherwise clear id */
+		coresight_trace_id_clear_inuse(id, id_map);
+		per_cpu(cpu_ids, cpu).id = 0;
+	}
+
+ put_cpu_id_out:
+	spin_unlock_irqrestore(&id_map_lock, flags);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
+
+int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
+{
+	unsigned long flags;
+	int id;
+
+	spin_lock_irqsave(&id_map_lock, flags);
+	id = coresight_trace_id_find_new_id(id_map);
+	if (id > 0)
+		coresight_trace_id_set_inuse(id, id_map);
+	spin_unlock_irqrestore(&id_map_lock, flags);
+
+	return id;
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
+
+void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&id_map_lock, flags);
+	coresight_trace_id_clear_inuse(id, id_map);
+	spin_unlock_irqrestore(&id_map_lock, flags);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
+
+void coresight_trace_id_perf_start(void)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&id_map_lock, flags);
+	perf_cs_etm_session_active++;
+	spin_unlock_irqrestore(&id_map_lock, flags);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
+
+void coresight_trace_id_perf_stop(void)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&id_map_lock, flags);
+	perf_cs_etm_session_active--;
+	if (!perf_cs_etm_session_active)
+		coresight_trace_id_release_all_pending();
+	spin_unlock_irqrestore(&id_map_lock, flags);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
+
+void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map)
+{
+	unsigned long flags;
+	int bit;
+
+	/* set all reserved bits as in-use */
+	set_bit(CORESIGHT_TRACE_ID_RES_0, id_map->avail_ids);
+	for (bit = CORESIGHT_TRACE_ID_RES_RANGE_LO;
+	     bit <= CORESIGHT_TRACE_ID_RES_RANGE_HI; bit++)
+		set_bit(bit, id_map->avail_ids);
+
+	spin_lock_irqsave(&id_map_lock, flags);
+
+	/* add id_map to the list */
+	list_add(&id_map->node, &id_map_list);
+
+	spin_unlock_irqrestore(&id_map_lock, flags);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_init_id_map);
+
+void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&id_map_lock, flags);
+
+	/* remove id_map from list */
+	list_del(&id_map->node);
+
+	spin_unlock_irqrestore(&id_map_lock, flags);
+}
+EXPORT_SYMBOL_GPL(coresight_trace_id_release_id_map);
diff --git a/drivers/hwtracing/coresight/coresight-trace-id.h b/drivers/hwtracing/coresight/coresight-trace-id.h
new file mode 100644
index 000000000000..01db2441cee6
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-trace-id.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright(C) 2022 Linaro Limited. All rights reserved.
+ * Author: Mike Leach <mike.leach@linaro.org>
+ */
+
+#ifndef _CORESIGHT_TRACE_ID_H
+#define _CORESIGHT_TRACE_ID_H
+
+/*
+ * Coresight trace ID allocation API
+ *
+ * With multi-cpu systems, and additional trace sources, a scalable
+ * trace ID reservation system is required.
+ *
+ * The system will allocate IDs on demand, and allow them to be
+ * released when done.
+ *
+ * In order to ensure that a consistent cpu / ID matching is maintained
+ * throughout a perf cs_etm event session - a session in progress flag will
+ * be maintained, and released IDs not cleared until the perf session is
+ * complete. This allows the same CPU to be re-allocated its prior ID.
+ *
+ *
+ * Trace ID maps will be created and initialised to prevent architecturally
+ * reserved IDs from being allocated.
+ *
+ * API permits multiple maps to be maintained - for large systems where
+ * different sets of cpus trace into different independent sinks.
+ */
+
+#include <linux/bitops.h>
+#include <linux/types.h>
+
+
+/* architecturally we have 128 IDs some of which are reserved */
+#define CORESIGHT_TRACE_IDS_MAX 128
+
+/**
+ * Trace ID map.
+ *
+ * @avail_ids:	Bitmap to register available (bit = 0) and in use (bit = 1) IDs.
+ *		Initialised so that the reserved IDs are permanently marked as in use.
+ * @pend_rel_ids: CPU IDs that have been released by the trace source but not yet marked
+ *                as available, to allow re-allocation to the same CPU during a perf session.
+ * @node:	List entry to add to list of managed trace id maps.
+ */
+struct coresight_trace_id_map {
+	DECLARE_BITMAP(avail_ids, CORESIGHT_TRACE_IDS_MAX);
+	DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
+	struct list_head node;
+};
+
+/* Allocate and release IDs */
+int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
+int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map);
+void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
+void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id);
+
+/* initialise ID map - block reserved IDs, add to list of ID maps */
+void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map);
+/* remove ID maps from list of maps */
+void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map);
+
+/* notifiers for perf session start and stop */
+void coresight_trace_id_perf_start(void);
+void coresight_trace_id_perf_stop(void);
+
+#endif /* _CORESIGHT_TRACE_ID_H */
-- 
2.17.1



* [PATCH 02/10] coresight: trace-id: Set up source trace ID map for system
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

Adds a CoreSight trace ID map for the entire system.

All source drivers will use this map to allocate their trace IDs.

The check that sources have unique IDs has been removed - uniqueness is now
guaranteed by the ID allocation mechanism, and such a check is inappropriate
where multiple ID maps are in use in larger systems.
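
For reference, a minimal sketch of how a source driver is expected to pick
up the system-wide map added here; the helper names are illustrative and
the real driver conversions follow in later patches:

  #include "coresight-priv.h"
  #include "coresight-trace-id.h"

  /* illustrative: reserve a trace ID from the single system-wide map */
  static int example_alloc_system_trace_id(u8 *trace_id)
  {
          int id;

          id = coresight_trace_id_get_system_id(coresight_get_trace_id_map());
          if (id < 0)
                  return id;

          *trace_id = (u8)id;
          return 0;
  }

  /* and release it again, e.g. on device remove */
  static void example_free_system_trace_id(u8 trace_id)
  {
          coresight_trace_id_put_system_id(coresight_get_trace_id_map(),
                                           trace_id);
  }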

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 drivers/hwtracing/coresight/coresight-core.c | 64 ++++++--------------
 drivers/hwtracing/coresight/coresight-priv.h |  1 +
 2 files changed, 20 insertions(+), 45 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
index af00dca8d1ac..bbf415c252f9 100644
--- a/drivers/hwtracing/coresight/coresight-core.c
+++ b/drivers/hwtracing/coresight/coresight-core.c
@@ -22,6 +22,7 @@
 #include "coresight-etm-perf.h"
 #include "coresight-priv.h"
 #include "coresight-syscfg.h"
+#include "coresight-trace-id.h"
 
 static DEFINE_MUTEX(coresight_mutex);
 static DEFINE_PER_CPU(struct coresight_device *, csdev_sink);
@@ -50,6 +51,19 @@ static DEFINE_PER_CPU(struct list_head *, tracer_path);
  */
 static struct list_head *stm_path;
 
+/*
+ * Set up a global trace ID map.
+ * We may need a per sink ID map in future for larger / multi sink systems.
+ */
+static struct coresight_trace_id_map trace_id_map;
+
+/* Allow drivers to reference ID map when getting trace IDs */
+struct coresight_trace_id_map *coresight_get_trace_id_map(void)
+{
+	return &trace_id_map;
+}
+EXPORT_SYMBOL_GPL(coresight_get_trace_id_map);
+
 /*
  * When losing synchronisation a new barrier packet needs to be inserted at the
  * beginning of the data collected in a buffer.  That way the decoder knows that
@@ -84,45 +98,6 @@ struct coresight_device *coresight_get_percpu_sink(int cpu)
 }
 EXPORT_SYMBOL_GPL(coresight_get_percpu_sink);
 
-static int coresight_id_match(struct device *dev, void *data)
-{
-	int trace_id, i_trace_id;
-	struct coresight_device *csdev, *i_csdev;
-
-	csdev = data;
-	i_csdev = to_coresight_device(dev);
-
-	/*
-	 * No need to care about oneself and components that are not
-	 * sources or not enabled
-	 */
-	if (i_csdev == csdev || !i_csdev->enable ||
-	    i_csdev->type != CORESIGHT_DEV_TYPE_SOURCE)
-		return 0;
-
-	/* Get the source ID for both components */
-	trace_id = source_ops(csdev)->trace_id(csdev);
-	i_trace_id = source_ops(i_csdev)->trace_id(i_csdev);
-
-	/* All you need is one */
-	if (trace_id == i_trace_id)
-		return 1;
-
-	return 0;
-}
-
-static int coresight_source_is_unique(struct coresight_device *csdev)
-{
-	int trace_id = source_ops(csdev)->trace_id(csdev);
-
-	/* this shouldn't happen */
-	if (trace_id < 0)
-		return 0;
-
-	return !bus_for_each_dev(&coresight_bustype, NULL,
-				 csdev, coresight_id_match);
-}
-
 static int coresight_find_link_inport(struct coresight_device *csdev,
 				      struct coresight_device *parent)
 {
@@ -431,12 +406,6 @@ static int coresight_enable_source(struct coresight_device *csdev, u32 mode)
 {
 	int ret;
 
-	if (!coresight_source_is_unique(csdev)) {
-		dev_warn(&csdev->dev, "traceID %d not unique\n",
-			 source_ops(csdev)->trace_id(csdev));
-		return -EINVAL;
-	}
-
 	if (!csdev->enable) {
 		if (source_ops(csdev)->enable) {
 			ret = coresight_control_assoc_ectdev(csdev, true);
@@ -1763,11 +1732,15 @@ static int __init coresight_init(void)
 	if (ret)
 		goto exit_bus_unregister;
 
+	/* initialise the trace ID map */
+	coresight_trace_id_init_id_map(coresight_get_trace_id_map());
+
 	/* initialise the coresight syscfg API */
 	ret = cscfg_init();
 	if (!ret)
 		return 0;
 
+	coresight_trace_id_release_id_map(coresight_get_trace_id_map());
 	etm_perf_exit();
 exit_bus_unregister:
 	bus_unregister(&coresight_bustype);
@@ -1776,6 +1749,7 @@ static int __init coresight_init(void)
 
 static void __exit coresight_exit(void)
 {
+	coresight_trace_id_release_id_map(coresight_get_trace_id_map());
 	cscfg_exit();
 	etm_perf_exit();
 	bus_unregister(&coresight_bustype);
diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
index ff1dd2092ac5..d3032ac545c1 100644
--- a/drivers/hwtracing/coresight/coresight-priv.h
+++ b/drivers/hwtracing/coresight/coresight-priv.h
@@ -166,6 +166,7 @@ int coresight_make_links(struct coresight_device *orig,
 			 struct coresight_device *target);
 void coresight_remove_links(struct coresight_device *orig,
 			    struct coresight_connection *conn);
+struct coresight_trace_id_map *coresight_get_trace_id_map(void);
 
 #if IS_ENABLED(CONFIG_CORESIGHT_SOURCE_ETM3X)
 extern int etm_readl_cp14(u32 off, unsigned int *val);
-- 
2.17.1



* [PATCH 03/10] coresight: stm: Update STM driver to use Trace ID api
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

Updates the STM driver to use the trace ID allocation API.
This uses the _system_id calls to allocate an ID on device probe,
and release it on device remove.

The sysfs access to the STMTRACEIDR register has been changed from RW
to RO. Having this value writable is not appropriate for the new
Trace ID scheme - and had the potential to cause errors in the previous
scheme if values clashed with other sources.

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 drivers/hwtracing/coresight/coresight-stm.c | 41 +++++++--------------
 1 file changed, 14 insertions(+), 27 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
index bb14a3a8a921..6c2c29ae99fe 100644
--- a/drivers/hwtracing/coresight/coresight-stm.c
+++ b/drivers/hwtracing/coresight/coresight-stm.c
@@ -31,6 +31,7 @@
 #include <linux/stm.h>
 
 #include "coresight-priv.h"
+#include "coresight-trace-id.h"
 
 #define STMDMASTARTR			0xc04
 #define STMDMASTOPR			0xc08
@@ -615,24 +616,7 @@ static ssize_t traceid_show(struct device *dev,
 	val = drvdata->traceid;
 	return sprintf(buf, "%#lx\n", val);
 }
-
-static ssize_t traceid_store(struct device *dev,
-			     struct device_attribute *attr,
-			     const char *buf, size_t size)
-{
-	int ret;
-	unsigned long val;
-	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	ret = kstrtoul(buf, 16, &val);
-	if (ret)
-		return ret;
-
-	/* traceid field is 7bit wide on STM32 */
-	drvdata->traceid = val & 0x7f;
-	return size;
-}
-static DEVICE_ATTR_RW(traceid);
+static DEVICE_ATTR_RO(traceid);
 
 #define coresight_stm_reg(name, offset)	\
 	coresight_simple_reg32(struct stm_drvdata, name, offset)
@@ -819,14 +803,6 @@ static void stm_init_default_data(struct stm_drvdata *drvdata)
 	 */
 	drvdata->stmsper = ~0x0;
 
-	/*
-	 * The trace ID value for *ETM* tracers start at CPU_ID * 2 + 0x10 and
-	 * anything equal to or higher than 0x70 is reserved.  Since 0x00 is
-	 * also reserved the STM trace ID needs to be higher than 0x00 and
-	 * lowner than 0x10.
-	 */
-	drvdata->traceid = 0x1;
-
 	/* Set invariant transaction timing on all channels */
 	bitmap_clear(drvdata->chs.guaranteed, 0, drvdata->numsp);
 }
@@ -854,7 +830,7 @@ static void stm_init_generic_data(struct stm_drvdata *drvdata,
 
 static int stm_probe(struct amba_device *adev, const struct amba_id *id)
 {
-	int ret;
+	int ret, trace_id;
 	void __iomem *base;
 	struct device *dev = &adev->dev;
 	struct coresight_platform_data *pdata = NULL;
@@ -938,12 +914,22 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id)
 		goto stm_unregister;
 	}
 
+	trace_id = coresight_trace_id_get_system_id(coresight_get_trace_id_map());
+	if (trace_id < 0) {
+		ret = trace_id;
+		goto cs_unregister;
+	}
+	drvdata->traceid = (u8)trace_id;
+
 	pm_runtime_put(&adev->dev);
 
 	dev_info(&drvdata->csdev->dev, "%s initialized\n",
 		 (char *)coresight_get_uci_data(id));
 	return 0;
 
+cs_unregister:
+	coresight_unregister(drvdata->csdev);
+
 stm_unregister:
 	stm_unregister_device(&drvdata->stm);
 	return ret;
@@ -953,6 +939,7 @@ static void stm_remove(struct amba_device *adev)
 {
 	struct stm_drvdata *drvdata = dev_get_drvdata(&adev->dev);
 
+	coresight_trace_id_put_system_id(coresight_get_trace_id_map(), drvdata->traceid);
 	coresight_unregister(drvdata->csdev);
 
 	stm_unregister_device(&drvdata->stm);
-- 
2.17.1



* [PATCH 04/10] coresight: etm4x: Use trace ID API to dynamically allocate trace ID
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

The trace ID API is now used to allocate trace IDs for ETM4.x / ETE
devices.

For perf sessions, these will be allocated on enable, and released on
disable.

For sysfs sessions, these will be allocated on enable, but only released
on reset. This allows the sysfs session to interrogate the Trace ID used
after the session is over - maintaining functional consistency with the
previous allocation scheme.

The trace ID will also be allocated on read of the mgmt/trctraceid file.
This ensures that if perf or a sysfs user reads this before enabling trace,
the value will be the one used for the trace session.

Trace ID initialisation is removed from the _probe() function.
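
As an aside, a user-space tool can trigger the allocation simply by reading
the sysfs file before enabling trace. A minimal sketch is below; the device
path is an assumption for illustration and varies by system:

  #include <stdio.h>

  /* assumed path for illustration - the device name varies per system */
  #define ETM_TRCTRACEID "/sys/bus/coresight/devices/etm0/mgmt/trctraceid"

  /* returns the allocated trace ID, or -1 on error */
  static int read_trace_id(void)
  {
          unsigned int id;
          FILE *f = fopen(ETM_TRCTRACEID, "r");

          if (!f)
                  return -1;
          if (fscanf(f, "%x", &id) != 1)
                  id = (unsigned int)-1;
          fclose(f);
          return (int)id;  /* the read itself allocates the ID if needed */
  }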

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 .../coresight/coresight-etm4x-core.c          | 63 +++++++++++++++++--
 .../coresight/coresight-etm4x-sysfs.c         | 32 +++++++++-
 drivers/hwtracing/coresight/coresight-etm4x.h |  3 +
 3 files changed, 89 insertions(+), 9 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index 7f416a12000e..aa7ea5ad8b06 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -42,6 +42,7 @@
 #include "coresight-etm4x-cfg.h"
 #include "coresight-self-hosted-trace.h"
 #include "coresight-syscfg.h"
+#include "coresight-trace-id.h"
 
 static int boot_enable;
 module_param(boot_enable, int, 0444);
@@ -234,6 +235,36 @@ static int etm4_trace_id(struct coresight_device *csdev)
 	return drvdata->trcid;
 }
 
+int etm4_read_alloc_trace_id(struct etmv4_drvdata *drvdata)
+{
+	int trace_id;
+
+	/*
+	 * This will allocate a trace ID to the cpu,
+	 * or return the one currently allocated.
+	 */
+	trace_id = coresight_trace_id_get_cpu_id(drvdata->cpu,
+						 coresight_get_trace_id_map());
+	if (trace_id > 0) {
+		spin_lock(&drvdata->spinlock);
+		drvdata->trcid = (u8)trace_id;
+		spin_unlock(&drvdata->spinlock);
+	} else {
+		pr_err("Failed to allocate trace ID for %s on CPU%d\n",
+		       dev_name(&drvdata->csdev->dev), drvdata->cpu);
+	}
+	return trace_id;
+}
+
+void etm4_release_trace_id(struct etmv4_drvdata *drvdata)
+{
+	coresight_trace_id_put_cpu_id(drvdata->cpu,
+				      coresight_get_trace_id_map());
+	spin_lock(&drvdata->spinlock);
+	drvdata->trcid = 0;
+	spin_unlock(&drvdata->spinlock);
+}
+
 struct etm4_enable_arg {
 	struct etmv4_drvdata *drvdata;
 	int rc;
@@ -717,9 +748,18 @@ static int etm4_enable_perf(struct coresight_device *csdev,
 	ret = etm4_parse_event_config(csdev, event);
 	if (ret)
 		goto out;
+
+	/* allocate a trace ID */
+	ret =  etm4_read_alloc_trace_id(drvdata);
+	if (ret < 0)
+		goto out;
+
 	/* And enable it */
 	ret = etm4_enable_hw(drvdata);
 
+	/* failed to enable */
+	if (ret)
+		etm4_release_trace_id(drvdata);
 out:
 	return ret;
 }
@@ -739,6 +779,11 @@ static int etm4_enable_sysfs(struct coresight_device *csdev)
 			return ret;
 	}
 
+	/* allocate a trace ID */
+	ret = etm4_read_alloc_trace_id(drvdata);
+	if (ret < 0)
+		return ret;
+
 	spin_lock(&drvdata->spinlock);
 
 	/*
@@ -756,6 +801,8 @@ static int etm4_enable_sysfs(struct coresight_device *csdev)
 
 	if (!ret)
 		dev_dbg(&csdev->dev, "ETM tracing enabled\n");
+	else
+		etm4_release_trace_id(drvdata);
 	return ret;
 }
 
@@ -883,6 +930,9 @@ static int etm4_disable_perf(struct coresight_device *csdev,
 	/* TRCVICTLR::SSSTATUS, bit[9] */
 	filters->ssstatus = (control & BIT(9));
 
+	/* release trace ID - this may pend release if perf session is still active */
+	etm4_release_trace_id(drvdata);
+
 	return 0;
 }
 
@@ -908,6 +958,13 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
 	spin_unlock(&drvdata->spinlock);
 	cpus_read_unlock();
 
+	/*
+	 * unlike for perf session - we only release trace IDs when resetting
+	 * sysfs. This permits sysfs users to read the trace ID after the trace
+	 * session has completed. This maintains operational behaviour with
+	 * prior trace id allocation method
+	 */
+
 	dev_dbg(&csdev->dev, "ETM tracing disabled\n");
 }
 
@@ -1596,11 +1653,6 @@ static int etm4_dying_cpu(unsigned int cpu)
 	return 0;
 }
 
-static void etm4_init_trace_id(struct etmv4_drvdata *drvdata)
-{
-	drvdata->trcid = coresight_get_trace_id(drvdata->cpu);
-}
-
 static int __etm4_cpu_save(struct etmv4_drvdata *drvdata)
 {
 	int i, ret = 0;
@@ -2005,7 +2057,6 @@ static int etm4_probe(struct device *dev, void __iomem *base, u32 etm_pid)
 	if (!desc.name)
 		return -ENOMEM;
 
-	etm4_init_trace_id(drvdata);
 	etm4_set_default(&drvdata->config);
 
 	pdata = coresight_get_platform_data(dev);
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
index 21687cc1e4e2..bb69a203b833 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
@@ -266,10 +266,11 @@ static ssize_t reset_store(struct device *dev,
 	config->vmid_mask0 = 0x0;
 	config->vmid_mask1 = 0x0;
 
-	drvdata->trcid = drvdata->cpu + 1;
-
 	spin_unlock(&drvdata->spinlock);
 
+	/* for sysfs - only release trace id when resetting */
+	etm4_release_trace_id(drvdata);
+
 	cscfg_csdev_reset_feats(to_coresight_device(dev));
 
 	return size;
@@ -2355,6 +2356,31 @@ static struct attribute *coresight_etmv4_attrs[] = {
 	NULL,
 };
 
+/*
+ * Trace ID allocated dynamically on enable - but also allocate on read
+ * in case sysfs or perf read before enable to ensure consistent metadata
+ * information for trace decode
+ */
+static ssize_t trctraceid_show(struct device *dev,
+			       struct device_attribute *attr,
+			       char *buf)
+{
+	int trace_id;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	trace_id = etm4_read_alloc_trace_id(drvdata);
+	if (trace_id < 0)
+		return trace_id;
+
+	return scnprintf(buf, PAGE_SIZE, "0x%x\n", trace_id);
+}
+
+/* mgmt group uses extended attributes - no standard macro available */
+static struct dev_ext_attribute dev_attr_trctraceid = {
+		__ATTR(trctraceid, 0444, trctraceid_show, NULL),
+		(void *)(unsigned long)TRCTRACEIDR
+};
+
 struct etmv4_reg {
 	struct coresight_device *csdev;
 	u32 offset;
@@ -2491,7 +2517,7 @@ static struct attribute *coresight_etmv4_mgmt_attrs[] = {
 	coresight_etm4x_reg(trcpidr3, TRCPIDR3),
 	coresight_etm4x_reg(trcoslsr, TRCOSLSR),
 	coresight_etm4x_reg(trcconfig, TRCCONFIGR),
-	coresight_etm4x_reg(trctraceid, TRCTRACEIDR),
+	&dev_attr_trctraceid.attr.attr,
 	coresight_etm4x_reg(trcdevarch, TRCDEVARCH),
 	NULL,
 };
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index 3c4d69b096ca..64976a00c839 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -1010,4 +1010,7 @@ static inline bool etm4x_is_ete(struct etmv4_drvdata *drvdata)
 {
 	return drvdata->arch >= ETM_ARCH_ETE;
 }
+
+int etm4_read_alloc_trace_id(struct etmv4_drvdata *drvdata);
+void etm4_release_trace_id(struct etmv4_drvdata *drvdata);
 #endif
-- 
2.17.1
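
For reference, the enable paths added in this patch all follow the same
allocate-then-enable pattern. A condensed, illustrative sketch only (error
handling and locking trimmed, names as in the diff above):

	/* reserve (or look up) a trace ID for this CPU before starting the ETM */
	ret = etm4_read_alloc_trace_id(drvdata);
	if (ret < 0)
		return ret;

	ret = etm4_enable_hw(drvdata);
	if (ret)
		/* hardware enable failed - drop the reservation again */
		etm4_release_trace_id(drvdata);
	return ret;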


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH 05/10] coresight: etm3x: Use trace ID API to allocate IDs
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

Use the TraceID API to allocate ETM trace IDs dynamically.

As with etm4x, we allocate an ID on enable and release it on disable for
perf, and allocate on enable and release on reset for sysfs.

Additionally, we allocate on sysfs file read, as both perf and sysfs
can read the ID before enabling the hardware.

Remove the sysfs option to write the trace ID, which is inconsistent with
both the dynamic allocation method and the fixed allocation method
previously used.
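
As an illustrative sketch only (simplified from the diff below, with locking
and error handling trimmed), the intended call pattern is:

	/*
	 * The sysfs read and the hardware enable paths share the same helper,
	 * so the ID reported before enable matches the one used for the session.
	 */
	trace_id = etm_read_alloc_trace_id(drvdata);	/* allocate, or return current */

	ret = etm_enable_hw(drvdata);			/* programs drvdata->traceid */

	etm_release_trace_id(drvdata);			/* on perf disable / sysfs reset */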

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 drivers/hwtracing/coresight/coresight-etm.h   |  2 +
 .../coresight/coresight-etm3x-core.c          | 72 ++++++++++++++++---
 .../coresight/coresight-etm3x-sysfs.c         | 28 +++-----
 3 files changed, 71 insertions(+), 31 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-etm.h b/drivers/hwtracing/coresight/coresight-etm.h
index f3ab96eaf44e..3667428d38b6 100644
--- a/drivers/hwtracing/coresight/coresight-etm.h
+++ b/drivers/hwtracing/coresight/coresight-etm.h
@@ -287,4 +287,6 @@ int etm_get_trace_id(struct etm_drvdata *drvdata);
 void etm_set_default(struct etm_config *config);
 void etm_config_trace_mode(struct etm_config *config);
 struct etm_config *get_etm_config(struct etm_drvdata *drvdata);
+int etm_read_alloc_trace_id(struct etm_drvdata *drvdata);
+void etm_release_trace_id(struct etm_drvdata *drvdata);
 #endif
diff --git a/drivers/hwtracing/coresight/coresight-etm3x-core.c b/drivers/hwtracing/coresight/coresight-etm3x-core.c
index 7d413ba8b823..98213503bd09 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x-core.c
@@ -32,6 +32,7 @@
 
 #include "coresight-etm.h"
 #include "coresight-etm-perf.h"
+#include "coresight-trace-id.h"
 
 /*
  * Not really modular but using module_param is the easiest way to
@@ -490,18 +491,57 @@ static int etm_trace_id(struct coresight_device *csdev)
 	return etm_get_trace_id(drvdata);
 }
 
+int etm_read_alloc_trace_id(struct etm_drvdata *drvdata)
+{
+	int trace_id;
+
+	/*
+	 * This will allocate a trace ID to the cpu,
+	 * or return the one currently allocated.
+	 */
+	trace_id = coresight_trace_id_get_cpu_id(drvdata->cpu,
+						 coresight_get_trace_id_map());
+	if (trace_id > 0) {
+		spin_lock(&drvdata->spinlock);
+		drvdata->traceid = (u8)trace_id;
+		spin_unlock(&drvdata->spinlock);
+	} else {
+		pr_err("Failed to allocate trace ID for %s on CPU%d\n",
+		       dev_name(&drvdata->csdev->dev), drvdata->cpu);
+	}
+	return trace_id;
+}
+
+void etm_release_trace_id(struct etm_drvdata *drvdata)
+{
+	coresight_trace_id_put_cpu_id(drvdata->cpu,
+				      coresight_get_trace_id_map());
+	spin_lock(&drvdata->spinlock);
+	drvdata->traceid = 0;
+	spin_unlock(&drvdata->spinlock);
+}
+
 static int etm_enable_perf(struct coresight_device *csdev,
 			   struct perf_event *event)
 {
 	struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+	int ret;
 
 	if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
 		return -EINVAL;
 
+	ret = etm_read_alloc_trace_id(drvdata);
+	if (ret < 0)
+		return ret;
+
 	/* Configure the tracer based on the session's specifics */
 	etm_parse_event_config(drvdata, event);
 	/* And enable it */
-	return etm_enable_hw(drvdata);
+	ret = etm_enable_hw(drvdata);
+
+	if (ret)
+		etm_release_trace_id(drvdata);
+	return ret;
 }
 
 static int etm_enable_sysfs(struct coresight_device *csdev)
@@ -510,6 +550,10 @@ static int etm_enable_sysfs(struct coresight_device *csdev)
 	struct etm_enable_arg arg = { };
 	int ret;
 
+	ret = etm_read_alloc_trace_id(drvdata);
+	if (ret < 0)
+		return ret;
+
 	spin_lock(&drvdata->spinlock);
 
 	/*
@@ -532,6 +576,8 @@ static int etm_enable_sysfs(struct coresight_device *csdev)
 
 	if (!ret)
 		dev_dbg(&csdev->dev, "ETM tracing enabled\n");
+	else
+		etm_release_trace_id(drvdata);
 	return ret;
 }
 
@@ -611,6 +657,8 @@ static void etm_disable_perf(struct coresight_device *csdev)
 	coresight_disclaim_device_unlocked(csdev);
 
 	CS_LOCK(drvdata->base);
+
+	etm_release_trace_id(drvdata);
 }
 
 static void etm_disable_sysfs(struct coresight_device *csdev)
@@ -781,11 +829,6 @@ static void etm_init_arch_data(void *info)
 	CS_LOCK(drvdata->base);
 }
 
-static void etm_init_trace_id(struct etm_drvdata *drvdata)
-{
-	drvdata->traceid = coresight_get_trace_id(drvdata->cpu);
-}
-
 static int __init etm_hp_setup(void)
 {
 	int ret;
@@ -871,7 +914,6 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
 	if (etm_arch_supported(drvdata->arch) == false)
 		return -EINVAL;
 
-	etm_init_trace_id(drvdata);
 	etm_set_default(&drvdata->config);
 
 	pdata = coresight_get_platform_data(dev);
@@ -891,10 +933,12 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
 		return PTR_ERR(drvdata->csdev);
 
 	ret = etm_perf_symlink(drvdata->csdev, true);
-	if (ret) {
-		coresight_unregister(drvdata->csdev);
-		return ret;
-	}
+	if (ret)
+		goto cs_unregister;
+
+	ret = etm_read_alloc_trace_id(drvdata);
+	if (ret < 0)
+		goto cs_unregister;
 
 	etmdrvdata[drvdata->cpu] = drvdata;
 
@@ -907,6 +951,10 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
 	}
 
 	return 0;
+
+cs_unregister:
+	coresight_unregister(drvdata->csdev);
+	return ret;
 }
 
 static void clear_etmdrvdata(void *info)
@@ -922,6 +970,8 @@ static void etm_remove(struct amba_device *adev)
 
 	etm_perf_symlink(drvdata->csdev, false);
 
+	etm_release_trace_id(drvdata);
+
 	/*
 	 * Taking hotplug lock here to avoid racing between etm_remove and
 	 * CPU hotplug call backs.
diff --git a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
index e8c7649f123e..3ee70b174240 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
@@ -86,6 +86,8 @@ static ssize_t reset_store(struct device *dev,
 
 		etm_set_default(config);
 		spin_unlock(&drvdata->spinlock);
+		/* release trace id outside the spinlock as this fn uses it */
+		etm_release_trace_id(drvdata);
 	}
 
 	return size;
@@ -1189,30 +1191,16 @@ static DEVICE_ATTR_RO(cpu);
 static ssize_t traceid_show(struct device *dev,
 			    struct device_attribute *attr, char *buf)
 {
-	unsigned long val;
-	struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = etm_get_trace_id(drvdata);
-
-	return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t traceid_store(struct device *dev,
-			     struct device_attribute *attr,
-			     const char *buf, size_t size)
-{
-	int ret;
-	unsigned long val;
+	int trace_id;
 	struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
 
-	ret = kstrtoul(buf, 16, &val);
-	if (ret)
-		return ret;
+	trace_id = etm_read_alloc_trace_id(drvdata);
+	if (trace_id < 0)
+		return trace_id;
 
-	drvdata->traceid = val & ETM_TRACEID_MASK;
-	return size;
+	return sprintf(buf, "%#x\n", trace_id);
 }
-static DEVICE_ATTR_RW(traceid);
+static DEVICE_ATTR_RO(traceid);
 
 static struct attribute *coresight_etm_attrs[] = {
 	&dev_attr_nr_addr_cmp.attr,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH 06/10] coresight: perf: traceid: Add perf notifiers for trace ID
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

Add notifier calls to the trace ID allocator to signal that perf
events are starting and stopping.

This ensures that Trace IDs associated with CPUs remain the same
throughout the perf session, and are only released when all perf
sessions are complete.
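
For reference, the allocator side of these notifiers (added earlier in the
series) behaves roughly as below - an illustrative sketch with the id_map
locking omitted:

	void coresight_trace_id_perf_start(void)
	{
		perf_cs_etm_session_active++;
	}

	void coresight_trace_id_perf_stop(void)
	{
		if (!--perf_cs_etm_session_active)
			coresight_trace_id_release_all_pending();
	}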

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 drivers/hwtracing/coresight/coresight-etm-perf.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
index c039b6ae206f..008f9dac429d 100644
--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
+++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
@@ -22,6 +22,7 @@
 #include "coresight-etm-perf.h"
 #include "coresight-priv.h"
 #include "coresight-syscfg.h"
+#include "coresight-trace-id.h"
 
 static struct pmu etm_pmu;
 static bool etm_perf_up;
@@ -223,11 +224,21 @@ static void free_event_data(struct work_struct *work)
 		struct list_head **ppath;
 
 		ppath = etm_event_cpu_path_ptr(event_data, cpu);
-		if (!(IS_ERR_OR_NULL(*ppath)))
+		if (!(IS_ERR_OR_NULL(*ppath))) {
 			coresight_release_path(*ppath);
+			/*
+			 * perf may have read a trace id for a cpu, but never actually
+			 * executed code on that cpu - which means the trace id would
+			 * not be released on disable. Re-release here to be sure.
+			 */
+			coresight_trace_id_put_cpu_id(cpu, coresight_get_trace_id_map());
+		}
 		*ppath = NULL;
 	}
 
+	/* mark perf event as done for trace id allocator */
+	coresight_trace_id_perf_stop();
+
 	free_percpu(event_data->path);
 	kfree(event_data);
 }
@@ -314,6 +325,9 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
 		sink = user_sink = coresight_get_sink_by_id(id);
 	}
 
+	/* tell the trace ID allocator that a perf event is starting up */
+	coresight_trace_id_perf_start();
+
 	/* check if user wants a coresight configuration selected */
 	cfg_hash = (u32)((event->attr.config2 & GENMASK_ULL(63, 32)) >> 32);
 	if (cfg_hash) {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH 07/10] perf: cs-etm: Update event to read trace ID from sysfs
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

Read the current trace ID values associated with a CPU
from sysfs.

Previously, perf used the static association algorithm; this is no longer
used as it did not scale and was broken for larger core counts.
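
As an illustrative before/after sketch (names as in the diff below), the
ETMv4 metadata value now comes from the per-CPU sysfs file rather than being
computed:

	/* old: computed from the CPU number */
	data[CS_ETMV4_TRCTRACEIDR] = coresight_get_trace_id(cpu);

	/* new: read back whatever the kernel allocated, via mgmt/trctraceid */
	data[CS_ETMV4_TRCTRACEIDR] = cs_etm_get_ro(cs_etm_pmu, cpu,
					metadata_etmv4_ro[CS_ETMV4_TRCTRACEIDR]);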

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 tools/perf/arch/arm/util/cs-etm.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
index 2e8b2c4365a0..f48087d07bcb 100644
--- a/tools/perf/arch/arm/util/cs-etm.c
+++ b/tools/perf/arch/arm/util/cs-etm.c
@@ -43,11 +43,13 @@ struct cs_etm_recording {
 };
 
 static const char *metadata_etmv3_ro[CS_ETM_PRIV_MAX] = {
+	[CS_ETM_ETMTRACEIDR]	= "traceid",
 	[CS_ETM_ETMCCER]	= "mgmt/etmccer",
 	[CS_ETM_ETMIDR]		= "mgmt/etmidr",
 };
 
 static const char * const metadata_etmv4_ro[] = {
+	[CS_ETMV4_TRCTRACEIDR]		= "mgmt/trctraceid",
 	[CS_ETMV4_TRCIDR0]		= "trcidr/trcidr0",
 	[CS_ETMV4_TRCIDR1]		= "trcidr/trcidr1",
 	[CS_ETMV4_TRCIDR2]		= "trcidr/trcidr2",
@@ -629,8 +631,9 @@ static void cs_etm_save_etmv4_header(__u64 data[], struct auxtrace_record *itr,
 
 	/* Get trace configuration register */
 	data[CS_ETMV4_TRCCONFIGR] = cs_etmv4_get_config(itr);
-	/* Get traceID from the framework */
-	data[CS_ETMV4_TRCTRACEIDR] = coresight_get_trace_id(cpu);
+	/* Get traceID from the sysfs */
+	data[CS_ETMV4_TRCTRACEIDR] = cs_etm_get_ro(cs_etm_pmu, cpu,
+						   metadata_etmv4_ro[CS_ETMV4_TRCTRACEIDR]);
 	/* Get read-only information from sysFS */
 	data[CS_ETMV4_TRCIDR0] = cs_etm_get_ro(cs_etm_pmu, cpu,
 					       metadata_etmv4_ro[CS_ETMV4_TRCIDR0]);
@@ -677,9 +680,10 @@ static void cs_etm_get_metadata(int cpu, u32 *offset,
 		magic = __perf_cs_etmv3_magic;
 		/* Get configuration register */
 		info->priv[*offset + CS_ETM_ETMCR] = cs_etm_get_config(itr);
-		/* Get traceID from the framework */
+		/* Get traceID from sysfs */
 		info->priv[*offset + CS_ETM_ETMTRACEIDR] =
-						coresight_get_trace_id(cpu);
+			cs_etm_get_ro(cs_etm_pmu, cpu,
+				      metadata_etmv3_ro[CS_ETM_ETMTRACEIDR]);
 		/* Get read-only information from sysFS */
 		info->priv[*offset + CS_ETM_ETMCCER] =
 			cs_etm_get_ro(cs_etm_pmu, cpu,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH 08/10] coresight: Remove legacy Trace ID allocation mechanism
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

The static 'cpu * 2 + seed' algorithm was outdated and broken for systems
with high core counts (>46).
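
For illustration: with the 0x10 seed, a CPU numbered 48 would have been
handed 0x10 + 48 * 2 = 0x70, which already falls in the trace ID range
reserved by the CoreSight architecture (0x70-0x7F), and CPU 56 would compute
to 0x80, which no longer fits in the 7-bit trace ID field at all.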

This has been replaced by a dynamic allocation system.

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 include/linux/coresight-pmu.h | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/include/linux/coresight-pmu.h b/include/linux/coresight-pmu.h
index 4ac5c081af93..bb4eb4de3c77 100644
--- a/include/linux/coresight-pmu.h
+++ b/include/linux/coresight-pmu.h
@@ -8,7 +8,6 @@
 #define _LINUX_CORESIGHT_PMU_H
 
 #define CORESIGHT_ETM_PMU_NAME "cs_etm"
-#define CORESIGHT_ETM_PMU_SEED  0x10
 
 /*
  * Below are the definition of bit offsets for perf option, and works as
@@ -32,15 +31,4 @@
 #define ETM4_CFG_BIT_RETSTK	12
 #define ETM4_CFG_BIT_VMID_OPT	15
 
-static inline int coresight_get_trace_id(int cpu)
-{
-	/*
-	 * A trace ID of value 0 is invalid, so let's start at some
-	 * random value that fits in 7 bits and go from there.  Since
-	 * the common convention is to have data trace IDs be I(N) + 1,
-	 * set instruction trace IDs as a function of the CPU number.
-	 */
-	return (CORESIGHT_ETM_PMU_SEED + (cpu * 2));
-}
-
 #endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH 09/10] coresight: etmX.X: stm: Remove unused legacy source trace ID ops
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:49   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:49 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

CoreSight sources provide a callback (.trace_id) in the standard source
ops which returns the ID to the core code. This was used to check that
sources all had a unique Trace ID.

Uniqueness is now guaranteed by the Trace ID allocation system, and the
check code has been removed from the core.

This patch removes the now unneeded and unused .trace_id source op
from the ops structure and its implementations in etm3x, etm4x and stm.

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 drivers/hwtracing/coresight/coresight-etm.h   |  1 -
 .../coresight/coresight-etm3x-core.c          | 37 -------------------
 .../coresight/coresight-etm4x-core.c          |  8 ----
 drivers/hwtracing/coresight/coresight-stm.c   |  8 ----
 include/linux/coresight.h                     |  3 --
 5 files changed, 57 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-etm.h b/drivers/hwtracing/coresight/coresight-etm.h
index 3667428d38b6..9a0d08b092ae 100644
--- a/drivers/hwtracing/coresight/coresight-etm.h
+++ b/drivers/hwtracing/coresight/coresight-etm.h
@@ -283,7 +283,6 @@ static inline unsigned int etm_readl(struct etm_drvdata *drvdata, u32 off)
 }
 
 extern const struct attribute_group *coresight_etm_groups[];
-int etm_get_trace_id(struct etm_drvdata *drvdata);
 void etm_set_default(struct etm_config *config);
 void etm_config_trace_mode(struct etm_config *config);
 struct etm_config *get_etm_config(struct etm_drvdata *drvdata);
diff --git a/drivers/hwtracing/coresight/coresight-etm3x-core.c b/drivers/hwtracing/coresight/coresight-etm3x-core.c
index 98213503bd09..4cfb8ed699df 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x-core.c
@@ -455,42 +455,6 @@ static int etm_cpu_id(struct coresight_device *csdev)
 	return drvdata->cpu;
 }
 
-int etm_get_trace_id(struct etm_drvdata *drvdata)
-{
-	unsigned long flags;
-	int trace_id = -1;
-	struct device *etm_dev;
-
-	if (!drvdata)
-		goto out;
-
-	etm_dev = drvdata->csdev->dev.parent;
-	if (!local_read(&drvdata->mode))
-		return drvdata->traceid;
-
-	pm_runtime_get_sync(etm_dev);
-
-	spin_lock_irqsave(&drvdata->spinlock, flags);
-
-	CS_UNLOCK(drvdata->base);
-	trace_id = (etm_readl(drvdata, ETMTRACEIDR) & ETM_TRACEID_MASK);
-	CS_LOCK(drvdata->base);
-
-	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-	pm_runtime_put(etm_dev);
-
-out:
-	return trace_id;
-
-}
-
-static int etm_trace_id(struct coresight_device *csdev)
-{
-	struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	return etm_get_trace_id(drvdata);
-}
-
 int etm_read_alloc_trace_id(struct etm_drvdata *drvdata)
 {
 	int trace_id;
@@ -719,7 +683,6 @@ static void etm_disable(struct coresight_device *csdev,
 
 static const struct coresight_ops_source etm_source_ops = {
 	.cpu_id		= etm_cpu_id,
-	.trace_id	= etm_trace_id,
 	.enable		= etm_enable,
 	.disable	= etm_disable,
 };
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index aa7ea5ad8b06..565947de702a 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -228,13 +228,6 @@ static int etm4_cpu_id(struct coresight_device *csdev)
 	return drvdata->cpu;
 }
 
-static int etm4_trace_id(struct coresight_device *csdev)
-{
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	return drvdata->trcid;
-}
-
 int etm4_read_alloc_trace_id(struct etmv4_drvdata *drvdata)
 {
 	int trace_id;
@@ -998,7 +991,6 @@ static void etm4_disable(struct coresight_device *csdev,
 
 static const struct coresight_ops_source etm4_source_ops = {
 	.cpu_id		= etm4_cpu_id,
-	.trace_id	= etm4_trace_id,
 	.enable		= etm4_enable,
 	.disable	= etm4_disable,
 };
diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
index 6c2c29ae99fe..6aa6eb9e4090 100644
--- a/drivers/hwtracing/coresight/coresight-stm.c
+++ b/drivers/hwtracing/coresight/coresight-stm.c
@@ -281,15 +281,7 @@ static void stm_disable(struct coresight_device *csdev,
 	}
 }
 
-static int stm_trace_id(struct coresight_device *csdev)
-{
-	struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	return drvdata->traceid;
-}
-
 static const struct coresight_ops_source stm_source_ops = {
-	.trace_id	= stm_trace_id,
 	.enable		= stm_enable,
 	.disable	= stm_disable,
 };
diff --git a/include/linux/coresight.h b/include/linux/coresight.h
index 9f445f09fcfe..247147c11231 100644
--- a/include/linux/coresight.h
+++ b/include/linux/coresight.h
@@ -314,14 +314,11 @@ struct coresight_ops_link {
  * Operations available for sources.
  * @cpu_id:	returns the value of the CPU number this component
  *		is associated to.
- * @trace_id:	returns the value of the component's trace ID as known
- *		to the HW.
  * @enable:	enables tracing for a source.
  * @disable:	disables tracing for a source.
  */
 struct coresight_ops_source {
 	int (*cpu_id)(struct coresight_device *csdev);
-	int (*trace_id)(struct coresight_device *csdev);
 	int (*enable)(struct coresight_device *csdev,
 		      struct perf_event *event,  u32 mode);
 	void (*disable)(struct coresight_device *csdev,
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH 10/10] coresight: trace-id: Add debug & test macros to trace id allocation
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-08 20:50   ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:50 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

Add a number of pr_debug macros to allow debugging and testing of
the trace ID allocation system.
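
To use them, uncomment the TRACE_ID_DEBUG define at the top of
coresight-trace-id.c (it is left disabled in the diff below). With
CONFIG_DYNAMIC_DEBUG the resulting pr_debug output can then be enabled at
runtime in the usual way, e.g. something like
'echo "file coresight-trace-id.c +p" > /sys/kernel/debug/dynamic_debug/control'
(illustrative - the exact control path depends on where debugfs is mounted).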

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 .../hwtracing/coresight/coresight-trace-id.c  | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
index ce6c7d7b55d6..8dcb698b8899 100644
--- a/drivers/hwtracing/coresight/coresight-trace-id.c
+++ b/drivers/hwtracing/coresight/coresight-trace-id.c
@@ -71,6 +71,27 @@ static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
 	return id;
 }
 
+/* #define TRACE_ID_DEBUG 1 */
+#ifdef TRACE_ID_DEBUG
+static void coresight_trace_id_dump_table(struct coresight_trace_id_map *id_map,
+					  const char *func_name)
+{
+	/* currently 2 u64s are sufficient to hold all the ids */
+	pr_debug("%s id_map::\n", func_name);
+	pr_debug("Avial= 0x%016lx%016lx\n", id_map->avail_ids[1], id_map->avail_ids[0]);
+	pr_debug("Pend = 0x%016lx%016lx\n", id_map->pend_rel_ids[1], id_map->pend_rel_ids[0]);
+}
+#define DUMP_ID_MAP(map)   coresight_trace_id_dump_table(map, __func__)
+#define DUMP_ID_CPU(cpu, id) pr_debug("%s called;  cpu=%d, id=%d\n", __func__, cpu, id)
+#define DUMP_ID(id)   pr_debug("%s called; id=%d\n", __func__, id)
+#define PERF_SESSION(n) pr_debug("%s perf count %d\n", __func__, n)
+#else
+#define DUMP_ID_MAP(map)
+#define DUMP_ID(id)
+#define DUMP_ID_CPU(cpu, id)
+#define PERF_SESSION(n)
+#endif
+
 /* release all pending IDs for all current maps & clear CPU associations */
 static void coresight_trace_id_release_all_pending(void)
 {
@@ -82,6 +103,7 @@ static void coresight_trace_id_release_all_pending(void)
 			clear_bit(bit, id_map->avail_ids);
 			clear_bit(bit, id_map->pend_rel_ids);
 		}
+		DUMP_ID_MAP(id_map);
 	}
 
 	for_each_possible_cpu(cpu) {
@@ -112,6 +134,8 @@ int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map
 
 get_cpu_id_out:
 	spin_unlock_irqrestore(&id_map_lock, flags);
+	DUMP_ID_CPU(cpu, id);
+	DUMP_ID_MAP(id_map);
 	return id;
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
@@ -138,6 +162,8 @@ void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_ma
 
  put_cpu_id_out:
 	spin_unlock_irqrestore(&id_map_lock, flags);
+	DUMP_ID_CPU(cpu, id);
+	DUMP_ID_MAP(id_map);
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
 
@@ -152,6 +178,8 @@ int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
 		coresight_trace_id_set_inuse(id, id_map);
 	spin_unlock_irqrestore(&id_map_lock, flags);
 
+	DUMP_ID(id);
+	DUMP_ID_MAP(id_map);
 	return id;
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
@@ -163,6 +191,9 @@ void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int
 	spin_lock_irqsave(&id_map_lock, flags);
 	coresight_trace_id_clear_inuse(id, id_map);
 	spin_unlock_irqrestore(&id_map_lock, flags);
+
+	DUMP_ID(id);
+	DUMP_ID_MAP(id_map);
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
 
@@ -172,6 +203,7 @@ void coresight_trace_id_perf_start(void)
 
 	spin_lock_irqsave(&id_map_lock, flags);
 	perf_cs_etm_session_active++;
+	PERF_SESSION(perf_cs_etm_session_active);
 	spin_unlock_irqrestore(&id_map_lock, flags);
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
@@ -182,6 +214,7 @@ void coresight_trace_id_perf_stop(void)
 
 	spin_lock_irqsave(&id_map_lock, flags);
 	perf_cs_etm_session_active--;
+	PERF_SESSION(perf_cs_etm_session_active);
 	if (!perf_cs_etm_session_active)
 		coresight_trace_id_release_all_pending();
 	spin_unlock_irqrestore(&id_map_lock, flags);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH 10/10] coresight: trace-id: Add debug & test macros to trace id allocation
@ 2022-03-08 20:50   ` Mike Leach
  0 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-08 20:50 UTC (permalink / raw)
  To: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	Mike Leach

Adds in a number of pr_debug macros to allow the debugging and test of
the trace ID allocation system.

Signed-off-by: Mike Leach <mike.leach@linaro.org>
---
 .../hwtracing/coresight/coresight-trace-id.c  | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
index ce6c7d7b55d6..8dcb698b8899 100644
--- a/drivers/hwtracing/coresight/coresight-trace-id.c
+++ b/drivers/hwtracing/coresight/coresight-trace-id.c
@@ -71,6 +71,27 @@ static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
 	return id;
 }
 
+/* #define TRACE_ID_DEBUG 1 */
+#ifdef TRACE_ID_DEBUG
+static void coresight_trace_id_dump_table(struct coresight_trace_id_map *id_map,
+					  const char *func_name)
+{
+	/* currently 2 u64s are sufficient to hold all the ids */
+	pr_debug("%s id_map::\n", func_name);
+	pr_debug("Avial= 0x%016lx%016lx\n", id_map->avail_ids[1], id_map->avail_ids[0]);
+	pr_debug("Pend = 0x%016lx%016lx\n", id_map->pend_rel_ids[1], id_map->pend_rel_ids[0]);
+}
+#define DUMP_ID_MAP(map)   coresight_trace_id_dump_table(map, __func__)
+#define DUMP_ID_CPU(cpu, id) pr_debug("%s called;  cpu=%d, id=%d\n", __func__, cpu, id)
+#define DUMP_ID(id)   pr_debug("%s called; id=%d\n", __func__, id)
+#define PERF_SESSION(n) pr_debug("%s perf count %d\n", __func__, n)
+#else
+#define DUMP_ID_MAP(map)
+#define DUMP_ID(id)
+#define DUMP_ID_CPU(cpu, id)
+#define PERF_SESSION(n)
+#endif
+
 /* release all pending IDs for all current maps & clear CPU associations */
 static void coresight_trace_id_release_all_pending(void)
 {
@@ -82,6 +103,7 @@ static void coresight_trace_id_release_all_pending(void)
 			clear_bit(bit, id_map->avail_ids);
 			clear_bit(bit, id_map->pend_rel_ids);
 		}
+		DUMP_ID_MAP(id_map);
 	}
 
 	for_each_possible_cpu(cpu) {
@@ -112,6 +134,8 @@ int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map
 
 get_cpu_id_out:
 	spin_unlock_irqrestore(&id_map_lock, flags);
+	DUMP_ID_CPU(cpu, id);
+	DUMP_ID_MAP(id_map);
 	return id;
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
@@ -138,6 +162,8 @@ void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_ma
 
  put_cpu_id_out:
 	spin_unlock_irqrestore(&id_map_lock, flags);
+	DUMP_ID_CPU(cpu, id);
+	DUMP_ID_MAP(id_map);
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
 
@@ -152,6 +178,8 @@ int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
 		coresight_trace_id_set_inuse(id, id_map);
 	spin_unlock_irqrestore(&id_map_lock, flags);
 
+	DUMP_ID(id);
+	DUMP_ID_MAP(id_map);
 	return id;
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
@@ -163,6 +191,9 @@ void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int
 	spin_lock_irqsave(&id_map_lock, flags);
 	coresight_trace_id_clear_inuse(id, id_map);
 	spin_unlock_irqrestore(&id_map_lock, flags);
+
+	DUMP_ID(id);
+	DUMP_ID_MAP(id_map);
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
 
@@ -172,6 +203,7 @@ void coresight_trace_id_perf_start(void)
 
 	spin_lock_irqsave(&id_map_lock, flags);
 	perf_cs_etm_session_active++;
+	PERF_SESSION(perf_cs_etm_session_active);
 	spin_unlock_irqrestore(&id_map_lock, flags);
 }
 EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
@@ -182,6 +214,7 @@ void coresight_trace_id_perf_stop(void)
 
 	spin_lock_irqsave(&id_map_lock, flags);
 	perf_cs_etm_session_active--;
+	PERF_SESSION(perf_cs_etm_session_active);
 	if (!perf_cs_etm_session_active)
 		coresight_trace_id_release_all_pending();
 	spin_unlock_irqrestore(&id_map_lock, flags);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-08 20:49 ` Mike Leach
@ 2022-03-22 10:43   ` Suzuki Kuruppassery Poulose
  -1 siblings, 0 replies; 70+ messages in thread
From: Suzuki Kuruppassery Poulose @ 2022-03-22 10:43 UTC (permalink / raw)
  To: Mike Leach, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan,
	James Clark

+ Cc: James Clark

Hi Mike,

On 08/03/2022 20:49, Mike Leach wrote:
> The current method for allocating trace source ID values to sources is
> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
> The STM is allocated ID 0x1.
> 
> This fixed algorithm is used in both the CoreSight driver code, and by
> perf when writing the trace metadata in the AUXTRACE_INFO record.
> 
> The method needs replacing as currently:-
> 1. It is inefficient in using available IDs.
> 2. Does not scale to larger systems with many cores and the algorithm
> has no limits so will generate invalid trace IDs for cpu number > 44.

Thanks for addressing this issue.

> 
> Additionally requirements to allocate additional system IDs on some
> systems have been seen.
> 
> This patch set  introduces an API that allows the allocation of trace IDs
> in a dynamic manner.
> 
> Architecturally reserved IDs are never allocated, and the system is
> limited to allocating only valid IDs.
> 
> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
> the new API.
> 
> perf handling is changed so that the ID associated with the CPU is read
> from sysfs. The ID allocator is notified when perf events start and stop
> so CPU based IDs are kept constant throughout any perf session.
> 
> For the ETMx.x devices IDs are allocated on certain events
> a) When using sysfs, an ID will be allocated on hardware enable, and freed
> when the sysfs reset is written.
> b) When using perf, ID is allocated on hardware enable, and freed on
> hardware disable.
> 
> For both cases the ID is allocated when sysfs is read to get the current
> trace ID. This ensures that consistent decode metadata can be extracted
> from the system where this read occurs before device enable.


> 
> Note: This patchset breaks backward compatibility for perf record.
> Because the method for generating the AUXTRACE_INFO meta data has
> changed, using an older perf record will result in metadata that
> does not match the trace IDs used in the recorded trace data.
> This mismatch will cause subsequent decode to fail. Older versions of
> perf will still be able to decode data generated by the updated system.

I have some concerns over this and the future plans for dynamic
allocation per sink, i.e., we are breaking/modifying perf now to
accommodate the dynamic nature of the trace-id of a given CPU/ETM.
The proposed approach of exposing this via sysfs may (I am not sure if
this would be the case) break for the trace-id-per-sink change, as a
sink could assign a different trace-id for a CPU depending on the
session.

So, instead, if we make the trace-id available in a perf record (say, a
new record format, PERF_RECORD_CS_ETM_TRACEID?), we can rely on the
new packet for the trace-id, irrespective of how it is allocated, and
remove the locking/linking of the trace-id to sysfs. This is not
something that exists today. (Ideally it would have been nice to have
some additional fields in RECORD_AUXINFO, but we don't. Instead of
breaking/extending that, we could add a new RECORD.)

I believe the packet may need to be generated only once for a session
and that will also allow the flexibility of moving trace-id allocation
around (to a sink in the future).

Thoughts ?

Kind regards
Suzuki


> 
> 
> Applies to coresight/next [b54f53bc11a5]
> Tested on DB410c
> 
> Mike Leach (10):
>    coresight: trace-id: Add API to dynamically assign trace ID values
>    coresight: trace-id: Set up source trace ID map for system
>    coresight: stm: Update STM driver to use Trace ID api
>    coresight: etm4x: Use trace ID API to dynamically allocate trace ID
>    coresight: etm3x: Use trace ID API to allocate IDs
>    coresight: perf: traceid: Add perf notifiers for trace ID
>    perf: cs-etm: Update event to read trace ID from sysfs
>    coresight: Remove legacy Trace ID allocation mechanism
>    coresight: etmX.X: stm: Remove unused legacy source trace ID ops
>    coresight: trace-id: Add debug & test macros to trace id allocation
> 
>   drivers/hwtracing/coresight/Makefile          |   2 +-
>   drivers/hwtracing/coresight/coresight-core.c  |  64 ++---
>   .../hwtracing/coresight/coresight-etm-perf.c  |  16 +-
>   drivers/hwtracing/coresight/coresight-etm.h   |   3 +-
>   .../coresight/coresight-etm3x-core.c          |  93 ++++---
>   .../coresight/coresight-etm3x-sysfs.c         |  28 +-
>   .../coresight/coresight-etm4x-core.c          |  63 ++++-
>   .../coresight/coresight-etm4x-sysfs.c         |  32 ++-
>   drivers/hwtracing/coresight/coresight-etm4x.h |   3 +
>   drivers/hwtracing/coresight/coresight-priv.h  |   1 +
>   drivers/hwtracing/coresight/coresight-stm.c   |  49 +---
>   .../hwtracing/coresight/coresight-trace-id.c  | 255 ++++++++++++++++++
>   .../hwtracing/coresight/coresight-trace-id.h  |  69 +++++
>   include/linux/coresight-pmu.h                 |  12 -
>   include/linux/coresight.h                     |   3 -
>   tools/perf/arch/arm/util/cs-etm.c             |  12 +-
>   16 files changed, 530 insertions(+), 175 deletions(-)
>   create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
>   create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> 


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-22 10:43   ` Suzuki Kuruppassery Poulose
@ 2022-03-22 11:38     ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-22 11:38 UTC (permalink / raw)
  To: Suzuki Kuruppassery Poulose
  Cc: coresight, linux-arm-kernel, linux-kernel, mathieu.poirier,
	peterz, mingo, acme, linux-perf-users, leo.yan, James Clark

Hi Suzuki,

On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
<suzuki.poulose@arm.com> wrote:
>
> + Cc: James Clark
>
> Hi Mike,
>
> On 08/03/2022 20:49, Mike Leach wrote:
> > The current method for allocating trace source ID values to sources is
> > to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
> > The STM is allocated ID 0x1.
> >
> > This fixed algorithm is used in both the CoreSight driver code, and by
> > perf when writing the trace metadata in the AUXTRACE_INFO record.
> >
> > The method needs replacing as currently:-
> > 1. It is inefficient in using available IDs.
> > 2. Does not scale to larger systems with many cores and the algorithm
> > has no limits so will generate invalid trace IDs for cpu number > 44.
>
> Thanks for addressing this issue.
>
> >
> > Additionally requirements to allocate additional system IDs on some
> > systems have been seen.
> >
> > This patch set  introduces an API that allows the allocation of trace IDs
> > in a dynamic manner.
> >
> > Architecturally reserved IDs are never allocated, and the system is
> > limited to allocating only valid IDs.
> >
> > Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
> > the new API.
> >
> > perf handling is changed so that the ID associated with the CPU is read
> > from sysfs. The ID allocator is notified when perf events start and stop
> > so CPU based IDs are kept constant throughout any perf session.
> >
> > For the ETMx.x devices IDs are allocated on certain events
> > a) When using sysfs, an ID will be allocated on hardware enable, and freed
> > when the sysfs reset is written.
> > b) When using perf, ID is allocated on hardware enable, and freed on
> > hardware disable.
> >
> > For both cases the ID is allocated when sysfs is read to get the current
> > trace ID. This ensures that consistent decode metadata can be extracted
> > from the system where this read occurs before device enable.
>
>
> >
> > Note: This patchset breaks backward compatibility for perf record.
> > Because the method for generating the AUXTRACE_INFO meta data has
> > changed, using an older perf record will result in metadata that
> > does not match the trace IDs used in the recorded trace data.
> > This mismatch will cause subsequent decode to fail. Older versions of
> > perf will still be able to decode data generated by the updated system.
>
> I have some concerns over this and the future plans for the dynamic
> allocation per sink. i.e., we are breaking/modifying the perf now to
> accommodate the dynamic nature of the trace id of a given CPU/ETM.

I don't believe we have a choice for this - we cannot retain what is
an essentially broken allocation mechanism.

> The proposed approach of exposing this via sysfs may (am not sure if
> this would be the case) break for the trace-id per sink change, as a
> sink could assign different trace-id for a CPU depending.
>

If a path exists between a CPU and a sink, the current framework, as
far as I can tell, would not allow for a new path to be set up between
the CPU and another sink.

However, if we allow multiple paths per CPU, the implementation
supports both allocate-on-read and allocate-on-enable. Both API
functions take a trace ID allocation structure as input. At present
this is global, but if we need to introduce per-sink allocation, then
the mechanisms for sink / ID table management will have to ensure that
the correct table is provided for the sink at the end of the path in
each case. Thus the API still works as long as you get the sink ID
table management correct. That is why it was designed to take the
Trace ID tables as input to all the functions - it is independent of
any per-sink management that might come later.
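
To illustrate (a sketch under assumptions, not code from this series):
the allocator calls already take the map as a parameter, so a future
per-sink table only changes which map the caller passes in. The
sink_id_map() helper below is hypothetical.

#include <linux/coresight.h>
#include "coresight-trace-id.h"

/* sketch: today callers pass the single global map; with per-sink
 * allocation they would pass the map owned by the chosen sink.
 * sink_id_map() is a hypothetical lookup, not part of this series.
 */
static int example_alloc_id(int cpu, struct coresight_device *sink)
{
	struct coresight_trace_id_map *id_map = sink_id_map(sink);

	return coresight_trace_id_get_cpu_id(cpu, id_map);
}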

My view is that any multi-sink system is likely to be multi-socket as
well - where different trace infrastructures exist per socket but need
to be handled by a single software infrastructure.

> So, instead if we make the trace-id available in the perf (say, an new
> record format, PERF_RECORD_CS_ETM_TRACEID ?) record, we can rely on the
> new packet for the trace-id, irrespective of how that is allocated and
> remove the locking/linking of the trace-id with that of the sysfs.

The issue here is: how do we transmit the information from the driver
to the perf executable?
Even with a new record that problem still exists. The current perf
solves this problem by using the same static algorithm that the driver
uses - so no actual communication is necessary. A similar method is
used to synthesize the value of the ETM control register. The command
line options are interpreted by perf, then the same data is passed to
the driver from perf through the event structures and reinterpreted -
hopefully in the same way. All the other values in the perf records
are read directly from sysfs.
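
(For reference, the fixed scheme both sides currently compute
independently is just the formula from the cover letter - illustrative
helper below, not an actual kernel or perf symbol:)

/* illustrative only: the legacy fixed allocation scheme */
static inline int legacy_cpu_trace_id(int cpu)
{
	return 0x10 + cpu * 2;	/* STM is fixed at 0x1 */
}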


>This
> is not something that exists today. (Ideally it would have been nice to
> have some additional fields in RECORD_AUXINFO, but we don't. Instead of
> breaking/extending that, we could add a new RECORD).
>

The trace ID is currently part of RECORD_AUXTRACE_INFO, is it not? And
we have extended this in the past for the additional requirements for
ETE - i.e. an additional ID register, read from sysfs, along with a
version number for the record.

Regards

Mike

> I believe the packet may need to be generated only once for a session
> and that will also allow the flexibility of moving trace-id allocation
> around (to a sink in the future).
>
> Thoughts ?
>
> Kind regards
> Suzuki
>
>
> >
> >
> > Applies to coresight/next [b54f53bc11a5]
> > Tested on DB410c
> >
> > Mike Leach (10):
> >    coresight: trace-id: Add API to dynamically assign trace ID values
> >    coresight: trace-id: Set up source trace ID map for system
> >    coresight: stm: Update STM driver to use Trace ID api
> >    coresight: etm4x: Use trace ID API to dynamically allocate trace ID
> >    coresight: etm3x: Use trace ID API to allocate IDs
> >    coresight: perf: traceid: Add perf notifiers for trace ID
> >    perf: cs-etm: Update event to read trace ID from sysfs
> >    coresight: Remove legacy Trace ID allocation mechanism
> >    coresight: etmX.X: stm: Remove unused legacy source trace ID ops
> >    coresight: trace-id: Add debug & test macros to trace id allocation
> >
> >   drivers/hwtracing/coresight/Makefile          |   2 +-
> >   drivers/hwtracing/coresight/coresight-core.c  |  64 ++---
> >   .../hwtracing/coresight/coresight-etm-perf.c  |  16 +-
> >   drivers/hwtracing/coresight/coresight-etm.h   |   3 +-
> >   .../coresight/coresight-etm3x-core.c          |  93 ++++---
> >   .../coresight/coresight-etm3x-sysfs.c         |  28 +-
> >   .../coresight/coresight-etm4x-core.c          |  63 ++++-
> >   .../coresight/coresight-etm4x-sysfs.c         |  32 ++-
> >   drivers/hwtracing/coresight/coresight-etm4x.h |   3 +
> >   drivers/hwtracing/coresight/coresight-priv.h  |   1 +
> >   drivers/hwtracing/coresight/coresight-stm.c   |  49 +---
> >   .../hwtracing/coresight/coresight-trace-id.c  | 255 ++++++++++++++++++
> >   .../hwtracing/coresight/coresight-trace-id.h  |  69 +++++
> >   include/linux/coresight-pmu.h                 |  12 -
> >   include/linux/coresight.h                     |   3 -
> >   tools/perf/arch/arm/util/cs-etm.c             |  12 +-
> >   16 files changed, 530 insertions(+), 175 deletions(-)
> >   create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
> >   create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> >
>


-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-22 11:38     ` Mike Leach
@ 2022-03-22 12:35       ` Suzuki Kuruppassery Poulose
  -1 siblings, 0 replies; 70+ messages in thread
From: Suzuki Kuruppassery Poulose @ 2022-03-22 12:35 UTC (permalink / raw)
  To: Mike Leach
  Cc: coresight, linux-arm-kernel, linux-kernel, mathieu.poirier,
	peterz, mingo, acme, linux-perf-users, leo.yan, James Clark

On 22/03/2022 11:38, Mike Leach wrote:
> HI Suzuki,
> 
> On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
> <suzuki.poulose@arm.com> wrote:
>>
>> + Cc: James Clark
>>
>> Hi Mike,
>>
>> On 08/03/2022 20:49, Mike Leach wrote:
>>> The current method for allocating trace source ID values to sources is
>>> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
>>> The STM is allocated ID 0x1.
>>>
>>> This fixed algorithm is used in both the CoreSight driver code, and by
>>> perf when writing the trace metadata in the AUXTRACE_INFO record.
>>>
>>> The method needs replacing as currently:-
>>> 1. It is inefficient in using available IDs.
>>> 2. Does not scale to larger systems with many cores and the algorithm
>>> has no limits so will generate invalid trace IDs for cpu number > 44.
>>
>> Thanks for addressing this issue.
>>
>>>
>>> Additionally requirements to allocate additional system IDs on some
>>> systems have been seen.
>>>
>>> This patch set  introduces an API that allows the allocation of trace IDs
>>> in a dynamic manner.
>>>
>>> Architecturally reserved IDs are never allocated, and the system is
>>> limited to allocating only valid IDs.
>>>
>>> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
>>> the new API.
>>>
>>> perf handling is changed so that the ID associated with the CPU is read
>>> from sysfs. The ID allocator is notified when perf events start and stop
>>> so CPU based IDs are kept constant throughout any perf session.
>>>
>>> For the ETMx.x devices IDs are allocated on certain events
>>> a) When using sysfs, an ID will be allocated on hardware enable, and freed
>>> when the sysfs reset is written.
>>> b) When using perf, ID is allocated on hardware enable, and freed on
>>> hardware disable.
>>>
>>> For both cases the ID is allocated when sysfs is read to get the current
>>> trace ID. This ensures that consistent decode metadata can be extracted
>>> from the system where this read occurs before device enable.
>>
>>
>>>
>>> Note: This patchset breaks backward compatibility for perf record.
>>> Because the method for generating the AUXTRACE_INFO meta data has
>>> changed, using an older perf record will result in metadata that
>>> does not match the trace IDs used in the recorded trace data.
>>> This mismatch will cause subsequent decode to fail. Older versions of
>>> perf will still be able to decode data generated by the updated system.
>>
>> I have some concerns over this and the future plans for the dynamic
>> allocation per sink. i.e., we are breaking/modifying the perf now to
>> accommodate the dynamic nature of the trace id of a given CPU/ETM.
> 
> I don't beleive we have a choice for this - we cannot retain what is
> an essentially broken allocation mechanism.
> 

I completely agree, and I am happy with the current step-by-step
approach of moving to a dynamic allocation scheme. Apologies, this
wasn't conveyed appropriately.

>> The proposed approach of exposing this via sysfs may (am not sure if
>> this would be the case) break for the trace-id per sink change, as a
>> sink could assign different trace-id for a CPU depending.
>>
> 
> If a path exists between a CPU and a sink - the current framework as
> far as I can tell would not allow for a new path to be set up between
> the cpu and another sink.

e.g., if we have concurrent perf sessions, in the future with
sink-based allocation:

perf record -e cs_etm/@sink1/... payload1
perf record -e cs_etm/@sink2/... payload2
perf record -e cs_etm// ...      payload3

The trace-id allocated for the first session for CPU0 *could* be
different from that of the second or the third. And it may be tricky
to guarantee that the trace-ids are unique across the sinks for a
given CPU.

Please note that the different perf sessions may be executing on
different CPUs at the same time, as long as they go to different sinks.
So reading sysfs could only give out a single trace-id, which may or
may not be the correct one for a given "perf".

> 
> However, if we allow multiple paths per CPU, the implementation does
> both allocate on read and allocate on enable. Both API functions take
> a input of a trace ID allocation structure. At present this is global,
> but if we need to introduce per sink allocation, then the mechanisms
> for sink / ID table management will have to ensure that the correct
> table is provided for the sink at the end of the path in each case.
> Thus the API still works as long as you get the sink ID table
> management correct. That is why it was designed to take the TraceID
> tables as input to all the functions - it is independent of any per
> sink management that might come later.
> 
> My view is that any multi-sink system is likely to be multi-socket as
> well - where different trace infrastructures exist per socket but need
> to be handled by a single software infrastructure.
> 
>> So, instead if we make the trace-id available in the perf (say, an new
>> record format, PERF_RECORD_CS_ETM_TRACEID ?) record, we can rely on the
>> new packet for the trace-id, irrespective of how that is allocated and
>> remove the locking/linking of the trace-id with that of the sysfs.
> 
> The issue here is how to we transmit the information from the driver
> to the perf executable?

Yes, exactly.

> Even with a new record that problem still exists. The current perf
> solves this problem by using the same static algorithm that the driver
> uses - so no actual communication is necessary. A similar method is
> used to synthesize the value of the etm control register. The command
> line options are interpreted by perf, then the same data is passed to
> the driver from perf through the event structures and reinterpreted -
> hopefully in the same way. All the other values in the perf records
> are read directly from sysfs.

Yes, correct. Now, the trace-id is something that could change per
session, and with the move to sink-based allocation this could break.
So,
> 
> 
>> This
>> is not something that exists today. (Ideally it would have been nice to
>> have some additional fields in RECORD_AUXINFO, but we don't. Instead of
>> breaking/extending that, we could add a new RECORD).
>>
> 
> The trace ID is currently part of RECORD_AUXTRACE_INFO is it not? And
> we have extended this in the past for the additional requirements for
> ETE - i.e. an additional ID register - read from sysfs, along with a
> version number for the record.

Sorry, I meant the RECORD_AUX (which perf emits for each session
of the ETM, with the offset/size and flags).

There are:

RECORD_AUXINFO -> created statically by perf.
RECORD_AUX     -> emitted for each "run" of ETM, offset, size, flags
RECORD_AUXTRACE -> actual hw trace

I see that there is already something that we could use:


  /*
   * Data written to the AUX area by hardware due to aux_output, may need
   * to be matched to the event by an architecture-specific hardware ID.
   * This records the hardware ID, but requires sample_id to provide the
   * event ID. e.g. Intel PT uses this record to disambiguate PEBS-via-PT
   * records from multiple events.
   *
   * struct {
   *     struct perf_event_header        header;
   *     u64                             hw_id;
   *     struct sample_id                sample_id;
   * };
   */
  PERF_RECORD_AUX_OUTPUT_HW_ID           = 21,

My suggestion is to emit a record, say:

PERF_RECORD_AUX_OUTPUT_HW_ID, for each CPU/ETM in a perf session, and
have perf report construct the Trace ID map for each ETM at decode time
from the PERF_RECORD_AUX_OUTPUT_HW_ID records. That way it is
future-proof for the "perf" userspace to find the trace-id for a given
ETM, rather than reading sysfs, which could be problematic.
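
As a rough sketch of the decode side (the handler name and the idea
that the hw_id payload carries the Trace ID in its low bits are
assumptions for illustration - the actual encoding would need to be
defined):

/* sketch only: build a CPU -> Trace ID map at decode time from
 * PERF_RECORD_AUX_OUTPUT_HW_ID records.
 */
static u8 cpu_trace_id[MAX_NR_CPUS];

static void cs_etm__set_trace_id(int cpu, u64 hw_id)
{
	/* assumption: low 7 bits hold the CoreSight Trace ID */
	cpu_trace_id[cpu] = hw_id & 0x7f;
}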

Suzuki


> 
> Regards
> 
> Mike
> 
>> I believe the packet may need to be generated only once for a session
>> and that will also allow the flexibility of moving trace-id allocation
>> around (to a sink in the future).
>>
>> Thoughts ?
>>
>> Kind regards
>> Suzuki
>>
>>
>>>
>>>
>>> Applies to coresight/next [b54f53bc11a5]
>>> Tested on DB410c
>>>
>>> Mike Leach (10):
>>>     coresight: trace-id: Add API to dynamically assign trace ID values
>>>     coresight: trace-id: Set up source trace ID map for system
>>>     coresight: stm: Update STM driver to use Trace ID api
>>>     coresight: etm4x: Use trace ID API to dynamically allocate trace ID
>>>     coresight: etm3x: Use trace ID API to allocate IDs
>>>     coresight: perf: traceid: Add perf notifiers for trace ID
>>>     perf: cs-etm: Update event to read trace ID from sysfs
>>>     coresight: Remove legacy Trace ID allocation mechanism
>>>     coresight: etmX.X: stm: Remove unused legacy source trace ID ops
>>>     coresight: trace-id: Add debug & test macros to trace id allocation
>>>
>>>    drivers/hwtracing/coresight/Makefile          |   2 +-
>>>    drivers/hwtracing/coresight/coresight-core.c  |  64 ++---
>>>    .../hwtracing/coresight/coresight-etm-perf.c  |  16 +-
>>>    drivers/hwtracing/coresight/coresight-etm.h   |   3 +-
>>>    .../coresight/coresight-etm3x-core.c          |  93 ++++---
>>>    .../coresight/coresight-etm3x-sysfs.c         |  28 +-
>>>    .../coresight/coresight-etm4x-core.c          |  63 ++++-
>>>    .../coresight/coresight-etm4x-sysfs.c         |  32 ++-
>>>    drivers/hwtracing/coresight/coresight-etm4x.h |   3 +
>>>    drivers/hwtracing/coresight/coresight-priv.h  |   1 +
>>>    drivers/hwtracing/coresight/coresight-stm.c   |  49 +---
>>>    .../hwtracing/coresight/coresight-trace-id.c  | 255 ++++++++++++++++++
>>>    .../hwtracing/coresight/coresight-trace-id.h  |  69 +++++
>>>    include/linux/coresight-pmu.h                 |  12 -
>>>    include/linux/coresight.h                     |   3 -
>>>    tools/perf/arch/arm/util/cs-etm.c             |  12 +-
>>>    16 files changed, 530 insertions(+), 175 deletions(-)
>>>    create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
>>>    create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
>>>
>>
> 
> 


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-22 12:35       ` Suzuki Kuruppassery Poulose
@ 2022-03-22 14:27         ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-22 14:27 UTC (permalink / raw)
  To: Suzuki Kuruppassery Poulose
  Cc: coresight, linux-arm-kernel, linux-kernel, mathieu.poirier,
	peterz, mingo, acme, linux-perf-users, leo.yan, James Clark

Hi Suzuki

On Tue, 22 Mar 2022 at 12:35, Suzuki Kuruppassery Poulose
<suzuki.poulose@arm.com> wrote:
>
> On 22/03/2022 11:38, Mike Leach wrote:
> > HI Suzuki,
> >
> > On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
> > <suzuki.poulose@arm.com> wrote:
> >>
> >> + Cc: James Clark
> >>
> >> Hi Mike,
> >>
> >> On 08/03/2022 20:49, Mike Leach wrote:
> >>> The current method for allocating trace source ID values to sources is
> >>> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
> >>> The STM is allocated ID 0x1.
> >>>
> >>> This fixed algorithm is used in both the CoreSight driver code, and by
> >>> perf when writing the trace metadata in the AUXTRACE_INFO record.
> >>>
> >>> The method needs replacing as currently:-
> >>> 1. It is inefficient in using available IDs.
> >>> 2. Does not scale to larger systems with many cores and the algorithm
> >>> has no limits so will generate invalid trace IDs for cpu number > 44.
> >>
> >> Thanks for addressing this issue.
> >>
> >>>
> >>> Additionally requirements to allocate additional system IDs on some
> >>> systems have been seen.
> >>>
> >>> This patch set  introduces an API that allows the allocation of trace IDs
> >>> in a dynamic manner.
> >>>
> >>> Architecturally reserved IDs are never allocated, and the system is
> >>> limited to allocating only valid IDs.
> >>>
> >>> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
> >>> the new API.
> >>>
> >>> perf handling is changed so that the ID associated with the CPU is read
> >>> from sysfs. The ID allocator is notified when perf events start and stop
> >>> so CPU based IDs are kept constant throughout any perf session.
> >>>
> >>> For the ETMx.x devices IDs are allocated on certain events
> >>> a) When using sysfs, an ID will be allocated on hardware enable, and freed
> >>> when the sysfs reset is written.
> >>> b) When using perf, ID is allocated on hardware enable, and freed on
> >>> hardware disable.
> >>>
> >>> For both cases the ID is allocated when sysfs is read to get the current
> >>> trace ID. This ensures that consistent decode metadata can be extracted
> >>> from the system where this read occurs before device enable.
> >>
> >>
> >>>
> >>> Note: This patchset breaks backward compatibility for perf record.
> >>> Because the method for generating the AUXTRACE_INFO meta data has
> >>> changed, using an older perf record will result in metadata that
> >>> does not match the trace IDs used in the recorded trace data.
> >>> This mismatch will cause subsequent decode to fail. Older versions of
> >>> perf will still be able to decode data generated by the updated system.
> >>
> >> I have some concerns over this and the future plans for the dynamic
> >> allocation per sink. i.e., we are breaking/modifying the perf now to
> >> accommodate the dynamic nature of the trace id of a given CPU/ETM.
> >
> > I don't beleive we have a choice for this - we cannot retain what is
> > an essentially broken allocation mechanism.
> >
>
> I completely agree and I am happy with the current step by step approach
> of moving to a dynamic allocation scheme. Apologies, this wasn't
> conveyed appropriately.
>
> >> The proposed approach of exposing this via sysfs may (am not sure if
> >> this would be the case) break for the trace-id per sink change, as a
> >> sink could assign different trace-id for a CPU depending.
> >>
> >
> > If a path exists between a CPU and a sink - the current framework as
> > far as I can tell would not allow for a new path to be set up between
> > the cpu and another sink.
>
> e.g, if we have concurrent perf sessions, in the future with sink  based
> allocation :
>
> perf record -e cs_etm/@sink1/... payload1
> perf record -e cs_etm/@sink2/... payload2
> perf record -e cs_etm// ...      payload3
>
> The trace id allocated for first session for CPU0 *could* be different
> from that of the second or the third.

If these sessions run concurrently, then the same Trace ID will be
used for CPU0 in all the sessions.
We ensure this via notifications that a cs_etm session is starting or
stopping, and by keeping a refcount.
Only when the perf session refcount hits zero can Trace IDs be
released and re-used. Otherwise the association between CPU x and
Trace ID y is maintained - the first session using CPU0 will assign
the ID, and the last session to terminate will release the ID from
CPU0 (and any other IDs that were allocated during the sessions).
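
(Condensed from the allocator earlier in this series, for reference -
not a new API:)

void coresight_trace_id_perf_stop(void)
{
	unsigned long flags;

	spin_lock_irqsave(&id_map_lock, flags);
	perf_cs_etm_session_active--;
	if (!perf_cs_etm_session_active)
		coresight_trace_id_release_all_pending();
	spin_unlock_irqrestore(&id_map_lock, flags);
}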

> And it may be tricky to guarantee
> that the traceids may be unique across the sinks for a given CPU.
>




> Please note that the different perf sessions may be executing on
> different CPUs at the same time as long as they go to different sinks.
> So, reading the sysfs could only give out a single traceid, which may
> or may not be the correct one for a given "perf".

As above - reading the trace ID for a given CPU will work for
concurrent sessions.

>
> >
> > However, if we allow multiple paths per CPU, the implementation does
> > both allocate on read and allocate on enable. Both API functions take
> > a input of a trace ID allocation structure. At present this is global,
> > but if we need to introduce per sink allocation, then the mechanisms
> > for sink / ID table management will have to ensure that the correct
> > table is provided for the sink at the end of the path in each case.
> > Thus the API still works as long as you get the sink ID table
> > management correct. That is why it was designed to take the TraceID
> > tables as input to all the functions - it is independent of any per
> > sink management that might come later.
> >
> > My view is that any multi-sink system is likely to be multi-socket as
> > well - where different trace infrastructures exist per socket but need
> > to be handled by a single software infrastructure.
> >
> >> So, instead if we make the trace-id available in the perf (say, an new
> >> record format, PERF_RECORD_CS_ETM_TRACEID ?) record, we can rely on the
> >> new packet for the trace-id, irrespective of how that is allocated and
> >> remove the locking/linking of the trace-id with that of the sysfs.
> >
> > The issue here is how to we transmit the information from the driver
> > to the perf executable?
>
> Yes, exactly.
>
> > Even with a new record that problem still exists. The current perf
> > solves this problem by using the same static algorithm that the driver
> > uses - so no actual communication is necessary. A similar method is
> > used to synthesize the value of the etm control register. The command
> > line options are interpreted by perf, then the same data is passed to
> > the driver from perf through the event structures and reinterpreted -
> > hopefully in the same way. All the other values in the perf records
> > are read directly from sysfs.
>
> Yes, correct. Now, the trace-id is something that could change per
> session and with the move to sink based allocation, this could break.
> So,
> >
> >
> >> This
> >> is not something that exists today. (Ideally it would have been nice to
> >> have some additional fields in RECORD_AUXINFO, but we don't. Instead of
> >> breaking/extending that, we could add a new RECORD).
> >>
> >
> > The trace ID is currently part of RECORD_AUXTRACE_INFO is it not? And
> > we have extended this in the past for the additional requirements for
> > ETE - i.e. an additional ID register - read from sysfs, along with a
> > version number for the record.
>
> Sorry, I meant the RECORD_AUX (which perf gets emitted for each session
> of the ETM, with the offset/size and flags).
>
> There are:
>
> RECORD_AUXINFO -> perf created statically.
> RECORD_AUX     -> emitted for each "run" of ETM, offset, size, flags
> RECORD_AUXTRACE -> actual hw trace
>
> I see that there is already something that we could use;
>
>
>   /*
>    * Data written to the AUX area by hardware due to aux_output, may need
>    * to be matched to the event by an architecture-specific hardware ID.
>    * This records the hardware ID, but requires sample_id to provide the
>    * event ID. e.g. Intel PT uses this record to disambiguate PEBS-via-PT
>    * records from multiple events.
>    *
>    * struct {
>    *     struct perf_event_header        header;
>    *     u64                             hw_id;
>    *     struct sample_id                sample_id;
>    * };
>    */
>   PERF_RECORD_AUX_OUTPUT_HW_ID           = 21,
>
> My suggestion is to emit a record say :
>
> PERF_RECORD_AUX_OUTPUT_HW_ID for each CPU/ETM for a perf session. And
> have the perf report construct the TraceID map for each ETM at decode
> from the PERF_RECORD_AUX_OUTPUT_HW_ID records. That way it is future
> proof for the "perf" userspace to find the trace-id for a given ETM
> rather than reading the sysfs which could be problematic.
>

I think this is an interesting idea - we would effectively drop the
use of the Trace ID in AUXINFO and replace it with this new record -
presumably emitted from somewhere in the etm driver.
It is still a compatibility-breaking solution - in fact more so than
the current patch set. With the current patch set you need the driver
changes and the kernel perf changes to generate a usable file that
will work with earlier versions of the userspace perf report.
With this change you need changes to the drivers, kernel perf and
userspace perf.
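
For reference, the driver-side emission could be as small as the
sketch below. perf_report_aux_output_id() already exists (Intel PT
uses it); the hw_id encoding here is only an assumption and would
need to be defined as ABI:

#include <linux/perf_event.h>

/* Sketch: on event enable, publish the Trace ID allocated for this
 * CPU into the perf.data stream as a PERF_RECORD_AUX_OUTPUT_HW_ID
 * record, so decode can build its CPU -> Trace ID map from the file
 * rather than from AUXTRACE_INFO or sysfs. */
static void cs_etm_publish_trace_id(struct perf_event *event, u8 trace_id)
{
        u64 hw_id = trace_id;           /* illustrative encoding only */

        perf_report_aux_output_id(event, hw_id);
}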

However - this is a perf-only solution - it does not help when driving
trace from sysfs directly.

I think this could be done, but it is a separate task from the current
patch set - and could easily be added later if required. It involves
much more change to the user side of perf than is required at present.

Regards

Mike

> Suzuki
>
>
> >
> > Regards
> >
> > Mike
> >
> >> I believe the packet may need to be generated only once for a session
> >> and that will also allow the flexibility of moving trace-id allocation
> >> around (to a sink in the future).
> >>
> >> Thoughts ?
> >>
> >> Kind regards
> >> Suzuki
> >>
> >>
> >>>
> >>>
> >>> Applies to coresight/next [b54f53bc11a5]
> >>> Tested on DB410c
> >>>
> >>> Mike Leach (10):
> >>>     coresight: trace-id: Add API to dynamically assign trace ID values
> >>>     coresight: trace-id: Set up source trace ID map for system
> >>>     coresight: stm: Update STM driver to use Trace ID api
> >>>     coresight: etm4x: Use trace ID API to dynamically allocate trace ID
> >>>     coresight: etm3x: Use trace ID API to allocate IDs
> >>>     coresight: perf: traceid: Add perf notifiers for trace ID
> >>>     perf: cs-etm: Update event to read trace ID from sysfs
> >>>     coresight: Remove legacy Trace ID allocation mechanism
> >>>     coresight: etmX.X: stm: Remove unused legacy source trace ID ops
> >>>     coresight: trace-id: Add debug & test macros to trace id allocation
> >>>
> >>>    drivers/hwtracing/coresight/Makefile          |   2 +-
> >>>    drivers/hwtracing/coresight/coresight-core.c  |  64 ++---
> >>>    .../hwtracing/coresight/coresight-etm-perf.c  |  16 +-
> >>>    drivers/hwtracing/coresight/coresight-etm.h   |   3 +-
> >>>    .../coresight/coresight-etm3x-core.c          |  93 ++++---
> >>>    .../coresight/coresight-etm3x-sysfs.c         |  28 +-
> >>>    .../coresight/coresight-etm4x-core.c          |  63 ++++-
> >>>    .../coresight/coresight-etm4x-sysfs.c         |  32 ++-
> >>>    drivers/hwtracing/coresight/coresight-etm4x.h |   3 +
> >>>    drivers/hwtracing/coresight/coresight-priv.h  |   1 +
> >>>    drivers/hwtracing/coresight/coresight-stm.c   |  49 +---
> >>>    .../hwtracing/coresight/coresight-trace-id.c  | 255 ++++++++++++++++++
> >>>    .../hwtracing/coresight/coresight-trace-id.h  |  69 +++++
> >>>    include/linux/coresight-pmu.h                 |  12 -
> >>>    include/linux/coresight.h                     |   3 -
> >>>    tools/perf/arch/arm/util/cs-etm.c             |  12 +-
> >>>    16 files changed, 530 insertions(+), 175 deletions(-)
> >>>    create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
> >>>    create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> >>>
> >>
> >
> >
>


-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-22 14:27         ` Mike Leach
@ 2022-03-22 18:52           ` Suzuki K Poulose
  -1 siblings, 0 replies; 70+ messages in thread
From: Suzuki K Poulose @ 2022-03-22 18:52 UTC (permalink / raw)
  To: Mike Leach
  Cc: coresight, linux-arm-kernel, linux-kernel, mathieu.poirier,
	peterz, mingo, acme, linux-perf-users, leo.yan, James Clark

Hi Mike

On 22/03/2022 14:27, Mike Leach wrote:
> Hi Suzuki
> 
> On Tue, 22 Mar 2022 at 12:35, Suzuki Kuruppassery Poulose
> <suzuki.poulose@arm.com> wrote:
>>
>> On 22/03/2022 11:38, Mike Leach wrote:
>>> HI Suzuki,
>>>
>>> On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
>>> <suzuki.poulose@arm.com> wrote:
>>>>
>>>> + Cc: James Clark
>>>>
>>>> Hi Mike,
>>>>
>>>> On 08/03/2022 20:49, Mike Leach wrote:
>>>>> The current method for allocating trace source ID values to sources is
>>>>> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
>>>>> The STM is allocated ID 0x1.
>>>>>
>>>>> This fixed algorithm is used in both the CoreSight driver code, and by
>>>>> perf when writing the trace metadata in the AUXTRACE_INFO record.
>>>>>
>>>>> The method needs replacing as currently:-
>>>>> 1. It is inefficient in using available IDs.
>>>>> 2. Does not scale to larger systems with many cores and the algorithm
>>>>> has no limits so will generate invalid trace IDs for cpu number > 44.
>>>>
>>>> Thanks for addressing this issue.
>>>>
>>>>>
>>>>> Additionally requirements to allocate additional system IDs on some
>>>>> systems have been seen.
>>>>>
>>>>> This patch set  introduces an API that allows the allocation of trace IDs
>>>>> in a dynamic manner.
>>>>>
>>>>> Architecturally reserved IDs are never allocated, and the system is
>>>>> limited to allocating only valid IDs.
>>>>>
>>>>> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
>>>>> the new API.
>>>>>
>>>>> perf handling is changed so that the ID associated with the CPU is read
>>>>> from sysfs. The ID allocator is notified when perf events start and stop
>>>>> so CPU based IDs are kept constant throughout any perf session.
>>>>>
>>>>> For the ETMx.x devices IDs are allocated on certain events
>>>>> a) When using sysfs, an ID will be allocated on hardware enable, and freed
>>>>> when the sysfs reset is written.
>>>>> b) When using perf, ID is allocated on hardware enable, and freed on
>>>>> hardware disable.
>>>>>
>>>>> For both cases the ID is allocated when sysfs is read to get the current
>>>>> trace ID. This ensures that consistent decode metadata can be extracted
>>>>> from the system where this read occurs before device enable.
>>>>
>>>>
>>>>>
>>>>> Note: This patchset breaks backward compatibility for perf record.
>>>>> Because the method for generating the AUXTRACE_INFO meta data has
>>>>> changed, using an older perf record will result in metadata that
>>>>> does not match the trace IDs used in the recorded trace data.
>>>>> This mismatch will cause subsequent decode to fail. Older versions of
>>>>> perf will still be able to decode data generated by the updated system.
>>>>
>>>> I have some concerns over this and the future plans for the dynamic
>>>> allocation per sink. i.e., we are breaking/modifying the perf now to
>>>> accommodate the dynamic nature of the trace id of a given CPU/ETM.
>>>
>>> I don't beleive we have a choice for this - we cannot retain what is
>>> an essentially broken allocation mechanism.
>>>
>>
>> I completely agree and I am happy with the current step by step approach
>> of moving to a dynamic allocation scheme. Apologies, this wasn't
>> conveyed appropriately.
>>
>>>> The proposed approach of exposing this via sysfs may (am not sure if
>>>> this would be the case) break for the trace-id per sink change, as a
>>>> sink could assign different trace-id for a CPU depending.
>>>>
>>>
>>> If a path exists between a CPU and a sink - the current framework as
>>> far as I can tell would not allow for a new path to be set up between
>>> the cpu and another sink.
>>
>> e.g, if we have concurrent perf sessions, in the future with sink  based
>> allocation :
>>
>> perf record -e cs_etm/@sink1/... payload1
>> perf record -e cs_etm/@sink2/... payload2
>> perf record -e cs_etm// ...      payload3
>>
>> The trace id allocated for first session for CPU0 *could* be different
>> from that of the second or the third.
> 
> If these sessions run concurrently then the same Trace ID will be used
> for CPU0 for all the sessions.
> We ensure this by notifications that a cs_etm session is starting and
> stopping - and keep a refcount.

The scheme is fine now, with a global trace-id map. But with per-sink
allocation, this could cause problems.

e.g., there could be a situation where:

trace_id[CPU0][sink0] == trace_id[CPU1][sink1]

So if we have a session where both CPU0 and CPU1 trace to a common sink,
we get the trace mixed with no way of splitting it, since perf will
read the trace-id for CPU0 from sink0 and for CPU1 from sink1.
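
To put that in code terms - a self-contained sketch with hypothetical
names, not the allocator in the current patches (which use one global
map):

#include <linux/bitops.h>
#include <linux/errno.h>

/* one ID map per sink: each map allocates independently */
struct trace_id_map {
        unsigned long used_ids[BITS_TO_LONGS(128)];     /* 7-bit trace IDs */
};

static int get_cpu_trace_id(int cpu, struct trace_id_map *map)
{
        /* lowest free ID in this map; 0x00 and 0x70-0x7f are reserved.
         * @cpu is unused in this sketch - a real map would record the
         * CPU <-> ID binding. */
        int id = find_next_zero_bit(map->used_ids, 0x70, 1);

        if (id >= 0x70)
                return -EINVAL;
        set_bit(id, map->used_ids);
        return id;
}

/*
 * get_cpu_trace_id(0, &etr0_map) and get_cpu_trace_id(1, &etr1_map)
 * can both legitimately return e.g. 0x02 - but if CPU0 and CPU1 then
 * trace into one common sink, the decoder cannot split the streams.
 */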

So my point is, we are changing the ABI for perf to grab the TraceID
with your patches. And clearly this approach could break easily when
we extend to a sink-based idmap. So, let's make the ABI change for perf
scalable and bulletproof (as far as we can) by exposing this
information via the perf RECORD. That way any future changes in the
scheme won't affect perf, as long as it has reliable information
within each "record".


My point is, let us fix this once and for all, so that we don't
need to change this again. I understand this involves more work
in the perf tool, but I believe that is for the better.

Thoughts ?

Suzuki

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-22 18:52           ` Suzuki K Poulose
@ 2022-03-23 10:07             ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-23 10:07 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: coresight, linux-arm-kernel, linux-kernel, mathieu.poirier,
	peterz, mingo, acme, linux-perf-users, leo.yan, James Clark

Hi Suzuki,

On Tue, 22 Mar 2022 at 18:52, Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
>
> Hi Mike
>
> On 22/03/2022 14:27, Mike Leach wrote:
> > Hi Suzuki
> >
> > On Tue, 22 Mar 2022 at 12:35, Suzuki Kuruppassery Poulose
> > <suzuki.poulose@arm.com> wrote:
> >>
> >> On 22/03/2022 11:38, Mike Leach wrote:
> >>> HI Suzuki,
> >>>
> >>> On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
> >>> <suzuki.poulose@arm.com> wrote:
> >>>>
> >>>> + Cc: James Clark
> >>>>
> >>>> Hi Mike,
> >>>>
> >>>> On 08/03/2022 20:49, Mike Leach wrote:
> >>>>> The current method for allocating trace source ID values to sources is
> >>>>> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
> >>>>> The STM is allocated ID 0x1.
> >>>>>
> >>>>> This fixed algorithm is used in both the CoreSight driver code, and by
> >>>>> perf when writing the trace metadata in the AUXTRACE_INFO record.
> >>>>>
> >>>>> The method needs replacing as currently:-
> >>>>> 1. It is inefficient in using available IDs.
> >>>>> 2. Does not scale to larger systems with many cores and the algorithm
> >>>>> has no limits so will generate invalid trace IDs for cpu number > 44.
> >>>>
> >>>> Thanks for addressing this issue.
> >>>>
> >>>>>
> >>>>> Additionally requirements to allocate additional system IDs on some
> >>>>> systems have been seen.
> >>>>>
> >>>>> This patch set  introduces an API that allows the allocation of trace IDs
> >>>>> in a dynamic manner.
> >>>>>
> >>>>> Architecturally reserved IDs are never allocated, and the system is
> >>>>> limited to allocating only valid IDs.
> >>>>>
> >>>>> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
> >>>>> the new API.
> >>>>>
> >>>>> perf handling is changed so that the ID associated with the CPU is read
> >>>>> from sysfs. The ID allocator is notified when perf events start and stop
> >>>>> so CPU based IDs are kept constant throughout any perf session.
> >>>>>
> >>>>> For the ETMx.x devices IDs are allocated on certain events
> >>>>> a) When using sysfs, an ID will be allocated on hardware enable, and freed
> >>>>> when the sysfs reset is written.
> >>>>> b) When using perf, ID is allocated on hardware enable, and freed on
> >>>>> hardware disable.
> >>>>>
> >>>>> For both cases the ID is allocated when sysfs is read to get the current
> >>>>> trace ID. This ensures that consistent decode metadata can be extracted
> >>>>> from the system where this read occurs before device enable.
> >>>>
> >>>>
> >>>>>
> >>>>> Note: This patchset breaks backward compatibility for perf record.
> >>>>> Because the method for generating the AUXTRACE_INFO meta data has
> >>>>> changed, using an older perf record will result in metadata that
> >>>>> does not match the trace IDs used in the recorded trace data.
> >>>>> This mismatch will cause subsequent decode to fail. Older versions of
> >>>>> perf will still be able to decode data generated by the updated system.
> >>>>
> >>>> I have some concerns over this and the future plans for the dynamic
> >>>> allocation per sink. i.e., we are breaking/modifying the perf now to
> >>>> accommodate the dynamic nature of the trace id of a given CPU/ETM.
> >>>
> >>> I don't beleive we have a choice for this - we cannot retain what is
> >>> an essentially broken allocation mechanism.
> >>>
> >>
> >> I completely agree and I am happy with the current step by step approach
> >> of moving to a dynamic allocation scheme. Apologies, this wasn't
> >> conveyed appropriately.
> >>
> >>>> The proposed approach of exposing this via sysfs may (am not sure if
> >>>> this would be the case) break for the trace-id per sink change, as a
> >>>> sink could assign different trace-id for a CPU depending.
> >>>>
> >>>
> >>> If a path exists between a CPU and a sink - the current framework as
> >>> far as I can tell would not allow for a new path to be set up between
> >>> the cpu and another sink.
> >>
> >> e.g, if we have concurrent perf sessions, in the future with sink  based
> >> allocation :
> >>
> >> perf record -e cs_etm/@sink1/... payload1
> >> perf record -e cs_etm/@sink2/... payload2
> >> perf record -e cs_etm// ...      payload3
> >>
> >> The trace id allocated for first session for CPU0 *could* be different
> >> from that of the second or the third.
> >
> > If these sessions run concurrently then the same Trace ID will be used
> > for CPU0 for all the sessions.
> > We ensure this by notifications that a cs_etm session is starting and
> > stopping - and keep a refcount.
>
> The scheme is fine now, with a global trace-id map. But with per-sink
> allocation, this could cause problems.
>
> e.g., there could be a situation where:
>
> trace_id[CPU0][sink0] == trace_id[CPU1][sink1]
>
> So if we have a session where both CPU0 and CPU1 trace to a common sink,
> we get the trace mixed with no way of splitting them. As the perf will
> read the trace-id for CPU0 from that of sink0 and CPU1 from sink1.

I think we need to consider the CoreSight hardware topology here.

Any CPUx that can trace to a sink reachable by another CPUy must
always get trace IDs from the same pool as CPUy.

Consider the options for multi-sink topologies:-

CPU0->funnel0->ETF->ETR
CPU1--+^

Clearly - in-line sinks can never have simultaneous independent
sessions - the session into ETR traces through ETF

Now we could have replicators / programmable replicators -

ATB->Prog replicator->ETR0
                    +->ETR1

However, programmable replicators use trace ID for filtering - this is
effectively a single sink on the input, so once again the Trace IDs
must come from the same pool.

Now, we could have an independent per-cluster / per-socket topology:
Cluster 0
CPU0->funnel0->ETF0->ETR0
CPU1--+^

Cluster 1
CPU2->funnel1->ETF1->ETR1
CPU3--+^

Here clusters 0 & 1 can have independent sets of trace IDs, as their
respective cores can never trace to the same sink.

Finally, we have the ETE+TRBE 1:1 type topologies. These could avoid
allocating any trace ID at all when in 1:1 mode, which should probably
be a follow-on incremental addition to this initial set.

So, my conclusion when I was considering all this is that "per-sink"
trace ID allocation is in fact "per unique trace path set" allocation.
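
In implementation terms that means deriving the ID map from the path
rather than assuming one global map - a sketch only, where the map
type and the lookup helper below are assumptions:

#include <linux/coresight.h>
#include "coresight-priv.h"

struct coresight_trace_id_map;          /* per "trace path set" ID pool */

/* hypothetical: return the pool shared by all sources that can reach
 * this sink (sinks with any shared upstream path share one pool) */
struct coresight_trace_id_map *
coresight_trace_id_map_for_sink(struct coresight_device *sink);

static struct coresight_trace_id_map *
trace_id_map_for_path(struct list_head *path)
{
        struct coresight_device *sink = coresight_get_sink(path);

        return sink ? coresight_trace_id_map_for_sink(sink) : NULL;
}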



>
> So my point is, we are changing the ABI for perf to grab the TraceID
> with your patches. And clearly this approach could break easily when
> we extend to sink-based idmap. So, lets make the ABI change for perf
> scalable and bullet proof (as far as we can) by exposing this
> information via the perf RECORD. That way any future changes in the
> scheme won't affect the perf as long as it has a reliable information
> within each "record".
>
>
> My point is, let us fix this once and for all, so that we don't
> need to change this again. I understand this involves more work
> in the perf tool. I believe that is for better
>
> Thoughts ?
>

My preference is the incremental approach.
Fix the trace ID allocation issues that partners are having now, then
update to the perf record approach in a separate follow-up patchset.
Then, when we start to see systems that require it, update to using
the per-unique-path trace ID pools.

Regards

Mike

> Suzuki



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* RE: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-23 10:07             ` Mike Leach
@ 2022-03-23 10:35               ` Al Grant
  -1 siblings, 0 replies; 70+ messages in thread
From: Al Grant @ 2022-03-23 10:35 UTC (permalink / raw)
  To: Mike Leach, Suzuki Poulose
  Cc: coresight, linux-arm-kernel, linux-kernel, peterz, mingo, acme,
	linux-perf-users, James Clark

> -----Original Message-----
> From: Mike Leach <mike.leach@linaro.org>
> Sent: 23 March 2022 10:08
> To: Suzuki Poulose <Suzuki.Poulose@arm.com>
> Cc: coresight@lists.linaro.org; linux-arm-kernel@lists.infradead.org; linux-
> kernel@vger.kernel.org; peterz@infradead.org; mingo@redhat.com;
> acme@kernel.org; linux-perf-users@vger.kernel.org; James Clark
> <James.Clark@arm.com>
> Subject: Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID
> values
> 
> Hi Suzuki,
> 
> On Tue, 22 Mar 2022 at 18:52, Suzuki K Poulose <suzuki.poulose@arm.com>
> wrote:
> >
> > Hi Mike
> >
> > On 22/03/2022 14:27, Mike Leach wrote:
> > > Hi Suzuki
> > >
> > > On Tue, 22 Mar 2022 at 12:35, Suzuki Kuruppassery Poulose
> > > <suzuki.poulose@arm.com> wrote:
> > >>
> > >> On 22/03/2022 11:38, Mike Leach wrote:
> > >>> HI Suzuki,
> > >>>
> > >>> On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
> > >>> <suzuki.poulose@arm.com> wrote:
> > >>>>
> > >>>> + Cc: James Clark
> > >>>>
> > >>>> Hi Mike,
> > >>>>
> > >>>> On 08/03/2022 20:49, Mike Leach wrote:
> > >>>>> The current method for allocating trace source ID values to
> > >>>>> sources is to use a fixed algorithm for CPU based sources of
> (cpu_num * 2 + 0x10).
> > >>>>> The STM is allocated ID 0x1.
> > >>>>>
> > >>>>> This fixed algorithm is used in both the CoreSight driver code,
> > >>>>> and by perf when writing the trace metadata in the AUXTRACE_INFO
> record.
> > >>>>>
> > >>>>> The method needs replacing as currently:- 1. It is inefficient
> > >>>>> in using available IDs.
> > >>>>> 2. Does not scale to larger systems with many cores and the
> > >>>>> algorithm has no limits so will generate invalid trace IDs for cpu
> number > 44.
> > >>>>
> > >>>> Thanks for addressing this issue.
> > >>>>
> > >>>>>
> > >>>>> Additionally requirements to allocate additional system IDs on
> > >>>>> some systems have been seen.
> > >>>>>
> > >>>>> This patch set  introduces an API that allows the allocation of
> > >>>>> trace IDs in a dynamic manner.
> > >>>>>
> > >>>>> Architecturally reserved IDs are never allocated, and the system
> > >>>>> is limited to allocating only valid IDs.
> > >>>>>
> > >>>>> Each of the current trace sources ETM3.x, ETM4.x and STM is
> > >>>>> updated to use the new API.
> > >>>>>
> > >>>>> perf handling is changed so that the ID associated with the CPU
> > >>>>> is read from sysfs. The ID allocator is notified when perf
> > >>>>> events start and stop so CPU based IDs are kept constant
> throughout any perf session.
> > >>>>>
> > >>>>> For the ETMx.x devices IDs are allocated on certain events
> > >>>>> a) When using sysfs, an ID will be allocated on hardware enable,
> > >>>>> and freed when the sysfs reset is written.
> > >>>>> b) When using perf, ID is allocated on hardware enable, and
> > >>>>> freed on hardware disable.
> > >>>>>
> > >>>>> For both cases the ID is allocated when sysfs is read to get the
> > >>>>> current trace ID. This ensures that consistent decode metadata
> > >>>>> can be extracted from the system where this read occurs before
> device enable.
> > >>>>
> > >>>>
> > >>>>>
> > >>>>> Note: This patchset breaks backward compatibility for perf record.
> > >>>>> Because the method for generating the AUXTRACE_INFO meta data
> > >>>>> has changed, using an older perf record will result in metadata
> > >>>>> that does not match the trace IDs used in the recorded trace data.
> > >>>>> This mismatch will cause subsequent decode to fail. Older
> > >>>>> versions of perf will still be able to decode data generated by the
> updated system.
> > >>>>
> > >>>> I have some concerns over this and the future plans for the
> > >>>> dynamic allocation per sink. i.e., we are breaking/modifying the
> > >>>> perf now to accommodate the dynamic nature of the trace id of a
> given CPU/ETM.
> > >>>
> > >>> I don't beleive we have a choice for this - we cannot retain what
> > >>> is an essentially broken allocation mechanism.
> > >>>
> > >>
> > >> I completely agree and I am happy with the current step by step
> > >> approach of moving to a dynamic allocation scheme. Apologies, this
> > >> wasn't conveyed appropriately.
> > >>
> > >>>> The proposed approach of exposing this via sysfs may (am not sure
> > >>>> if this would be the case) break for the trace-id per sink
> > >>>> change, as a sink could assign different trace-id for a CPU depending.
> > >>>>
> > >>>
> > >>> If a path exists between a CPU and a sink - the current framework
> > >>> as far as I can tell would not allow for a new path to be set up
> > >>> between the cpu and another sink.
> > >>
> > >> e.g, if we have concurrent perf sessions, in the future with sink
> > >> based allocation :
> > >>
> > >> perf record -e cs_etm/@sink1/... payload1 perf record -e
> > >> cs_etm/@sink2/... payload2
> > >> perf record -e cs_etm// ...      payload3
> > >>
> > >> The trace id allocated for first session for CPU0 *could* be
> > >> different from that of the second or the third.
> > >
> > > If these sessions run concurrently then the same Trace ID will be
> > > used for CPU0 for all the sessions.
> > > We ensure this by notifications that a cs_etm session is starting
> > > and stopping - and keep a refcount.
> >
> > The scheme is fine now, with a global trace-id map. But with per-sink
> > allocation, this could cause problems.
> >
> > e.g., there could be a situation where:
> >
> > trace_id[CPU0][sink0] == trace_id[CPU1][sink1]
> >
> > So if we have a session where both CPU0 and CPU1 trace to a common
> > sink, we get the trace mixed with no way of splitting them. As the
> > perf will read the trace-id for CPU0 from that of sink0 and CPU1 from sink1.
> 
> I think we need to consider the CoreSight hardware topology here.
> 
> Any CPUx that can trace to a sink reachable by another CPUy must always
> get trace IDs from the same pool as CPUy.
> 
> Consider the options for multi sink topologies:-
> 
> CPU0->funnel0->ETF->ETR
> CPU1--+^
> 
> Clearly - in-line sinks can never have simultaneous independent sessions -
> the session into ETR traces through ETF
> 
> Now we could have replicators / programmable replicators -
> 
> ATB->Prog replicator->ETR0
>                                  +->ETR1
> 
> however programmable replicators use trace ID for filtering - this is
> effectively a single sink on the input, so once again the Trace IDs must come
> from the same pool.
> 
> Now, we could have independent per cluster / socket topology Cluster 0
> CPU0->funnel0->ETF0->ETR0
> CPU1--+^
> 
> Cluster 1
> CPU2->funnel1->ETF1->ETR1
> CPU3--+^
> 
> Here cluster 0 & 1 can have independent sets of trace IDs as their respective
> cores can never trace to the same sink.
> 
> Finally we have the ETE+TRBE 1:1 type topologies. These could actually not
> bother allocating any trace ID when in 1:1 mode, which should probably be a
> follow on incremental addition to this initial set.
> 
> So, my conclusion when I was considering all this is that "per-sink"
> trace Id allocation is in fact "per unique trace path set" allocation.

The id pools can't always be defined statically. E.g. if you have
128 CPUs, each with their own ETR, and also replicated into a funnel
network leading to a common ETR.

This topology supports (at least) two distinct modes: (a) all CPUs
enabled for tracing to their own ETRs; (b) a selection of CPUs
(up to some limit) combined together into the common ETR. Both are
valid dynamic configurations. Perf might have a preference on which
one to use, but both are valid.

But there's no static id pool that works for both. The pool for
(b) has to allocate to some arbitrary selection of the 128 CPUs,
from only around 120 usable ID numbers. The pool has to take account
of which CPUs are selected.
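
Any per-sink pool would therefore have to be populated from the CPUs
actually routed to that sink in the current configuration - roughly
(a sketch, with an assumed allocator API):

#include <linux/cpumask.h>

struct coresight_trace_id_map;
int coresight_trace_id_get_cpu_id(int cpu,
                                  struct coresight_trace_id_map *map); /* assumed */

/* Sketch: with only ~120 usable IDs and 128 possible CPUs, IDs can
 * only be handed out once the selection of CPUs routed to this sink
 * is known. */
static int alloc_ids_for_selected_cpus(struct coresight_trace_id_map *map,
                                       const struct cpumask *selected)
{
        int cpu, id;

        for_each_cpu(cpu, selected) {
                id = coresight_trace_id_get_cpu_id(cpu, map);
                if (id < 0)
                        return id;      /* selection too large for the ID space */
        }
        return 0;
}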

So your comment "any CPU that can trace..." has to be interpreted as
"any CPU that can trace in the currently configured dynamic topology"
rather than "any CPU that can be dynamically configured to trace..."...
is that what you meant?

Alternatively, we could just declare that such systems are too
complicated to support, and say that we wouldn't support the
use of a global sink that (statically) was reachable by 128 CPUs.

Al


> 
> 
> 
> >
> > So my point is, we are changing the ABI for perf to grab the TraceID
> > with your patches. And clearly this approach could break easily when
> > we extend to sink-based idmap. So, lets make the ABI change for perf
> > scalable and bullet proof (as far as we can) by exposing this
> > information via the perf RECORD. That way any future changes in the
> > scheme won't affect the perf as long as it has a reliable information
> > within each "record".
> >
> >
> > My point is, let us fix this once and for all, so that we don't need
> > to change this again. I understand this involves more work in the perf
> > tool. I believe that is for better
> >
> > Thoughts ?
> >
> 
> My preference is the incremental approach.
> Fix the trace ID allocation issues that partners are having now, then update
> to the perf record approach in a separate follow up patchset.
> Then when we start to see systems that require it - update to using the per-
> unique-path trace ID pools.
> 
> Regards
> 
> Mike
> 
> > Suzuki
> 
> 
> 
> --
> Mike Leach
> Principal Engineer, ARM Ltd.
> Manchester Design Centre. UK
> _______________________________________________
> CoreSight mailing list -- coresight@lists.linaro.org To unsubscribe send an
> email to coresight-leave@lists.linaro.org

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-23 10:07             ` Mike Leach
@ 2022-03-23 10:41               ` Suzuki Kuruppassery Poulose
  -1 siblings, 0 replies; 70+ messages in thread
From: Suzuki Kuruppassery Poulose @ 2022-03-23 10:41 UTC (permalink / raw)
  To: Mike Leach
  Cc: coresight, linux-arm-kernel, linux-kernel, mathieu.poirier,
	peterz, mingo, acme, linux-perf-users, leo.yan, James Clark

Hi Mike

On 23/03/2022 10:07, Mike Leach wrote:
> Hi Suzuki,
> 
> On Tue, 22 Mar 2022 at 18:52, Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
>>
>> Hi Mike
>>
>> On 22/03/2022 14:27, Mike Leach wrote:
>>> Hi Suzuki
>>>
>>> On Tue, 22 Mar 2022 at 12:35, Suzuki Kuruppassery Poulose
>>> <suzuki.poulose@arm.com> wrote:
>>>>
>>>> On 22/03/2022 11:38, Mike Leach wrote:
>>>>> HI Suzuki,
>>>>>
>>>>> On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
>>>>> <suzuki.poulose@arm.com> wrote:
>>>>>>
>>>>>> + Cc: James Clark
>>>>>>
>>>>>> Hi Mike,
>>>>>>
>>>>>> On 08/03/2022 20:49, Mike Leach wrote:
>>>>>>> The current method for allocating trace source ID values to sources is
>>>>>>> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
>>>>>>> The STM is allocated ID 0x1.
>>>>>>>
>>>>>>> This fixed algorithm is used in both the CoreSight driver code, and by
>>>>>>> perf when writing the trace metadata in the AUXTRACE_INFO record.
>>>>>>>
>>>>>>> The method needs replacing as currently:-
>>>>>>> 1. It is inefficient in using available IDs.
>>>>>>> 2. Does not scale to larger systems with many cores and the algorithm
>>>>>>> has no limits so will generate invalid trace IDs for cpu number > 44.
>>>>>>
>>>>>> Thanks for addressing this issue.
>>>>>>
>>>>>>>
>>>>>>> Additionally requirements to allocate additional system IDs on some
>>>>>>> systems have been seen.
>>>>>>>
>>>>>>> This patch set  introduces an API that allows the allocation of trace IDs
>>>>>>> in a dynamic manner.
>>>>>>>
>>>>>>> Architecturally reserved IDs are never allocated, and the system is
>>>>>>> limited to allocating only valid IDs.
>>>>>>>
>>>>>>> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
>>>>>>> the new API.
>>>>>>>
>>>>>>> perf handling is changed so that the ID associated with the CPU is read
>>>>>>> from sysfs. The ID allocator is notified when perf events start and stop
>>>>>>> so CPU based IDs are kept constant throughout any perf session.
>>>>>>>
>>>>>>> For the ETMx.x devices IDs are allocated on certain events
>>>>>>> a) When using sysfs, an ID will be allocated on hardware enable, and freed
>>>>>>> when the sysfs reset is written.
>>>>>>> b) When using perf, ID is allocated on hardware enable, and freed on
>>>>>>> hardware disable.
>>>>>>>
>>>>>>> For both cases the ID is allocated when sysfs is read to get the current
>>>>>>> trace ID. This ensures that consistent decode metadata can be extracted
>>>>>>> from the system where this read occurs before device enable.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Note: This patchset breaks backward compatibility for perf record.
>>>>>>> Because the method for generating the AUXTRACE_INFO meta data has
>>>>>>> changed, using an older perf record will result in metadata that
>>>>>>> does not match the trace IDs used in the recorded trace data.
>>>>>>> This mismatch will cause subsequent decode to fail. Older versions of
>>>>>>> perf will still be able to decode data generated by the updated system.
>>>>>>
>>>>>> I have some concerns over this and the future plans for the dynamic
>>>>>> allocation per sink. i.e., we are breaking/modifying the perf now to
>>>>>> accommodate the dynamic nature of the trace id of a given CPU/ETM.
>>>>>
>>>>> I don't beleive we have a choice for this - we cannot retain what is
>>>>> an essentially broken allocation mechanism.
>>>>>
>>>>
>>>> I completely agree and I am happy with the current step by step approach
>>>> of moving to a dynamic allocation scheme. Apologies, this wasn't
>>>> conveyed appropriately.
>>>>
>>>>>> The proposed approach of exposing this via sysfs may (am not sure if
>>>>>> this would be the case) break for the trace-id per sink change, as a
>>>>>> sink could assign different trace-id for a CPU depending.
>>>>>>
>>>>>
>>>>> If a path exists between a CPU and a sink - the current framework as
>>>>> far as I can tell would not allow for a new path to be set up between
>>>>> the cpu and another sink.
>>>>
>>>> e.g, if we have concurrent perf sessions, in the future with sink  based
>>>> allocation :
>>>>
>>>> perf record -e cs_etm/@sink1/... payload1
>>>> perf record -e cs_etm/@sink2/... payload2
>>>> perf record -e cs_etm// ...      payload3
>>>>
>>>> The trace id allocated for first session for CPU0 *could* be different
>>>> from that of the second or the third.
>>>
>>> If these sessions run concurrently then the same Trace ID will be used
>>> for CPU0 for all the sessions.
>>> We ensure this by notifications that a cs_etm session is starting and
>>> stopping - and keep a refcount.
>>
>> The scheme is fine now, with a global trace-id map. But with per-sink
>> allocation, this could cause problems.
>>
>> e.g., there could be a situation where:
>>
>> trace_id[CPU0][sink0] == trace_id[CPU1][sink1]
>>
>> So if we have a session where both CPU0 and CPU1 trace to a common sink,
>> we get the trace mixed with no way of splitting them. As the perf will
>> read the trace-id for CPU0 from that of sink0 and CPU1 from sink1.
> 
> I think we need to consider the CoreSight hardware topology here.
> 
> Any CPUx that can trace to a sink reachable by another CPUy must
> always get trace IDs from the same pool as CPUy.
> 
> Consider the options for multi sink topologies:-
> 
> CPU0->funnel0->ETF->ETR
> CPU1--+^
> 
> Clearly - in-line sinks can never have simultaneous independent
> sessions - the session into ETR traces through ETF
> 
> Now we could have replicators / programmable replicators -
> 
> ATB->Prog replicator->ETR0
>                                   +->ETR1
> 
> however programmable replicators use trace ID for filtering - this is
> effectively a single sink on the input, so once again the Trace IDs
> must come from the same pool.
> 
> Now, we could have independent per cluster / socket topology
> Cluster 0
> CPU0->funnel0->ETF0->ETR0
> CPU1--+^
> 
> Cluster 1
> CPU2->funnel1->ETF1->ETR1
> CPU3--+^
> 

What if the ETR were a common one? i.e.

Cluster0
CPU0 -> ETF0 .....
                    \
Cluster1            -- ETR0
CPU1 -> ETF1 ..... /

And let's say there are 3 perf sessions in parallel, started in the
order below:

perf record -e cs_etm/@etf0/ app1 # CPU0 gets a trace-id[etf0] -> 0x50
perf record -e cs_etm/@etf1/ app2 # CPU1 gets a trace-id[etf1] -> 0x50
perf record -e cs_etm/@etr/  app3 # CPU0 and CPU1 both use the existing
                                  # trace ids from the allocations.

So, when app3 threads are scheduled on CPU0 & CPU1, we get the trace in
ETR with the same trace-id of 0x50.
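
To make the failure mode concrete, a minimal user-space sketch
(illustrative only, not kernel code; the 0x50 starting value just
mirrors the example above) of two uncoordinated per-sink maps handing
out the same ID to different CPUs:

#include <stdio.h>

#define FIRST_DYNAMIC_ID 0x50   /* arbitrary start, mirrors the example */

struct sink_idmap {
        const char *name;
        int next_free;
};

static int sink_alloc_id(struct sink_idmap *sink)
{
        return sink->next_free++;       /* no coordination between sinks */
}

int main(void)
{
        struct sink_idmap etf0 = { "etf0", FIRST_DYNAMIC_ID };
        struct sink_idmap etf1 = { "etf1", FIRST_DYNAMIC_ID };

        int cpu0_id = sink_alloc_id(&etf0);     /* session 1: CPU0 -> etf0 */
        int cpu1_id = sink_alloc_id(&etf1);     /* session 2: CPU1 -> etf1 */

        printf("CPU0 via etf0 -> 0x%02x, CPU1 via etf1 -> 0x%02x\n",
               cpu0_id, cpu1_id);
        if (cpu0_id == cpu1_id)
                printf("a third session on a shared ETR cannot tell them apart\n");
        return 0;
}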

Suzuki

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-23 10:35               ` Al Grant
@ 2022-03-23 11:05                 ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-23 11:05 UTC (permalink / raw)
  To: Al Grant
  Cc: Suzuki Poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, James Clark

Hi Al,

On Wed, 23 Mar 2022 at 10:36, Al Grant <Al.Grant@arm.com> wrote:
>
> > -----Original Message-----
> > From: Mike Leach <mike.leach@linaro.org>
> > Sent: 23 March 2022 10:08
> > To: Suzuki Poulose <Suzuki.Poulose@arm.com>
> > Cc: coresight@lists.linaro.org; linux-arm-kernel@lists.infradead.org; linux-
> > kernel@vger.kernel.org; peterz@infradead.org; mingo@redhat.com;
> > acme@kernel.org; linux-perf-users@vger.kernel.org; James Clark
> > <James.Clark@arm.com>
> > Subject: Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID
> > values
> >
> > Hi Suzuki,
> >
> > On Tue, 22 Mar 2022 at 18:52, Suzuki K Poulose <suzuki.poulose@arm.com>
> > wrote:
> > >
> > > Hi Mike
> > >
> > > On 22/03/2022 14:27, Mike Leach wrote:
> > > > Hi Suzuki
> > > >
> > > > On Tue, 22 Mar 2022 at 12:35, Suzuki Kuruppassery Poulose
> > > > <suzuki.poulose@arm.com> wrote:
> > > >>
> > > >> On 22/03/2022 11:38, Mike Leach wrote:
> > > >>> HI Suzuki,
> > > >>>
> > > >>> On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
> > > >>> <suzuki.poulose@arm.com> wrote:
> > > >>>>
> > > >>>> + Cc: James Clark
> > > >>>>
> > > >>>> Hi Mike,
> > > >>>>
> > > >>>> On 08/03/2022 20:49, Mike Leach wrote:
> > > >>>>> The current method for allocating trace source ID values to
> > > >>>>> sources is to use a fixed algorithm for CPU based sources of
> > (cpu_num * 2 + 0x10).
> > > >>>>> The STM is allocated ID 0x1.
> > > >>>>>
> > > >>>>> This fixed algorithm is used in both the CoreSight driver code,
> > > >>>>> and by perf when writing the trace metadata in the AUXTRACE_INFO
> > record.
> > > >>>>>
> > > >>>>> The method needs replacing as currently:- 1. It is inefficient
> > > >>>>> in using available IDs.
> > > >>>>> 2. Does not scale to larger systems with many cores and the
> > > >>>>> algorithm has no limits so will generate invalid trace IDs for cpu
> > number > 44.
> > > >>>>
> > > >>>> Thanks for addressing this issue.
> > > >>>>
> > > >>>>>
> > > >>>>> Additionally requirements to allocate additional system IDs on
> > > >>>>> some systems have been seen.
> > > >>>>>
> > > >>>>> This patch set  introduces an API that allows the allocation of
> > > >>>>> trace IDs in a dynamic manner.
> > > >>>>>
> > > >>>>> Architecturally reserved IDs are never allocated, and the system
> > > >>>>> is limited to allocating only valid IDs.
> > > >>>>>
> > > >>>>> Each of the current trace sources ETM3.x, ETM4.x and STM is
> > > >>>>> updated to use the new API.
> > > >>>>>
> > > >>>>> perf handling is changed so that the ID associated with the CPU
> > > >>>>> is read from sysfs. The ID allocator is notified when perf
> > > >>>>> events start and stop so CPU based IDs are kept constant
> > throughout any perf session.
> > > >>>>>
> > > >>>>> For the ETMx.x devices IDs are allocated on certain events
> > > >>>>> a) When using sysfs, an ID will be allocated on hardware enable,
> > > >>>>> and freed when the sysfs reset is written.
> > > >>>>> b) When using perf, ID is allocated on hardware enable, and
> > > >>>>> freed on hardware disable.
> > > >>>>>
> > > >>>>> For both cases the ID is allocated when sysfs is read to get the
> > > >>>>> current trace ID. This ensures that consistent decode metadata
> > > >>>>> can be extracted from the system where this read occurs before
> > device enable.
> > > >>>>
> > > >>>>
> > > >>>>>
> > > >>>>> Note: This patchset breaks backward compatibility for perf record.
> > > >>>>> Because the method for generating the AUXTRACE_INFO meta data
> > > >>>>> has changed, using an older perf record will result in metadata
> > > >>>>> that does not match the trace IDs used in the recorded trace data.
> > > >>>>> This mismatch will cause subsequent decode to fail. Older
> > > >>>>> versions of perf will still be able to decode data generated by the
> > updated system.
> > > >>>>
> > > >>>> I have some concerns over this and the future plans for the
> > > >>>> dynamic allocation per sink. i.e., we are breaking/modifying the
> > > >>>> perf now to accommodate the dynamic nature of the trace id of a
> > given CPU/ETM.
> > > >>>
> > > >>> I don't beleive we have a choice for this - we cannot retain what
> > > >>> is an essentially broken allocation mechanism.
> > > >>>
> > > >>
> > > >> I completely agree and I am happy with the current step by step
> > > >> approach of moving to a dynamic allocation scheme. Apologies, this
> > > >> wasn't conveyed appropriately.
> > > >>
> > > >>>> The proposed approach of exposing this via sysfs may (am not sure
> > > >>>> if this would be the case) break for the trace-id per sink
> > > >>>> change, as a sink could assign different trace-id for a CPU depending.
> > > >>>>
> > > >>>
> > > >>> If a path exists between a CPU and a sink - the current framework
> > > >>> as far as I can tell would not allow for a new path to be set up
> > > >>> between the cpu and another sink.
> > > >>
> > > >> e.g, if we have concurrent perf sessions, in the future with sink
> > > >> based allocation :
> > > >>
> > > >> perf record -e cs_etm/@sink1/... payload1 perf record -e
> > > >> cs_etm/@sink2/... payload2
> > > >> perf record -e cs_etm// ...      payload3
> > > >>
> > > >> The trace id allocated for first session for CPU0 *could* be
> > > >> different from that of the second or the third.
> > > >
> > > > If these sessions run concurrently then the same Trace ID will be
> > > > used for CPU0 for all the sessions.
> > > > We ensure this by notifications that a cs_etm session is starting
> > > > and stopping - and keep a refcount.
> > >
> > > The scheme is fine now, with a global trace-id map. But with per-sink
> > > allocation, this could cause problems.
> > >
> > > e.g., there could be a situation where:
> > >
> > > trace_id[CPU0][sink0] == trace_id[CPU1][sink1]
> > >
> > > So if we have a session where both CPU0 and CPU1 trace to a common
> > > sink, we get the trace mixed with no way of splitting them. As the
> > > perf will read the trace-id for CPU0 from that of sink0 and CPU1 from sink1.
> >
> > I think we need to consider the CoreSight hardware topology here.
> >
> > Any CPUx that can trace to a sink reachable by another CPUy must always
> > get trace IDs from the same pool as CPUy.
> >
> > Consider the options for multi sink topologies:-
> >
> > CPU0->funnel0->ETF->ETR
> > CPU1--+^
> >
> > Clearly - in-line sinks can never have simultaneous independent sessions -
> > the session into ETR traces through ETF
> >
> > Now we could have replicators / programmable replicators -
> >
> > ATB->Prog replicator->ETR0
> >                                  +->ETR1
> >
> > however programmable replicators use trace ID for filtering - this is
> > effectively a single sink on the input, so once again the Trace IDs must come
> > from the same pool.
> >
> > Now, we could have independent per cluster / socket topology Cluster 0
> > CPU0->funnel0->ETF0->ETR0
> > CPU1--+^
> >
> > Cluster 1
> > CPU2->funnel1->ETF1->ETR1
> > CPU3--+^
> >
> > Here cluster 0 & 1 can have independent sets of trace IDs as their respective
> > cores can never trace to the same sink.
> >
> > Finally we have the ETE+TRBE 1:1 type topologies. These could actually not
> > bother allocating any trace ID when in 1:1 mode, which should probably be a
> > follow on incremental addition to this initial set.
> >
> > So, my conclusion when I was considering all this is that "per-sink"
> > trace Id allocation is in fact "per unique trace path set" allocation.
>
> The id pools can't always be defined statically in all situations.
> E.g. if you have 128 CPUs each with their own ETR, and also
> replicated into a funnel network leading to a common ETR.
>

Agreed - and this is an issue that will need to be considered
when implementing multiple ID pools.
This is a possibility when ETE traces with TRBE switched off.


> This topology supports (at least) two distinct modes: (a) all CPUs
> enabled for tracing to their own ETRs (b) a selection of CPUs
> (up to some limit), combined together. Both are valid dynamic
> configurations. Perf might have a preference on which one to use,
> but both are valid.
>
> But there's no static id pool that works for both. The pool for
> (b) has to allocate to some random selection of 128 CPUs,
> from only around 120 numbers. The pool has to take account of
> which CPUs are selected.
>

Also agreed - it is entirely possible to run out of IDs. The choice
then is to not trace on any CPU, or simply to trace on the first N
that get valid IDs.
That has to be a function of the perf handling code. IDs can be
allocated while perf is running through the code to set up the trace
paths from the selected cores.
The present system "allows" trace from all cores by allocating
invalid trace IDs once we get past too many cores (46 at present).
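
As a rough sketch of what "trace on the first N that get valid IDs"
could look like (user-space illustration only, not the driver code; it
assumes IDs 0x01-0x6f are usable, with 0x00 and 0x70-0x7f reserved by
the architecture): allocation fails cleanly when the pool is drained
and the remaining paths are simply left disabled.

#include <stdio.h>

#define TRACE_ID_FIRST 0x01
#define TRACE_ID_LAST  0x6f     /* 111 usable values under that assumption */

static int next_id = TRACE_ID_FIRST;

static int trace_id_alloc(void)
{
        return next_id <= TRACE_ID_LAST ? next_id++ : -1;
}

int main(void)
{
        int traced = 0, skipped = 0;

        for (int cpu = 0; cpu < 128; cpu++) {
                if (trace_id_alloc() < 0)
                        skipped++;      /* no ID left: leave this path disabled */
                else
                        traced++;
        }
        printf("%d CPUs traced, %d skipped for lack of a trace ID\n",
               traced, skipped);
        return 0;
}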


> So your comment "any CPU that can trace..." has to be interpreted as
> "any CPU that can trace in the currently configured dynamic topology"
> rather than "any CPU that can be dynamically configured to trace..."...
> is that what you meant?

Pretty much. The real problem with "per-sink" / Trace ID pools is
recognizing the currently configured topology when this can be dynamic.
I anticipated that the metadata attached to allocating trace IDs will
have to expand to recognise this once we get to advanced topologies -
which is why the API was designed to always pass in the metadata.
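
Something along these lines is the shape I mean - the struct and names
below are hypothetical, not the actual patchset API - callers always
hand over the path metadata, so a future per-pool allocator only has
to start honouring fields it already receives:

#include <stdio.h>

/* Hypothetical metadata blob handed to every allocation call. A global
 * pool ignores the sink; a future per-path-set pool would key off it.
 */
struct trace_id_md {
        int cpu;                /* source CPU, or -1 for system sources */
        const char *sink;       /* sink the configured path reaches */
};

static int trace_id_get(const struct trace_id_md *md)
{
        static int next = 0x10; /* stand-in for a real global ID map */

        (void)md->sink;         /* unused today, available tomorrow */
        return next++;
}

int main(void)
{
        struct trace_id_md md = { .cpu = 0, .sink = "etr0" };

        printf("cpu%d via %s -> trace ID 0x%02x\n",
               md.cpu, md.sink, trace_id_get(&md));
        return 0;
}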


>
> Alternatively, we could just declare that such systems are too
> complicated to support, and say that we wouldn't support the
> use of a global sink that (statically) was reachable by 128 CPUs.
>

I think that we can support these systems - it is just that the user /
perf will have to accept that there is a limit to the number of trace
IDs that can be allocated, and hence to the number of CPUs that will
actually trace the event.
That will mean selecting a set of CPUs to trace on, or not scheduling
the event on a core that cannot trace due to the lack of a trace ID.
The infrastructure will not enable a trace path if that path cannot
allocate a unique ID for the current trace ID set. (This has always
been the case - the only change is that it is now handled by the
dynamic allocator.)

Alternatively we could simply say that Trace IDs are a limited
resource - we will only support using a single pool at once (111
possible IDs when you take into account the reserved values) - which
is the situation with this patchset.


Mike

> Al
>
>
> >
> >
> >
> > >
> > > So my point is, we are changing the ABI for perf to grab the TraceID
> > > with your patches. And clearly this approach could break easily when
> > > we extend to sink-based idmap. So, lets make the ABI change for perf
> > > scalable and bullet proof (as far as we can) by exposing this
> > > information via the perf RECORD. That way any future changes in the
> > > scheme won't affect the perf as long as it has a reliable information
> > > within each "record".
> > >
> > >
> > > My point is, let us fix this once and for all, so that we don't need
> > > to change this again. I understand this involves more work in the perf
> > > tool. I believe that is for better
> > >
> > > Thoughts ?
> > >
> >
> > My preference is the incremental approach.
> > Fix the trace ID allocation issues that partners are having now, then update
> > to the perf record approach in a separate follow up patchset.
> > Then when we start to see systems that require it - update to using the per-
> > unique-path trace ID pools.
> >
> > Regards
> >
> > Mike
> >
> > > Suzuki
> >
> >
> >
> > --
> > Mike Leach
> > Principal Engineer, ARM Ltd.
> > Manchester Design Centre. UK
> > _______________________________________________
> > CoreSight mailing list -- coresight@lists.linaro.org To unsubscribe send an
> > email to coresight-leave@lists.linaro.org



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-23 10:41               ` Suzuki Kuruppassery Poulose
@ 2022-03-23 11:35                 ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-03-23 11:35 UTC (permalink / raw)
  To: Suzuki Kuruppassery Poulose
  Cc: coresight, linux-arm-kernel, linux-kernel, mathieu.poirier,
	peterz, mingo, acme, linux-perf-users, leo.yan, James Clark

Hi Suzuki

On Wed, 23 Mar 2022 at 10:41, Suzuki Kuruppassery Poulose
<suzuki.poulose@arm.com> wrote:
>
> Hi Mike
>
> On 23/03/2022 10:07, Mike Leach wrote:
> > Hi Suzuki,
> >
> > On Tue, 22 Mar 2022 at 18:52, Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
> >>
> >> Hi Mike
> >>
> >> On 22/03/2022 14:27, Mike Leach wrote:
> >>> Hi Suzuki
> >>>
> >>> On Tue, 22 Mar 2022 at 12:35, Suzuki Kuruppassery Poulose
> >>> <suzuki.poulose@arm.com> wrote:
> >>>>
> >>>> On 22/03/2022 11:38, Mike Leach wrote:
> >>>>> HI Suzuki,
> >>>>>
> >>>>> On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
> >>>>> <suzuki.poulose@arm.com> wrote:
> >>>>>>
> >>>>>> + Cc: James Clark
> >>>>>>
> >>>>>> Hi Mike,
> >>>>>>
> >>>>>> On 08/03/2022 20:49, Mike Leach wrote:
> >>>>>>> The current method for allocating trace source ID values to sources is
> >>>>>>> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
> >>>>>>> The STM is allocated ID 0x1.
> >>>>>>>
> >>>>>>> This fixed algorithm is used in both the CoreSight driver code, and by
> >>>>>>> perf when writing the trace metadata in the AUXTRACE_INFO record.
> >>>>>>>
> >>>>>>> The method needs replacing as currently:-
> >>>>>>> 1. It is inefficient in using available IDs.
> >>>>>>> 2. Does not scale to larger systems with many cores and the algorithm
> >>>>>>> has no limits so will generate invalid trace IDs for cpu number > 44.
> >>>>>>
> >>>>>> Thanks for addressing this issue.
> >>>>>>
> >>>>>>>
> >>>>>>> Additionally requirements to allocate additional system IDs on some
> >>>>>>> systems have been seen.
> >>>>>>>
> >>>>>>> This patch set  introduces an API that allows the allocation of trace IDs
> >>>>>>> in a dynamic manner.
> >>>>>>>
> >>>>>>> Architecturally reserved IDs are never allocated, and the system is
> >>>>>>> limited to allocating only valid IDs.
> >>>>>>>
> >>>>>>> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
> >>>>>>> the new API.
> >>>>>>>
> >>>>>>> perf handling is changed so that the ID associated with the CPU is read
> >>>>>>> from sysfs. The ID allocator is notified when perf events start and stop
> >>>>>>> so CPU based IDs are kept constant throughout any perf session.
> >>>>>>>
> >>>>>>> For the ETMx.x devices IDs are allocated on certain events
> >>>>>>> a) When using sysfs, an ID will be allocated on hardware enable, and freed
> >>>>>>> when the sysfs reset is written.
> >>>>>>> b) When using perf, ID is allocated on hardware enable, and freed on
> >>>>>>> hardware disable.
> >>>>>>>
> >>>>>>> For both cases the ID is allocated when sysfs is read to get the current
> >>>>>>> trace ID. This ensures that consistent decode metadata can be extracted
> >>>>>>> from the system where this read occurs before device enable.
> >>>>>>
> >>>>>>
> >>>>>>>
> >>>>>>> Note: This patchset breaks backward compatibility for perf record.
> >>>>>>> Because the method for generating the AUXTRACE_INFO meta data has
> >>>>>>> changed, using an older perf record will result in metadata that
> >>>>>>> does not match the trace IDs used in the recorded trace data.
> >>>>>>> This mismatch will cause subsequent decode to fail. Older versions of
> >>>>>>> perf will still be able to decode data generated by the updated system.
> >>>>>>
> >>>>>> I have some concerns over this and the future plans for the dynamic
> >>>>>> allocation per sink. i.e., we are breaking/modifying the perf now to
> >>>>>> accommodate the dynamic nature of the trace id of a given CPU/ETM.
> >>>>>
> >>>>> I don't beleive we have a choice for this - we cannot retain what is
> >>>>> an essentially broken allocation mechanism.
> >>>>>
> >>>>
> >>>> I completely agree and I am happy with the current step by step approach
> >>>> of moving to a dynamic allocation scheme. Apologies, this wasn't
> >>>> conveyed appropriately.
> >>>>
> >>>>>> The proposed approach of exposing this via sysfs may (am not sure if
> >>>>>> this would be the case) break for the trace-id per sink change, as a
> >>>>>> sink could assign different trace-id for a CPU depending.
> >>>>>>
> >>>>>
> >>>>> If a path exists between a CPU and a sink - the current framework as
> >>>>> far as I can tell would not allow for a new path to be set up between
> >>>>> the cpu and another sink.
> >>>>
> >>>> e.g, if we have concurrent perf sessions, in the future with sink  based
> >>>> allocation :
> >>>>
> >>>> perf record -e cs_etm/@sink1/... payload1
> >>>> perf record -e cs_etm/@sink2/... payload2
> >>>> perf record -e cs_etm// ...      payload3
> >>>>
> >>>> The trace id allocated for first session for CPU0 *could* be different
> >>>> from that of the second or the third.
> >>>
> >>> If these sessions run concurrently then the same Trace ID will be used
> >>> for CPU0 for all the sessions.
> >>> We ensure this by notifications that a cs_etm session is starting and
> >>> stopping - and keep a refcount.
> >>
> >> The scheme is fine now, with a global trace-id map. But with per-sink
> >> allocation, this could cause problems.
> >>
> >> e.g., there could be a situation where:
> >>
> >> trace_id[CPU0][sink0] == trace_id[CPU1][sink1]
> >>
> >> So if we have a session where both CPU0 and CPU1 trace to a common sink,
> >> we get the trace mixed with no way of splitting them. As the perf will
> >> read the trace-id for CPU0 from that of sink0 and CPU1 from sink1.
> >
> > I think we need to consider the CoreSight hardware topology here.
> >
> > Any CPUx that can trace to a sink reachable by another CPUy must
> > always get trace IDs from the same pool as CPUy.
> >
> > Consider the options for multi sink topologies:-
> >
> > CPU0->funnel0->ETF->ETR
> > CPU1--+^
> >
> > Clearly - in-line sinks can never have simultaneous independent
> > sessions - the session into ETR traces through ETF
> >
> > Now we could have replicators / programmable replicators -
> >
> > ATB->Prog replicator->ETR0
> >                                   +->ETR1
> >
> > however programmable replicators use trace ID for filtering - this is
> > effectively a single sink on the input, so once again the Trace IDs
> > must come from the same pool.
> >
> > Now, we could have independent per cluster / socket topology
> > Cluster 0
> > CPU0->funnel0->ETF0->ETR0
> > CPU1--+^
> >
> > Cluster 1
> > CPU2->funnel1->ETF1->ETR1
> > CPU3--+^
> >
>
> What if the ETR was a common one ? i.e.
>
> Cluster0
> CPU0 -> ETF0 .....
>                     \
> Cluster1            -- ETR0
> CPU1 -> ETF1 ..... /
>
> And lets there are 3 perf sessions in parallel, started in the
> order below
>
> perf record -e cs_etm/@etf0/ app1 # CPU0 gets a trace-id[etf0] -> 0x50
> perf record -e cs_etm/@etf1/ app2 # CPU1 gets a trace-id[etf1] -> 0x50
> perf record -e cs_etm/@etr/  app3 # CPU0 and CPU1 both use the existing
> trace ids from the allocations.
>

This could be treated as a single combined topology - as soon as any
sink is reachable by CPU0 and CPU1 then we have to treat this as
having a single pool of trace IDs and so CPU0 and CPU1 cannot have the
same ID.
Alternatively, once the allocation metadata is expanded to recognize
trace ID pools - it is entirely possible to ensure that the CPU / ID
fixing is done on a per-pool basis.

One of the reasons we need to ensure that the CPU / ID allocation
remains constant is that an event can be scheduled on a CPU multiple
times for a single aux buffer - resulting in multiple trace blocks in
the buffer / multiple buffers in the data file, so we cannot have the
ID change mid buffer / file without significant changes to the decode
process and tracking of CPU / ID changes on an intra-buffer basis.
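
A tiny user-space sketch of that constraint (illustrative names only,
not the driver code): the first enable for a CPU allocates the ID,
later enables just take a reference, and the ID is only dropped when
the last user goes away - so every block that lands in the aux buffer
carries the same ID.

#include <stdio.h>

#define MAX_CPU 4

static int cpu_id[MAX_CPU];     /* 0 means no ID currently allocated */
static int cpu_refs[MAX_CPU];
static int next_free = 0x10;    /* stand-in for a real ID map */

static int trace_id_get_cpu(int cpu)
{
        if (!cpu_refs[cpu]++)
                cpu_id[cpu] = next_free++;      /* first user allocates */
        return cpu_id[cpu];                     /* everyone else reuses it */
}

static void trace_id_put_cpu(int cpu)
{
        if (!--cpu_refs[cpu])
                cpu_id[cpu] = 0;                /* last user releases */
}

int main(void)
{
        int a = trace_id_get_cpu(0);    /* event scheduled in on CPU0 */
        int b = trace_id_get_cpu(0);    /* rescheduled / second session */

        printf("blocks in the aux buffer share ID 0x%02x (== 0x%02x)\n", a, b);

        trace_id_put_cpu(0);
        trace_id_put_cpu(0);            /* ID only released here */
        return 0;
}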

Mike

> So, when app3 threads are scheduled on CPU0 & CPU1, we get the trace in
> ETR with the same trace-id of 0x50.
>
> Suzuki



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-23 11:35                 ` Mike Leach
@ 2022-03-23 12:08                   ` Suzuki Kuruppassery Poulose
  -1 siblings, 0 replies; 70+ messages in thread
From: Suzuki Kuruppassery Poulose @ 2022-03-23 12:08 UTC (permalink / raw)
  To: Mike Leach
  Cc: coresight, linux-arm-kernel, linux-kernel, mathieu.poirier,
	peterz, mingo, acme, linux-perf-users, leo.yan, James Clark

Hi Mike

On 23/03/2022 11:35, Mike Leach wrote:
> Hi Suzuki
> 
> On Wed, 23 Mar 2022 at 10:41, Suzuki Kuruppassery Poulose
> <suzuki.poulose@arm.com> wrote:
>>
>> Hi Mike
>>
>> On 23/03/2022 10:07, Mike Leach wrote:
>>> Hi Suzuki,
>>>
>>> On Tue, 22 Mar 2022 at 18:52, Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
>>>>
>>>> Hi Mike
>>>>
>>>> On 22/03/2022 14:27, Mike Leach wrote:
>>>>> Hi Suzuki
>>>>>
>>>>> On Tue, 22 Mar 2022 at 12:35, Suzuki Kuruppassery Poulose
>>>>> <suzuki.poulose@arm.com> wrote:
>>>>>>
>>>>>> On 22/03/2022 11:38, Mike Leach wrote:
>>>>>>> HI Suzuki,
>>>>>>>
>>>>>>> On Tue, 22 Mar 2022 at 10:43, Suzuki Kuruppassery Poulose
>>>>>>> <suzuki.poulose@arm.com> wrote:
>>>>>>>>
>>>>>>>> + Cc: James Clark
>>>>>>>>
>>>>>>>> Hi Mike,
>>>>>>>>
>>>>>>>> On 08/03/2022 20:49, Mike Leach wrote:
>>>>>>>>> The current method for allocating trace source ID values to sources is
>>>>>>>>> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
>>>>>>>>> The STM is allocated ID 0x1.
>>>>>>>>>
>>>>>>>>> This fixed algorithm is used in both the CoreSight driver code, and by
>>>>>>>>> perf when writing the trace metadata in the AUXTRACE_INFO record.
>>>>>>>>>
>>>>>>>>> The method needs replacing as currently:-
>>>>>>>>> 1. It is inefficient in using available IDs.
>>>>>>>>> 2. Does not scale to larger systems with many cores and the algorithm
>>>>>>>>> has no limits so will generate invalid trace IDs for cpu number > 44.
>>>>>>>>
>>>>>>>> Thanks for addressing this issue.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Additionally requirements to allocate additional system IDs on some
>>>>>>>>> systems have been seen.
>>>>>>>>>
>>>>>>>>> This patch set  introduces an API that allows the allocation of trace IDs
>>>>>>>>> in a dynamic manner.
>>>>>>>>>
>>>>>>>>> Architecturally reserved IDs are never allocated, and the system is
>>>>>>>>> limited to allocating only valid IDs.
>>>>>>>>>
>>>>>>>>> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
>>>>>>>>> the new API.
>>>>>>>>>
>>>>>>>>> perf handling is changed so that the ID associated with the CPU is read
>>>>>>>>> from sysfs. The ID allocator is notified when perf events start and stop
>>>>>>>>> so CPU based IDs are kept constant throughout any perf session.
>>>>>>>>>
>>>>>>>>> For the ETMx.x devices IDs are allocated on certain events
>>>>>>>>> a) When using sysfs, an ID will be allocated on hardware enable, and freed
>>>>>>>>> when the sysfs reset is written.
>>>>>>>>> b) When using perf, ID is allocated on hardware enable, and freed on
>>>>>>>>> hardware disable.
>>>>>>>>>
>>>>>>>>> For both cases the ID is allocated when sysfs is read to get the current
>>>>>>>>> trace ID. This ensures that consistent decode metadata can be extracted
>>>>>>>>> from the system where this read occurs before device enable.
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Note: This patchset breaks backward compatibility for perf record.
>>>>>>>>> Because the method for generating the AUXTRACE_INFO meta data has
>>>>>>>>> changed, using an older perf record will result in metadata that
>>>>>>>>> does not match the trace IDs used in the recorded trace data.
>>>>>>>>> This mismatch will cause subsequent decode to fail. Older versions of
>>>>>>>>> perf will still be able to decode data generated by the updated system.
>>>>>>>>
>>>>>>>> I have some concerns over this and the future plans for the dynamic
>>>>>>>> allocation per sink. i.e., we are breaking/modifying the perf now to
>>>>>>>> accommodate the dynamic nature of the trace id of a given CPU/ETM.
>>>>>>>
>>>>>>> I don't beleive we have a choice for this - we cannot retain what is
>>>>>>> an essentially broken allocation mechanism.
>>>>>>>
>>>>>>
>>>>>> I completely agree and I am happy with the current step by step approach
>>>>>> of moving to a dynamic allocation scheme. Apologies, this wasn't
>>>>>> conveyed appropriately.
>>>>>>
>>>>>>>> The proposed approach of exposing this via sysfs may (am not sure if
>>>>>>>> this would be the case) break for the trace-id per sink change, as a
>>>>>>>> sink could assign different trace-id for a CPU depending.
>>>>>>>>
>>>>>>>
>>>>>>> If a path exists between a CPU and a sink - the current framework as
>>>>>>> far as I can tell would not allow for a new path to be set up between
>>>>>>> the cpu and another sink.
>>>>>>
>>>>>> e.g, if we have concurrent perf sessions, in the future with sink  based
>>>>>> allocation :
>>>>>>
>>>>>> perf record -e cs_etm/@sink1/... payload1
>>>>>> perf record -e cs_etm/@sink2/... payload2
>>>>>> perf record -e cs_etm// ...      payload3
>>>>>>
>>>>>> The trace id allocated for first session for CPU0 *could* be different
>>>>>> from that of the second or the third.
>>>>>
>>>>> If these sessions run concurrently then the same Trace ID will be used
>>>>> for CPU0 for all the sessions.
>>>>> We ensure this by notifications that a cs_etm session is starting and
>>>>> stopping - and keep a refcount.
>>>>
>>>> The scheme is fine now, with a global trace-id map. But with per-sink
>>>> allocation, this could cause problems.
>>>>
>>>> e.g., there could be a situation where:
>>>>
>>>> trace_id[CPU0][sink0] == trace_id[CPU1][sink1]
>>>>
>>>> So if we have a session where both CPU0 and CPU1 trace to a common sink,
>>>> we get the trace mixed with no way of splitting them. As the perf will
>>>> read the trace-id for CPU0 from that of sink0 and CPU1 from sink1.
>>>
>>> I think we need to consider the CoreSight hardware topology here.
>>>
>>> Any CPUx that can trace to a sink reachable by another CPUy must
>>> always get trace IDs from the same pool as CPUy.
>>>
>>> Consider the options for multi sink topologies:-
>>>
>>> CPU0->funnel0->ETF->ETR
>>> CPU1--+^
>>>
>>> Clearly - in-line sinks can never have simultaneous independent
>>> sessions - the session into ETR traces through ETF
>>>
>>> Now we could have replicators / programmable replicators -
>>>
>>> ATB->Prog replicator->ETR0
>>>                                    +->ETR1
>>>
>>> however programmable replicators use trace ID for filtering - this is
>>> effectively a single sink on the input, so once again the Trace IDs
>>> must come from the same pool.
>>>
>>> Now, we could have independent per cluster / socket topology
>>> Cluster 0
>>> CPU0->funnel0->ETF0->ETR0
>>> CPU1--+^
>>>
>>> Cluster 1
>>> CPU2->funnel1->ETF1->ETR1
>>> CPU3--+^
>>>
>>
>> What if the ETR was a common one ? i.e.
>>
>> Cluster0
>> CPU0 -> ETF0 .....
>>                      \
>> Cluster1            -- ETR0
>> CPU1 -> ETF1 ..... /
>>
>> And lets there are 3 perf sessions in parallel, started in the
>> order below
>>
>> perf record -e cs_etm/@etf0/ app1 # CPU0 gets a trace-id[etf0] -> 0x50
>> perf record -e cs_etm/@etf1/ app2 # CPU1 gets a trace-id[etf1] -> 0x50
>> perf record -e cs_etm/@etr/  app3 # CPU0 and CPU1 both use the existing
>> trace ids from the allocations.
>>
> 
> This could be treated as a single combined topology - as soon as any
> sink is reachable by CPU0 and CPU1 then we have to treat this as
> having a single pool of trace IDs and so CPU0 and CPU1 cannot have the
> same ID.

IIUC, that is indeed going to be much more complex than allocating a
trace-id per sink. Moreover, we are going to end up "enforcing" the pool
restriction (driven by a global, system-wide ETR) on a sink that is local
to a cluster, for example - and thus could be back to square one.

> Alternatively, once the allocation metadata is expanded to recognize
> trace ID pools - it is entirely possible to ensure that the CPU / ID
> fixing is done on a per pool basis.
> 
> One of the reasons we need to ensure that the CPU / ID allocation
> remains constant is that an event can be scheduled on a CPU multiple
> times for a single aux buffer - resulting in multiple trace blocks in
> the buffer / multiple buffers in the data file, so we cannot have the
> ID change mid buffer / file without significant changes to the decode
> process and tracking of CPU / ID changes on a intra buffer basis.

Correct, and we must not. My proposal is not to change the trace-id of a
CPU for a given "perf record". Instead, since the sink for a CPU is
fixed for a given "perf record" and cannot change, we can allocate a
trace-id map per sink, which will remain the same within a given record.
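
As a rough sketch of that idea (hypothetical member and helper names,
not code from this series): each sink carries its own trace ID map, and
every source tracing into that sink allocates from it:

/*
 * Sketch only - assumes a per-sink trace ID map member is added to
 * struct coresight_device for sink devices.
 */
static int etm_event_cpu_trace_id(struct coresight_device *sink, int cpu)
{
        /* the sink is fixed for the duration of the perf record session */
        return coresight_trace_id_get_cpu_id(cpu, &sink->perf_sink_id_map);
}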

Cheers
Suzuki



> Mike
> 
>> So, when app3 threads are scheduled on CPU0 & CPU1, we get the trace in
>> ETR with the same trace-id of 0x50.
>>
>> Suzuki
> 
> 
> 


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 00/10] coresight: Add new API to allocate trace source ID values
  2022-03-08 20:49 ` Mike Leach
@ 2022-04-04 16:15   ` Mathieu Poirier
  -1 siblings, 0 replies; 70+ messages in thread
From: Mathieu Poirier @ 2022-04-04 16:15 UTC (permalink / raw)
  To: Mike Leach
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Good morning,

On Tue, Mar 08, 2022 at 08:49:50PM +0000, Mike Leach wrote:
> The current method for allocating trace source ID values to sources is
> to use a fixed algorithm for CPU based sources of (cpu_num * 2 + 0x10).
> The STM is allocated ID 0x1.
> 
> This fixed algorithm is used in both the CoreSight driver code, and by
> perf when writing the trace metadata in the AUXTRACE_INFO record.
> 
> The method needs replacing as currently:-
> 1. It is inefficient in using available IDs.
> 2. Does not scale to larger systems with many cores and the algorithm
> has no limits so will generate invalid trace IDs for cpu number > 44.
> 
> Additionally requirements to allocate additional system IDs on some
> systems have been seen.
> 
> This patch set  introduces an API that allows the allocation of trace IDs
> in a dynamic manner.
> 
> Architecturally reserved IDs are never allocated, and the system is
> limited to allocating only valid IDs.
> 
> Each of the current trace sources ETM3.x, ETM4.x and STM is updated to use
> the new API.
> 
> perf handling is changed so that the ID associated with the CPU is read
> from sysfs. The ID allocator is notified when perf events start and stop
> so CPU based IDs are kept constant throughout any perf session.
> 
> For the ETMx.x devices IDs are allocated on certain events
> a) When using sysfs, an ID will be allocated on hardware enable, and freed
> when the sysfs reset is written.
> b) When using perf, ID is allocated on hardware enable, and freed on
> hardware disable.
> 
> For both cases the ID is allocated when sysfs is read to get the current
> trace ID. This ensures that consistent decode metadata can be extracted
> from the system where this read occurs before device enable.
> 
> Note: This patchset breaks backward compatibility for perf record.
> Because the method for generating the AUXTRACE_INFO meta data has
> changed, using an older perf record will result in metadata that
> does not match the trace IDs used in the recorded trace data.
> This mismatch will cause subsequent decode to fail. Older versions of
> perf will still be able to decode data generated by the updated system.
> 

I have started looking at this set, comments to follow shortly.

Thanks,
Mathieu

> 
> Applies to coresight/next [b54f53bc11a5]
> Tested on DB410c
> 
> Mike Leach (10):
>   coresight: trace-id: Add API to dynamically assign trace ID values
>   coresight: trace-id: Set up source trace ID map for system
>   coresight: stm: Update STM driver to use Trace ID api
>   coresight: etm4x: Use trace ID API to dynamically allocate trace ID
>   coresight: etm3x: Use trace ID API to allocate IDs
>   coresight: perf: traceid: Add perf notifiers for trace ID
>   perf: cs-etm: Update event to read trace ID from sysfs
>   coresight: Remove legacy Trace ID allocation mechanism
>   coresight: etmX.X: stm: Remove unused legacy source trace ID ops
>   coresight: trace-id: Add debug & test macros to trace id allocation
> 
>  drivers/hwtracing/coresight/Makefile          |   2 +-
>  drivers/hwtracing/coresight/coresight-core.c  |  64 ++---
>  .../hwtracing/coresight/coresight-etm-perf.c  |  16 +-
>  drivers/hwtracing/coresight/coresight-etm.h   |   3 +-
>  .../coresight/coresight-etm3x-core.c          |  93 ++++---
>  .../coresight/coresight-etm3x-sysfs.c         |  28 +-
>  .../coresight/coresight-etm4x-core.c          |  63 ++++-
>  .../coresight/coresight-etm4x-sysfs.c         |  32 ++-
>  drivers/hwtracing/coresight/coresight-etm4x.h |   3 +
>  drivers/hwtracing/coresight/coresight-priv.h  |   1 +
>  drivers/hwtracing/coresight/coresight-stm.c   |  49 +---
>  .../hwtracing/coresight/coresight-trace-id.c  | 255 ++++++++++++++++++
>  .../hwtracing/coresight/coresight-trace-id.h  |  69 +++++
>  include/linux/coresight-pmu.h                 |  12 -
>  include/linux/coresight.h                     |   3 -
>  tools/perf/arch/arm/util/cs-etm.c             |  12 +-
>  16 files changed, 530 insertions(+), 175 deletions(-)
>  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
>  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> 
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 01/10] coresight: trace-id: Add API to dynamically assign trace ID values
  2022-03-08 20:49   ` Mike Leach
@ 2022-04-05 17:02     ` Mathieu Poirier
  -1 siblings, 0 replies; 70+ messages in thread
From: Mathieu Poirier @ 2022-04-05 17:02 UTC (permalink / raw)
  To: Mike Leach
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Good morning,

To avoid getting tunnel vision, I haven't read any of the comments already
posted about this set.  As such, some of my observations may be redundant.

On Tue, Mar 08, 2022 at 08:49:51PM +0000, Mike Leach wrote:
> The existing mechanism to assign Trace ID values to sources is limited
> and does not scale for larger multicore / multi trace source systems.
> 
> The API introduces functions that reserve IDs based on availabilty
> represented by a coresight_trace_id_map structure. This records the
> used and free IDs in a bitmap.
> 
> CPU bound sources such as ETMs use the coresight_trace_id_get_cpu_id /
> coresight_trace_id_put_cpu_id pair of functions. The API will record
> the ID associated with the CPU. This ensures that the same ID will be
> re-used while perf events are active on the CPU. The put_cpu_id function
> will pend release of the ID until all perf cs_etm sessions are complete.
> 
> Non-cpu sources, such as the STM can use coresight_trace_id_get_system_id /
> coresight_trace_id_put_system_id.
> 
> Signed-off-by: Mike Leach <mike.leach@linaro.org>
> ---
>  drivers/hwtracing/coresight/Makefile          |   2 +-
>  .../hwtracing/coresight/coresight-trace-id.c  | 222 ++++++++++++++++++
>  .../hwtracing/coresight/coresight-trace-id.h  |  69 ++++++
>  3 files changed, 292 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
>  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> 
> diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
> index b6c4a48140ec..329a0c704b87 100644
> --- a/drivers/hwtracing/coresight/Makefile
> +++ b/drivers/hwtracing/coresight/Makefile
> @@ -6,7 +6,7 @@ obj-$(CONFIG_CORESIGHT) += coresight.o
>  coresight-y := coresight-core.o  coresight-etm-perf.o coresight-platform.o \
>  		coresight-sysfs.o coresight-syscfg.o coresight-config.o \
>  		coresight-cfg-preload.o coresight-cfg-afdo.o \
> -		coresight-syscfg-configfs.o
> +		coresight-syscfg-configfs.o coresight-trace-id.o
>  obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
>  coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
>  		      coresight-tmc-etr.o
> diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
> new file mode 100644
> index 000000000000..ce6c7d7b55d6
> --- /dev/null
> +++ b/drivers/hwtracing/coresight/coresight-trace-id.c
> @@ -0,0 +1,222 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2022, Linaro Limited, All rights reserved.
> + * Author: Mike Leach <mike.leach@linaro.org>
> + */
> +#include <linux/kernel.h>
> +#include <linux/types.h>
> +#include <linux/spinlock.h>
> +
> +#include "coresight-trace-id.h"
> +
> +/* need to keep data on ids & association with cpus. */
> +struct cpu_id_info {
> +	int id;
> +	bool pend_rel;
> +};
> +
> +/* maintain a record of the current mapping of cpu IDs */
> +static DEFINE_PER_CPU(struct cpu_id_info, cpu_ids);
> +
> +/* a list of currently used id_maps */
> +static LIST_HEAD(id_map_list);
> +
> +/* perf session active flag */
> +static int perf_cs_etm_session_active;
> +
> +/* lock to protect id_map list and cpu data  */
> +static DEFINE_SPINLOCK(id_map_lock);
> +
> +/* ID 0 is reserved */
> +#define CORESIGHT_TRACE_ID_RES_0 0
> +
> +/* ID 0x70 onwards are reserved */
> +#define CORESIGHT_TRACE_ID_RES_RANGE_LO 0x70
> +#define CORESIGHT_TRACE_ID_RES_RANGE_HI 0x7F
> +
> +#define IS_VALID_ID(id)	\
> +	((id > CORESIGHT_TRACE_ID_RES_0) && (id < CORESIGHT_TRACE_ID_RES_RANGE_LO))
> +
> +static void coresight_trace_id_set_inuse(int id, struct coresight_trace_id_map *id_map)
> +{
> +	if (IS_VALID_ID(id))
> +		set_bit(id, id_map->avail_ids);
> +}
> +
> +static void coresight_trace_id_clear_inuse(int id, struct coresight_trace_id_map *id_map)
> +{
> +	if (IS_VALID_ID(id))
> +		clear_bit(id, id_map->avail_ids);
> +}
> +
> +static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_map *id_map)
> +{
> +	if (IS_VALID_ID(id))
> +		set_bit(id, id_map->pend_rel_ids);
> +}
> +
> +static void coresight_trace_id_clear_pend_rel(int id, struct coresight_trace_id_map *id_map)
> +{
> +	if (IS_VALID_ID(id))
> +		clear_bit(id, id_map->pend_rel_ids);
> +}
> +
> +static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
> +{
> +	int id;
> +
> +	id = find_first_zero_bit(id_map->avail_ids, CORESIGHT_TRACE_IDS_MAX);
> +	if (id >= CORESIGHT_TRACE_IDS_MAX)
> +		id = -EINVAL;
> +	return id;
> +}
> +
> +/* release all pending IDs for all current maps & clear CPU associations */
> +static void coresight_trace_id_release_all_pending(void)
> +{
> +	struct coresight_trace_id_map *id_map;
> +	int cpu, bit;
> +
> +	list_for_each_entry(id_map, &id_map_list, node) {
> +		for_each_set_bit(bit, id_map->pend_rel_ids, CORESIGHT_TRACE_IDS_MAX) {
> +			clear_bit(bit, id_map->avail_ids);
> +			clear_bit(bit, id_map->pend_rel_ids);
> +		}
> +	}
> +
> +	for_each_possible_cpu(cpu) {
> +		if (per_cpu(cpu_ids, cpu).pend_rel) {
> +			per_cpu(cpu_ids, cpu).pend_rel = false;
> +			per_cpu(cpu_ids, cpu).id = 0;
> +		}
> +	}
> +}
> +
> +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> +{
> +	unsigned long flags;
> +	int id;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	id = per_cpu(cpu_ids, cpu).id;
> +	if (!id) {
> +		id = coresight_trace_id_find_new_id(id_map);
> +		if (id < 0)
> +			goto get_cpu_id_out;
> +	}
> +
> +	per_cpu(cpu_ids, cpu).id = id;
> +	per_cpu(cpu_ids, cpu).pend_rel = false;
> +	coresight_trace_id_set_inuse(id, id_map);
> +	coresight_trace_id_clear_pend_rel(id, id_map);

The above should have been done when the ID for this CPU has been set for the
first time.  Therefore we should simply release the spinlock and return if an ID
has already been set.
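
Something along these lines perhaps (untested sketch - it keeps the
pend_rel clean-up for the case where a previously released ID is
re-acquired before the perf session ends):

        spin_lock_irqsave(&id_map_lock, flags);
        id = per_cpu(cpu_ids, cpu).id;
        if (id) {
                /* ID already assigned to this CPU - hand back the same one */
                coresight_trace_id_clear_pend_rel(id, id_map);
                per_cpu(cpu_ids, cpu).pend_rel = false;
                goto get_cpu_id_out;
        }

        id = coresight_trace_id_find_new_id(id_map);
        if (id < 0)
                goto get_cpu_id_out;

        per_cpu(cpu_ids, cpu).id = id;
        per_cpu(cpu_ids, cpu).pend_rel = false;
        coresight_trace_id_set_inuse(id, id_map);

get_cpu_id_out:
        spin_unlock_irqrestore(&id_map_lock, flags);
        return id;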

> +
> +get_cpu_id_out:
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +	return id;
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
> +
> +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> +{
> +	unsigned long flags;
> +	int id;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	id = per_cpu(cpu_ids, cpu).id;
> +	if (!id)
> +		goto put_cpu_id_out;
> +
> +	if (perf_cs_etm_session_active) {
> +		/* set release at pending if perf still active */
> +		coresight_trace_id_set_pend_rel(id, id_map);
> +		per_cpu(cpu_ids, cpu).pend_rel = true;
> +	} else {
> +		/* otherwise clear id */
> +		coresight_trace_id_clear_inuse(id, id_map);
> +		per_cpu(cpu_ids, cpu).id = 0;
> +	}
> +
> + put_cpu_id_out:
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
> +
> +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
> +{

So far I have reviewed up to patch 05, and I find passing the id_map every
time we need an ID quite heavy, especially since it is always the same one.
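
For example (sketch only - names invented here, not taken from the
patch), the map could be kept inside the allocator and the public API
reduced to:

/* a single default map owned by the trace ID allocator */
static struct coresight_trace_id_map coresight_default_id_map;

int coresight_trace_id_get_cpu_id(int cpu)
{
        return __coresight_trace_id_get_cpu_id(cpu, &coresight_default_id_map);
}

int coresight_trace_id_get_system_id(void)
{
        return __coresight_trace_id_get_system_id(&coresight_default_id_map);
}

with the id_map-taking variants kept internal (or reintroduced later for
any per-sink work).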

> +	unsigned long flags;
> +	int id;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	id = coresight_trace_id_find_new_id(id_map);
> +	if (id > 0)
> +		coresight_trace_id_set_inuse(id, id_map);
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +
> +	return id;
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
> +
> +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	coresight_trace_id_clear_inuse(id, id_map);
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
> +
> +void coresight_trace_id_perf_start(void)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	perf_cs_etm_session_active++;
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
> +
> +void coresight_trace_id_perf_stop(void)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	perf_cs_etm_session_active--;
> +	if (!perf_cs_etm_session_active)
> +		coresight_trace_id_release_all_pending();
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
> +
> +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map)
> +{
> +	unsigned long flags;
> +	int bit;
> +
> +	/* set all reserved bits as in-use */
> +	set_bit(CORESIGHT_TRACE_ID_RES_0, id_map->avail_ids);
> +	for (bit = CORESIGHT_TRACE_ID_RES_RANGE_LO;
> +	     bit <= CORESIGHT_TRACE_ID_RES_RANGE_HI; bit++)
> +		set_bit(bit, id_map->avail_ids);
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +
> +	/* add id_map to the list */
> +	list_add(&id_map->node, &id_map_list);
> +
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_init_id_map);
> +
> +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +
> +	/* remove id_map from list */
> +	list_del(&id_map->node);
> +
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_release_id_map);
> diff --git a/drivers/hwtracing/coresight/coresight-trace-id.h b/drivers/hwtracing/coresight/coresight-trace-id.h
> new file mode 100644
> index 000000000000..01db2441cee6
> --- /dev/null
> +++ b/drivers/hwtracing/coresight/coresight-trace-id.h
> @@ -0,0 +1,69 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright(C) 2022 Linaro Limited. All rights reserved.
> + * Author: Mike Leach <mike.leach@linaro.org>
> + */
> +
> +#ifndef _CORESIGHT_TRACE_ID_H
> +#define _CORESIGHT_TRACE_ID_H
> +
> +/*
> + * Coresight trace ID allocation API
> + *
> + * With multi cpu systems, and more additional trace sources a scalable
> + * trace ID reservation system is required.
> + *
> + * The system will allocate Ids on a demand basis, and allow them to be
> + * released when done.
> + *
> + * In order to ensure that a consistent cpu / ID matching is maintained
> + * throughout a perf cs_etm event session - a session in progress flag will
> + * be maintained, and released IDs not cleared until the perf session is
> + * complete. This allows the same CPU to be re-allocated its prior ID.
> + *
> + *
> + * Trace ID maps will be created and initialised to prevent architecturally
> + * reserved IDs from being allocated.
> + *
> + * API permits multiple maps to be maintained - for large systems where
> + * different sets of cpus trace into different independent sinks.
> + */
> +
> +#include <linux/bitops.h>
> +#include <linux/types.h>
> +
> +
> +/* architecturally we have 128 IDs some of which are reserved */
> +#define CORESIGHT_TRACE_IDS_MAX 128
> +
> +/**
> + * Trace ID map.
> + *
> + * @avail_ids:	Bitmap to register available (bit = 0) and in use (bit = 1) IDs.
> + *		Initialised so that the reserved IDs are permanently marked as in use.
> + * @pend_rel_ids: CPU IDs that have been released by the trace source but not yet marked
> + *                as available, to allow re-allocation to the same CPU during a perf session.
> + * @node:	List entry to add to list of managed trace id maps.
> + */
> +struct coresight_trace_id_map {
> +	DECLARE_BITMAP(avail_ids, CORESIGHT_TRACE_IDS_MAX);
> +	DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
> +	struct list_head node;
> +};
> +
> +/* Allocate and release IDs */
> +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map);
> +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id);
> +
> +/* initialise ID map - block reserved IDs, add to list of ID maps */
> +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map);
> +/* remove ID maps from list of maps */
> +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map);
> +
> +/* notifiers for perf session start and stop */
> +void coresight_trace_id_perf_start(void);
> +void coresight_trace_id_perf_stop(void);
> +
> +#endif /* _CORESIGHT_TRACE_ID_H */
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 01/10] coresight: trace-id: Add API to dynamically assign trace ID values
@ 2022-04-05 17:02     ` Mathieu Poirier
  0 siblings, 0 replies; 70+ messages in thread
From: Mathieu Poirier @ 2022-04-05 17:02 UTC (permalink / raw)
  To: Mike Leach
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Good morning,

To avoid getting tunnel vision, I haven't red any of the comments already posted
about this set.  As such some of my observations may be redundant.

On Tue, Mar 08, 2022 at 08:49:51PM +0000, Mike Leach wrote:
> The existing mechanism to assign Trace ID values to sources is limited
> and does not scale for larger multicore / multi trace source systems.
> 
> The API introduces functions that reserve IDs based on availabilty
> represented by a coresight_trace_id_map structure. This records the
> used and free IDs in a bitmap.
> 
> CPU bound sources such as ETMs use the coresight_trace_id_get_cpu_id /
> coresight_trace_id_put_cpu_id pair of functions. The API will record
> the ID associated with the CPU. This ensures that the same ID will be
> re-used while perf events are active on the CPU. The put_cpu_id function
> will pend release of the ID until all perf cs_etm sessions are complete.
> 
> Non-cpu sources, such as the STM can use coresight_trace_id_get_system_id /
> coresight_trace_id_put_system_id.
> 
> Signed-off-by: Mike Leach <mike.leach@linaro.org>
> ---
>  drivers/hwtracing/coresight/Makefile          |   2 +-
>  .../hwtracing/coresight/coresight-trace-id.c  | 222 ++++++++++++++++++
>  .../hwtracing/coresight/coresight-trace-id.h  |  69 ++++++
>  3 files changed, 292 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
>  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> 
> diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
> index b6c4a48140ec..329a0c704b87 100644
> --- a/drivers/hwtracing/coresight/Makefile
> +++ b/drivers/hwtracing/coresight/Makefile
> @@ -6,7 +6,7 @@ obj-$(CONFIG_CORESIGHT) += coresight.o
>  coresight-y := coresight-core.o  coresight-etm-perf.o coresight-platform.o \
>  		coresight-sysfs.o coresight-syscfg.o coresight-config.o \
>  		coresight-cfg-preload.o coresight-cfg-afdo.o \
> -		coresight-syscfg-configfs.o
> +		coresight-syscfg-configfs.o coresight-trace-id.o
>  obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
>  coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
>  		      coresight-tmc-etr.o
> diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
> new file mode 100644
> index 000000000000..ce6c7d7b55d6
> --- /dev/null
> +++ b/drivers/hwtracing/coresight/coresight-trace-id.c
> @@ -0,0 +1,222 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2022, Linaro Limited, All rights reserved.
> + * Author: Mike Leach <mike.leach@linaro.org>
> + */
> +#include <linux/kernel.h>
> +#include <linux/types.h>
> +#include <linux/spinlock.h>
> +
> +#include "coresight-trace-id.h"
> +
> +/* need to keep data on ids & association with cpus. */
> +struct cpu_id_info {
> +	int id;
> +	bool pend_rel;
> +};
> +
> +/* maintain a record of the current mapping of cpu IDs */
> +static DEFINE_PER_CPU(struct cpu_id_info, cpu_ids);
> +
> +/* a list of currently used id_maps */
> +static LIST_HEAD(id_map_list);
> +
> +/* perf session active flag */
> +static int perf_cs_etm_session_active;
> +
> +/* lock to protect id_map list and cpu data  */
> +static DEFINE_SPINLOCK(id_map_lock);
> +
> +/* ID 0 is reserved */
> +#define CORESIGHT_TRACE_ID_RES_0 0
> +
> +/* ID 0x70 onwards are reserved */
> +#define CORESIGHT_TRACE_ID_RES_RANGE_LO 0x70
> +#define CORESIGHT_TRACE_ID_RES_RANGE_HI 0x7F
> +
> +#define IS_VALID_ID(id)	\
> +	((id > CORESIGHT_TRACE_ID_RES_0) && (id < CORESIGHT_TRACE_ID_RES_RANGE_LO))
> +
> +static void coresight_trace_id_set_inuse(int id, struct coresight_trace_id_map *id_map)
> +{
> +	if (IS_VALID_ID(id))
> +		set_bit(id, id_map->avail_ids);
> +}
> +
> +static void coresight_trace_id_clear_inuse(int id, struct coresight_trace_id_map *id_map)
> +{
> +	if (IS_VALID_ID(id))
> +		clear_bit(id, id_map->avail_ids);
> +}
> +
> +static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_map *id_map)
> +{
> +	if (IS_VALID_ID(id))
> +		set_bit(id, id_map->pend_rel_ids);
> +}
> +
> +static void coresight_trace_id_clear_pend_rel(int id, struct coresight_trace_id_map *id_map)
> +{
> +	if (IS_VALID_ID(id))
> +		clear_bit(id, id_map->pend_rel_ids);
> +}
> +
> +static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
> +{
> +	int id;
> +
> +	id = find_first_zero_bit(id_map->avail_ids, CORESIGHT_TRACE_IDS_MAX);
> +	if (id >= CORESIGHT_TRACE_IDS_MAX)
> +		id = -EINVAL;
> +	return id;
> +}
> +
> +/* release all pending IDs for all current maps & clear CPU associations */
> +static void coresight_trace_id_release_all_pending(void)
> +{
> +	struct coresight_trace_id_map *id_map;
> +	int cpu, bit;
> +
> +	list_for_each_entry(id_map, &id_map_list, node) {
> +		for_each_set_bit(bit, id_map->pend_rel_ids, CORESIGHT_TRACE_IDS_MAX) {
> +			clear_bit(bit, id_map->avail_ids);
> +			clear_bit(bit, id_map->pend_rel_ids);
> +		}
> +	}
> +
> +	for_each_possible_cpu(cpu) {
> +		if (per_cpu(cpu_ids, cpu).pend_rel) {
> +			per_cpu(cpu_ids, cpu).pend_rel = false;
> +			per_cpu(cpu_ids, cpu).id = 0;
> +		}
> +	}
> +}
> +
> +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> +{
> +	unsigned long flags;
> +	int id;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	id = per_cpu(cpu_ids, cpu).id;
> +	if (!id) {
> +		id = coresight_trace_id_find_new_id(id_map);
> +		if (id < 0)
> +			goto get_cpu_id_out;
> +	}
> +
> +	per_cpu(cpu_ids, cpu).id = id;
> +	per_cpu(cpu_ids, cpu).pend_rel = false;
> +	coresight_trace_id_set_inuse(id, id_map);
> +	coresight_trace_id_clear_pend_rel(id, id_map);

The above should have been done when the ID for this CPU has been set for the
first time.  Therefore we should simply release the spinlock and return if an ID
has already been set.

> +
> +get_cpu_id_out:
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +	return id;
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
> +
> +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> +{
> +	unsigned long flags;
> +	int id;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	id = per_cpu(cpu_ids, cpu).id;
> +	if (!id)
> +		goto put_cpu_id_out;
> +
> +	if (perf_cs_etm_session_active) {
> +		/* set release at pending if perf still active */
> +		coresight_trace_id_set_pend_rel(id, id_map);
> +		per_cpu(cpu_ids, cpu).pend_rel = true;
> +	} else {
> +		/* otherwise clear id */
> +		coresight_trace_id_clear_inuse(id, id_map);
> +		per_cpu(cpu_ids, cpu).id = 0;
> +	}
> +
> + put_cpu_id_out:
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
> +
> +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
> +{

So far I have reviewed until patch 05 and I find passing the id_map every time
we need an ID quite heavy, especially since it is always the same one. 

> +	unsigned long flags;
> +	int id;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	id = coresight_trace_id_find_new_id(id_map);
> +	if (id > 0)
> +		coresight_trace_id_set_inuse(id, id_map);
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +
> +	return id;
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
> +
> +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	coresight_trace_id_clear_inuse(id, id_map);
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
> +
> +void coresight_trace_id_perf_start(void)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	perf_cs_etm_session_active++;
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
> +
> +void coresight_trace_id_perf_stop(void)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +	perf_cs_etm_session_active--;
> +	if (!perf_cs_etm_session_active)
> +		coresight_trace_id_release_all_pending();
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
> +
> +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map)
> +{
> +	unsigned long flags;
> +	int bit;
> +
> +	/* set all reserved bits as in-use */
> +	set_bit(CORESIGHT_TRACE_ID_RES_0, id_map->avail_ids);
> +	for (bit = CORESIGHT_TRACE_ID_RES_RANGE_LO;
> +	     bit <= CORESIGHT_TRACE_ID_RES_RANGE_HI; bit++)
> +		set_bit(bit, id_map->avail_ids);
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +
> +	/* add id_map to the list */
> +	list_add(&id_map->node, &id_map_list);
> +
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_init_id_map);
> +
> +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&id_map_lock, flags);
> +
> +	/* remove id_map from list */
> +	list_del(&id_map->node);
> +
> +	spin_unlock_irqrestore(&id_map_lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(coresight_trace_id_release_id_map);
> diff --git a/drivers/hwtracing/coresight/coresight-trace-id.h b/drivers/hwtracing/coresight/coresight-trace-id.h
> new file mode 100644
> index 000000000000..01db2441cee6
> --- /dev/null
> +++ b/drivers/hwtracing/coresight/coresight-trace-id.h
> @@ -0,0 +1,69 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright(C) 2022 Linaro Limited. All rights reserved.
> + * Author: Mike Leach <mike.leach@linaro.org>
> + */
> +
> +#ifndef _CORESIGHT_TRACE_ID_H
> +#define _CORESIGHT_TRACE_ID_H
> +
> +/*
> + * Coresight trace ID allocation API
> + *
> + * With multi-cpu systems, and additional trace sources, a scalable
> + * trace ID reservation system is required.
> + *
> + * The system will allocate IDs on a demand basis, and allow them to be
> + * released when done.
> + *
> + * In order to ensure that a consistent cpu / ID matching is maintained
> + * throughout a perf cs_etm event session - a session in progress flag will
> + * be maintained, and released IDs not cleared until the perf session is
> + * complete. This allows the same CPU to be re-allocated its prior ID.
> + *
> + *
> + * Trace ID maps will be created and initialised to prevent architecturally
> + * reserved IDs from being allocated.
> + *
> + * API permits multiple maps to be maintained - for large systems where
> + * different sets of cpus trace into different independent sinks.
> + */
> +
> +#include <linux/bitops.h>
> +#include <linux/types.h>
> +
> +
> +/* architecturally we have 128 IDs some of which are reserved */
> +#define CORESIGHT_TRACE_IDS_MAX 128
> +
> +/**
> + * Trace ID map.
> + *
> + * @avail_ids:	Bitmap to register available (bit = 0) and in use (bit = 1) IDs.
> + *		Initialised so that the reserved IDs are permanently marked as in use.
> + * @pend_rel_ids: CPU IDs that have been released by the trace source but not yet marked
> + *                as available, to allow re-allocation to the same CPU during a perf session.
> + * @node:	List entry to add to list of managed trace id maps.
> + */
> +struct coresight_trace_id_map {
> +	DECLARE_BITMAP(avail_ids, CORESIGHT_TRACE_IDS_MAX);
> +	DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
> +	struct list_head node;
> +};
> +
> +/* Allocate and release IDs */
> +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map);
> +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id);
> +
> +/* initialise ID map - block reserved IDs, add to list of ID maps */
> +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map);
> +/* remove ID maps from list of maps */
> +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map);
> +
> +/* notifiers for perf session start and stop */
> +void coresight_trace_id_perf_start(void);
> +void coresight_trace_id_perf_stop(void);
> +
> +#endif /* _CORESIGHT_TRACE_ID_H */
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 05/10] coresight: etm3x: Use trace ID API to allocate IDs
  2022-03-08 20:49   ` Mike Leach
@ 2022-04-05 17:22     ` Mathieu Poirier
  -1 siblings, 0 replies; 70+ messages in thread
From: Mathieu Poirier @ 2022-04-05 17:22 UTC (permalink / raw)
  To: Mike Leach
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

On Tue, Mar 08, 2022 at 08:49:55PM +0000, Mike Leach wrote:
> Use the TraceID API to allocate ETM trace IDs dynamically.
> 
> As with the etm4x we allocate on enable / disable for perf,
> allocate on enable / reset for sysfs.
> 
> Additionally we allocate on sysfs file read as both perf and sysfs
> can read the ID before enabling the hardware.
> 
> Remove sysfs option to write trace ID - which is inconsistent with
> both the dynamic allocation method and the fixed allocation method
> previously used.
> 
> Signed-off-by: Mike Leach <mike.leach@linaro.org>
> ---
>  drivers/hwtracing/coresight/coresight-etm.h   |  2 +
>  .../coresight/coresight-etm3x-core.c          | 72 ++++++++++++++++---
>  .../coresight/coresight-etm3x-sysfs.c         | 28 +++-----
>  3 files changed, 71 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/hwtracing/coresight/coresight-etm.h b/drivers/hwtracing/coresight/coresight-etm.h
> index f3ab96eaf44e..3667428d38b6 100644
> --- a/drivers/hwtracing/coresight/coresight-etm.h
> +++ b/drivers/hwtracing/coresight/coresight-etm.h
> @@ -287,4 +287,6 @@ int etm_get_trace_id(struct etm_drvdata *drvdata);
>  void etm_set_default(struct etm_config *config);
>  void etm_config_trace_mode(struct etm_config *config);
>  struct etm_config *get_etm_config(struct etm_drvdata *drvdata);
> +int etm_read_alloc_trace_id(struct etm_drvdata *drvdata);
> +void etm_release_trace_id(struct etm_drvdata *drvdata);
>  #endif
> diff --git a/drivers/hwtracing/coresight/coresight-etm3x-core.c b/drivers/hwtracing/coresight/coresight-etm3x-core.c
> index 7d413ba8b823..98213503bd09 100644
> --- a/drivers/hwtracing/coresight/coresight-etm3x-core.c
> +++ b/drivers/hwtracing/coresight/coresight-etm3x-core.c
> @@ -32,6 +32,7 @@
>  
>  #include "coresight-etm.h"
>  #include "coresight-etm-perf.h"
> +#include "coresight-trace-id.h"
>  
>  /*
>   * Not really modular but using module_param is the easiest way to
> @@ -490,18 +491,57 @@ static int etm_trace_id(struct coresight_device *csdev)
>  	return etm_get_trace_id(drvdata);
>  }
>  
> +int etm_read_alloc_trace_id(struct etm_drvdata *drvdata)
> +{
> +	int trace_id;
> +
> +	/*
> +	 * This will allocate a trace ID to the cpu,
> +	 * or return the one currently allocated.
> +	 */
> +	trace_id = coresight_trace_id_get_cpu_id(drvdata->cpu,
> +						 coresight_get_trace_id_map());
> +	if (trace_id > 0) {
> +		spin_lock(&drvdata->spinlock);
> +		drvdata->traceid = (u8)trace_id;
> +		spin_unlock(&drvdata->spinlock);
> +	} else {
> +		pr_err("Failed to allocate trace ID for %s on CPU%d\n",
> +		       dev_name(&drvdata->csdev->dev), drvdata->cpu);
> +	}
> +	return trace_id;
> +}
> +
> +void etm_release_trace_id(struct etm_drvdata *drvdata)
> +{
> +	coresight_trace_id_put_cpu_id(drvdata->cpu,
> +				      coresight_get_trace_id_map());
> +	spin_lock(&drvdata->spinlock);
> +	drvdata->traceid = 0;
> +	spin_unlock(&drvdata->spinlock);
> +}
> +
>  static int etm_enable_perf(struct coresight_device *csdev,
>  			   struct perf_event *event)
>  {
>  	struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
> +	int ret;
>  
>  	if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
>  		return -EINVAL;
>  
> +	ret = etm_read_alloc_trace_id(drvdata);
> +	if (ret < 0)
> +		return ret;
>

For etm4x, this was done after etm_parse_event_config().  Please do the same
here.
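
For reference, an untested sketch of etm_enable_perf() with that ordering, using
the names from this patch:

	/* Configure the tracer based on the session's specifics */
	etm_parse_event_config(drvdata, event);

	/* allocate a trace ID after the event config, as done for etm4x */
	ret = etm_read_alloc_trace_id(drvdata);
	if (ret < 0)
		return ret;

	/* And enable it */
	ret = etm_enable_hw(drvdata);
	if (ret)
		etm_release_trace_id(drvdata);
	return ret;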


>  	/* Configure the tracer based on the session's specifics */
>  	etm_parse_event_config(drvdata, event);
>  	/* And enable it */
> -	return etm_enable_hw(drvdata);
> +	ret = etm_enable_hw(drvdata);
> +
> +	if (ret)
> +		etm_release_trace_id(drvdata);
> +	return ret;
>  }
>  
>  static int etm_enable_sysfs(struct coresight_device *csdev)
> @@ -510,6 +550,10 @@ static int etm_enable_sysfs(struct coresight_device *csdev)
>  	struct etm_enable_arg arg = { };
>  	int ret;
>  
> +	ret = etm_read_alloc_trace_id(drvdata);
> +	if (ret < 0)
> +		return ret;
> +

Please add the same comment as in etm4_disable_sysfs() in etm_disable_sysfs().
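
For reference, the comment block in question from etm4_disable_sysfs() in patch 04:

	/*
	 * unlike for perf session - we only release trace IDs when resetting
	 * sysfs. This permits sysfs users to read the trace ID after the trace
	 * session has completed. This maintains operational behaviour with
	 * prior trace id allocation method
	 */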

>  	spin_lock(&drvdata->spinlock);
>  
>  	/*
> @@ -532,6 +576,8 @@ static int etm_enable_sysfs(struct coresight_device *csdev)
>  
>  	if (!ret)
>  		dev_dbg(&csdev->dev, "ETM tracing enabled\n");
> +	else
> +		etm_release_trace_id(drvdata);
>  	return ret;
>  }
>  
> @@ -611,6 +657,8 @@ static void etm_disable_perf(struct coresight_device *csdev)
>  	coresight_disclaim_device_unlocked(csdev);
>  
>  	CS_LOCK(drvdata->base);
> +
> +	etm_release_trace_id(drvdata);
>  }
>  
>  static void etm_disable_sysfs(struct coresight_device *csdev)
> @@ -781,11 +829,6 @@ static void etm_init_arch_data(void *info)
>  	CS_LOCK(drvdata->base);
>  }
>  
> -static void etm_init_trace_id(struct etm_drvdata *drvdata)
> -{
> -	drvdata->traceid = coresight_get_trace_id(drvdata->cpu);
> -}
> -
>  static int __init etm_hp_setup(void)
>  {
>  	int ret;
> @@ -871,7 +914,6 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
>  	if (etm_arch_supported(drvdata->arch) == false)
>  		return -EINVAL;
>  
> -	etm_init_trace_id(drvdata);
>  	etm_set_default(&drvdata->config);
>  
>  	pdata = coresight_get_platform_data(dev);
> @@ -891,10 +933,12 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
>  		return PTR_ERR(drvdata->csdev);
>  
>  	ret = etm_perf_symlink(drvdata->csdev, true);
> -	if (ret) {
> -		coresight_unregister(drvdata->csdev);
> -		return ret;
> -	}
> +	if (ret)
> +		goto cs_unregister;
> +
> +	ret = etm_read_alloc_trace_id(drvdata);
> +	if (ret < 0)
> +		goto cs_unregister;
>  
>  	etmdrvdata[drvdata->cpu] = drvdata;
>  
> @@ -907,6 +951,10 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
>  	}
>  
>  	return 0;
> +
> +cs_unregister:
> +	coresight_unregister(drvdata->csdev);
> +	return ret;
>  }
>  
>  static void clear_etmdrvdata(void *info)
> @@ -922,6 +970,8 @@ static void etm_remove(struct amba_device *adev)
>  
>  	etm_perf_symlink(drvdata->csdev, false);
>  
> +	etm_release_trace_id(drvdata);
> +
>  	/*
>  	 * Taking hotplug lock here to avoid racing between etm_remove and
>  	 * CPU hotplug call backs.
> diff --git a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
> index e8c7649f123e..3ee70b174240 100644
> --- a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
> +++ b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
> @@ -86,6 +86,8 @@ static ssize_t reset_store(struct device *dev,
>  
>  		etm_set_default(config);
>  		spin_unlock(&drvdata->spinlock);
> +		/* release trace id outside the spinlock as this fn uses it */
> +		etm_release_trace_id(drvdata);
>  	}
>  
>  	return size;
> @@ -1189,30 +1191,16 @@ static DEVICE_ATTR_RO(cpu);
>  static ssize_t traceid_show(struct device *dev,
>  			    struct device_attribute *attr, char *buf)
>  {
> -	unsigned long val;
> -	struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
> -
> -	val = etm_get_trace_id(drvdata);
> -
> -	return sprintf(buf, "%#lx\n", val);
> -}
> -
> -static ssize_t traceid_store(struct device *dev,
> -			     struct device_attribute *attr,
> -			     const char *buf, size_t size)
> -{
> -	int ret;
> -	unsigned long val;
> +	int trace_id;
>  	struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
>  
> -	ret = kstrtoul(buf, 16, &val);
> -	if (ret)
> -		return ret;
> +	trace_id = etm_read_alloc_trace_id(drvdata);
> +	if (trace_id < 0)
> +		return trace_id;
>  
> -	drvdata->traceid = val & ETM_TRACEID_MASK;
> -	return size;
> +	return sprintf(buf, "%#x\n", trace_id);
>  }
> -static DEVICE_ATTR_RW(traceid);
> +static DEVICE_ATTR_RO(traceid);
>  
>  static struct attribute *coresight_etm_attrs[] = {
>  	&dev_attr_nr_addr_cmp.attr,
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 04/10] coresight: etm4x: Use trace ID API to dynamically allocate trace ID
  2022-03-08 20:49   ` Mike Leach
@ 2022-04-05 17:25     ` Mathieu Poirier
  -1 siblings, 0 replies; 70+ messages in thread
From: Mathieu Poirier @ 2022-04-05 17:25 UTC (permalink / raw)
  To: Mike Leach
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

On Tue, Mar 08, 2022 at 08:49:54PM +0000, Mike Leach wrote:
> The trace ID API is now used to allocate trace IDs for ETM4.x / ETE
> devices.
> 
> For perf sessions, these will be allocated on enable, and released on
> disable.
> 
> For sysfs sessions, these will be allocated on enable, but only released
> on reset. This allows the sysfs session to interrogate the Trace ID used
> after the session is over - maintaining functional consistency with the
> previous allocation scheme.
> 
> The trace ID will also be allocated on read of the mgmt/trctraceid file.
> This ensures that if perf or sysfs read this before enabling trace, the
> value will be the one used for the trace session.
> 
> Trace ID initialisation is removed from the _probe() function.
> 
> Signed-off-by: Mike Leach <mike.leach@linaro.org>
> ---
>  .../coresight/coresight-etm4x-core.c          | 63 +++++++++++++++++--
>  .../coresight/coresight-etm4x-sysfs.c         | 32 +++++++++-
>  drivers/hwtracing/coresight/coresight-etm4x.h |  3 +
>  3 files changed, 89 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
> index 7f416a12000e..aa7ea5ad8b06 100644
> --- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
> +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
> @@ -42,6 +42,7 @@
>  #include "coresight-etm4x-cfg.h"
>  #include "coresight-self-hosted-trace.h"
>  #include "coresight-syscfg.h"
> +#include "coresight-trace-id.h"
>  
>  static int boot_enable;
>  module_param(boot_enable, int, 0444);
> @@ -234,6 +235,36 @@ static int etm4_trace_id(struct coresight_device *csdev)
>  	return drvdata->trcid;
>  }
>  
> +int etm4_read_alloc_trace_id(struct etmv4_drvdata *drvdata)
> +{
> +	int trace_id;
> +
> +	/*
> +	 * This will allocate a trace ID to the cpu,
> +	 * or return the one currently allocated.
> +	 */
> +	trace_id = coresight_trace_id_get_cpu_id(drvdata->cpu,
> +						 coresight_get_trace_id_map());
> +	if (trace_id > 0) {
> +		spin_lock(&drvdata->spinlock);
> +		drvdata->trcid = (u8)trace_id;
> +		spin_unlock(&drvdata->spinlock);
> +	} else {
> +		pr_err("Failed to allocate trace ID for %s on CPU%d\n",
> +		       dev_name(&drvdata->csdev->dev), drvdata->cpu);
> +	}
> +	return trace_id;
> +}
> +
> +void etm4_release_trace_id(struct etmv4_drvdata *drvdata)
> +{
> +	coresight_trace_id_put_cpu_id(drvdata->cpu,
> +				      coresight_get_trace_id_map());
> +	spin_lock(&drvdata->spinlock);
> +	drvdata->trcid = 0;
> +	spin_unlock(&drvdata->spinlock);
> +}
> +
>  struct etm4_enable_arg {
>  	struct etmv4_drvdata *drvdata;
>  	int rc;
> @@ -717,9 +748,18 @@ static int etm4_enable_perf(struct coresight_device *csdev,
>  	ret = etm4_parse_event_config(csdev, event);
>  	if (ret)
>  		goto out;
> +
> +	/* allocate a trace ID */
> +	ret =  etm4_read_alloc_trace_id(drvdata);
> +	if (ret < 0)
> +		goto out;
> +
>  	/* And enable it */
>  	ret = etm4_enable_hw(drvdata);
>  
> +	/* failed to enable */
> +	if (ret)
> +		etm4_release_trace_id(drvdata);
>  out:
>  	return ret;
>  }
> @@ -739,6 +779,11 @@ static int etm4_enable_sysfs(struct coresight_device *csdev)
>  			return ret;
>  	}
>  
> +	/* allocate a trace ID */
> +	ret = etm4_read_alloc_trace_id(drvdata);
> +	if (ret < 0)
> +		return ret;
> +
>  	spin_lock(&drvdata->spinlock);
>  
>  	/*
> @@ -756,6 +801,8 @@ static int etm4_enable_sysfs(struct coresight_device *csdev)
>  
>  	if (!ret)
>  		dev_dbg(&csdev->dev, "ETM tracing enabled\n");
> +	else
> +		etm4_release_trace_id(drvdata);
>  	return ret;
>  }
>  
> @@ -883,6 +930,9 @@ static int etm4_disable_perf(struct coresight_device *csdev,
>  	/* TRCVICTLR::SSSTATUS, bit[9] */
>  	filters->ssstatus = (control & BIT(9));
>  
> +	/* release trace ID - this may pend release if perf session is still active */
> +	etm4_release_trace_id(drvdata);
> +
>  	return 0;
>  }
>  
> @@ -908,6 +958,13 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
>  	spin_unlock(&drvdata->spinlock);
>  	cpus_read_unlock();
>  
> +	/*
> +	 * unlike for perf session - we only release trace IDs when resetting
> +	 * sysfs. This permits sysfs users to read the trace ID after the trace
> +	 * session has completed. This maintains operational behaviour with
> +	 * prior trace id allocation method
> +	 */
> +
>  	dev_dbg(&csdev->dev, "ETM tracing disabled\n");
>  }
>  
> @@ -1596,11 +1653,6 @@ static int etm4_dying_cpu(unsigned int cpu)
>  	return 0;
>  }
>  
> -static void etm4_init_trace_id(struct etmv4_drvdata *drvdata)
> -{
> -	drvdata->trcid = coresight_get_trace_id(drvdata->cpu);
> -}
> -
>  static int __etm4_cpu_save(struct etmv4_drvdata *drvdata)
>  {
>  	int i, ret = 0;
> @@ -2005,7 +2057,6 @@ static int etm4_probe(struct device *dev, void __iomem *base, u32 etm_pid)
>  	if (!desc.name)
>  		return -ENOMEM;
>  
> -	etm4_init_trace_id(drvdata);
>  	etm4_set_default(&drvdata->config);
>  
>  	pdata = coresight_get_platform_data(dev);
> diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
> index 21687cc1e4e2..bb69a203b833 100644
> --- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
> +++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
> @@ -266,10 +266,11 @@ static ssize_t reset_store(struct device *dev,
>  	config->vmid_mask0 = 0x0;
>  	config->vmid_mask1 = 0x0;
>  
> -	drvdata->trcid = drvdata->cpu + 1;
> -

That was really broken...  I'm surprised we never caught it before this
patchset. 

More comments to come tomorrow.

Thanks,
Mathieu

>  	spin_unlock(&drvdata->spinlock);
>  
> +	/* for sysfs - only release trace id when resetting */
> +	etm4_release_trace_id(drvdata);
> +
>  	cscfg_csdev_reset_feats(to_coresight_device(dev));
>  
>  	return size;
> @@ -2355,6 +2356,31 @@ static struct attribute *coresight_etmv4_attrs[] = {
>  	NULL,
>  };
>  
> +/*
> + * Trace ID allocated dynamically on enable - but also allocate on read
> + * in case sysfs or perf read before enable to ensure consistent metadata
> + * information for trace decode
> + */
> +static ssize_t trctraceid_show(struct device *dev,
> +			       struct device_attribute *attr,
> +			       char *buf)
> +{
> +	int trace_id;
> +	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
> +
> +	trace_id = etm4_read_alloc_trace_id(drvdata);
> +	if (trace_id < 0)
> +		return trace_id;
> +
> +	return scnprintf(buf, PAGE_SIZE, "0x%x\n", trace_id);
> +}
> +
> +/* mgmt group uses extended attributes - no standard macro available */
> +static struct dev_ext_attribute dev_attr_trctraceid = {
> +		__ATTR(trctraceid, 0444, trctraceid_show, NULL),
> +		(void *)(unsigned long)TRCTRACEIDR
> +};
> +
>  struct etmv4_reg {
>  	struct coresight_device *csdev;
>  	u32 offset;
> @@ -2491,7 +2517,7 @@ static struct attribute *coresight_etmv4_mgmt_attrs[] = {
>  	coresight_etm4x_reg(trcpidr3, TRCPIDR3),
>  	coresight_etm4x_reg(trcoslsr, TRCOSLSR),
>  	coresight_etm4x_reg(trcconfig, TRCCONFIGR),
> -	coresight_etm4x_reg(trctraceid, TRCTRACEIDR),
> +	&dev_attr_trctraceid.attr.attr,
>  	coresight_etm4x_reg(trcdevarch, TRCDEVARCH),
>  	NULL,
>  };
> diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
> index 3c4d69b096ca..64976a00c839 100644
> --- a/drivers/hwtracing/coresight/coresight-etm4x.h
> +++ b/drivers/hwtracing/coresight/coresight-etm4x.h
> @@ -1010,4 +1010,7 @@ static inline bool etm4x_is_ete(struct etmv4_drvdata *drvdata)
>  {
>  	return drvdata->arch >= ETM_ARCH_ETE;
>  }
> +
> +int etm4_read_alloc_trace_id(struct etmv4_drvdata *drvdata);
> +void etm4_release_trace_id(struct etmv4_drvdata *drvdata);
>  #endif
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 06/10] coresight: perf: traceid: Add perf notifiers for trace ID
  2022-03-08 20:49   ` Mike Leach
@ 2022-04-06 17:11     ` Mathieu Poirier
  -1 siblings, 0 replies; 70+ messages in thread
From: Mathieu Poirier @ 2022-04-06 17:11 UTC (permalink / raw)
  To: Mike Leach
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

On Tue, Mar 08, 2022 at 08:49:56PM +0000, Mike Leach wrote:
> Adds in notifier calls to the trace ID allocator that perf
> events are starting and stopping.
> 
> This ensures that Trace IDs associated with CPUs remain the same
> throughout the perf session, and are only released when all perf
> sessions are complete.
> 
> Signed-off-by: Mike Leach <mike.leach@linaro.org>
> ---
>  drivers/hwtracing/coresight/coresight-etm-perf.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
> index c039b6ae206f..008f9dac429d 100644
> --- a/drivers/hwtracing/coresight/coresight-etm-perf.c
> +++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
> @@ -22,6 +22,7 @@
>  #include "coresight-etm-perf.h"
>  #include "coresight-priv.h"
>  #include "coresight-syscfg.h"
> +#include "coresight-trace-id.h"
>  
>  static struct pmu etm_pmu;
>  static bool etm_perf_up;
> @@ -223,11 +224,21 @@ static void free_event_data(struct work_struct *work)
>  		struct list_head **ppath;
>  
>  		ppath = etm_event_cpu_path_ptr(event_data, cpu);
> -		if (!(IS_ERR_OR_NULL(*ppath)))
> +		if (!(IS_ERR_OR_NULL(*ppath))) {
>  			coresight_release_path(*ppath);
> +			/*
> +			 * perf may have read a trace id for a cpu, but never actually
> +			 * executed code on that cpu - which means the trace id would
> +			 * not release on disable. Re-release here to be sure.
> +			 */
> +			coresight_trace_id_put_cpu_id(cpu, coresight_get_trace_id_map());

A CPU gets a traceID in event_etm_start() when the event is installed for
running.  Do you see a scenario where etm_free_aux() is called without
previously calling event_etm_stop()? 

> +		}
>  		*ppath = NULL;
>  	}
>  
> +	/* mark perf event as done for trace id allocator */
> +	coresight_trace_id_perf_stop();
> +
>  	free_percpu(event_data->path);
>  	kfree(event_data);
>  }
> @@ -314,6 +325,9 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
>  		sink = user_sink = coresight_get_sink_by_id(id);
>  	}
>  
> +	/* tell the trace ID allocator that a perf event is starting up */
> +	coresight_trace_id_perf_start();
> +
>  	/* check if user wants a coresight configuration selected */
>  	cfg_hash = (u32)((event->attr.config2 & GENMASK_ULL(63, 32)) >> 32);
>  	if (cfg_hash) {
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 06/10] coresight: perf: traceid: Add perf notifiers for trace ID
  2022-04-06 17:11     ` Mathieu Poirier
@ 2022-04-06 19:38       ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-04-06 19:38 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Hi Mathieu,

On Wed, 6 Apr 2022 at 18:11, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
>
> On Tue, Mar 08, 2022 at 08:49:56PM +0000, Mike Leach wrote:
> > Adds in notifier calls to the trace ID allocator that perf
> > events are starting and stopping.
> >
> > This ensures that Trace IDs associated with CPUs remain the same
> > throughout the perf session, and are only released when all perf
> > sessions are complete.
> >
> > Signed-off-by: Mike Leach <mike.leach@linaro.org>
> > ---
> >  drivers/hwtracing/coresight/coresight-etm-perf.c | 16 +++++++++++++++-
> >  1 file changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
> > index c039b6ae206f..008f9dac429d 100644
> > --- a/drivers/hwtracing/coresight/coresight-etm-perf.c
> > +++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
> > @@ -22,6 +22,7 @@
> >  #include "coresight-etm-perf.h"
> >  #include "coresight-priv.h"
> >  #include "coresight-syscfg.h"
> > +#include "coresight-trace-id.h"
> >
> >  static struct pmu etm_pmu;
> >  static bool etm_perf_up;
> > @@ -223,11 +224,21 @@ static void free_event_data(struct work_struct *work)
> >               struct list_head **ppath;
> >
> >               ppath = etm_event_cpu_path_ptr(event_data, cpu);
> > -             if (!(IS_ERR_OR_NULL(*ppath)))
> > +             if (!(IS_ERR_OR_NULL(*ppath))) {
> >                       coresight_release_path(*ppath);
> > +                     /*
> > +                      * perf may have read a trace id for a cpu, but never actually
> > +                      * executed code on that cpu - which means the trace id would
> > +                      * not release on disable. Re-release here to be sure.
> > +                      */
> > +                     coresight_trace_id_put_cpu_id(cpu, coresight_get_trace_id_map());
>
> A CPU gets a traceID in event_etm_start() when the event is installed for
> running.  Do you see a scenario where etm_free_aux() is called without
> previously calling event_etm_stop()?
>

To ensure that IDs are obtained in a timely manner, they are assigned on
read. Therefore when cs_etm.c::cs_etm_info_fill() is called, the ID will
potentially be read for all CPUs and dumped into the AUXTRACE_INFO data
at the top of the perf.data file.
However, a --per-thread execution may actually only start the event on
a subset of CPUs, hence we make sure the IDs for all CPUs are released here.
This was proven during testing when I ran both --per-thread and CPU-wide
tests, with logging to monitor the ID assignments.

I have deliberately programmed this defensively, on the basis that the
timing and ordering of the code/callbacks in today's perf may not
necessarily be the same in tomorrow's perf.

In future we may be able to use Suzuki's idea of embedding the ID in an
alternative packet in the perf.data file, but I think that should be a
separate change as it affects decode in a big way.

Regards

Mike


> > +             }
> >               *ppath = NULL;
> >       }
> >
> > +     /* mark perf event as done for trace id allocator */
> > +     coresight_trace_id_perf_stop();
> > +
> >       free_percpu(event_data->path);
> >       kfree(event_data);
> >  }
> > @@ -314,6 +325,9 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
> >               sink = user_sink = coresight_get_sink_by_id(id);
> >       }
> >
> > +     /* tell the trace ID allocator that a perf event is starting up */
> > +     coresight_trace_id_perf_start();
> > +
> >       /* check if user wants a coresight configuration selected */
> >       cfg_hash = (u32)((event->attr.config2 & GENMASK_ULL(63, 32)) >> 32);
> >       if (cfg_hash) {
> > --
> > 2.17.1
> >



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 01/10] coresight: trace-id: Add API to dynamically assign trace ID values
  2022-04-05 17:02     ` Mathieu Poirier
@ 2022-04-06 19:45       ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-04-06 19:45 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Hi Mathieu,

On Tue, 5 Apr 2022 at 18:02, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
>
> Good morning,
>
> To avoid getting tunnel vision, I haven't read any of the comments already posted
> about this set.  As such some of my observations may be redundant.
>
> On Tue, Mar 08, 2022 at 08:49:51PM +0000, Mike Leach wrote:
> > The existing mechanism to assign Trace ID values to sources is limited
> > and does not scale for larger multicore / multi trace source systems.
> >
> > The API introduces functions that reserve IDs based on availabilty
> > represented by a coresight_trace_id_map structure. This records the
> > used and free IDs in a bitmap.
> >
> > CPU bound sources such as ETMs use the coresight_trace_id_get_cpu_id /
> > coresight_trace_id_put_cpu_id pair of functions. The API will record
> > the ID associated with the CPU. This ensures that the same ID will be
> > re-used while perf events are active on the CPU. The put_cpu_id function
> > will pend release of the ID until all perf cs_etm sessions are complete.
> >
> > Non-cpu sources, such as the STM can use coresight_trace_id_get_system_id /
> > coresight_trace_id_put_system_id.
> >
> > Signed-off-by: Mike Leach <mike.leach@linaro.org>
> > ---
> >  drivers/hwtracing/coresight/Makefile          |   2 +-
> >  .../hwtracing/coresight/coresight-trace-id.c  | 222 ++++++++++++++++++
> >  .../hwtracing/coresight/coresight-trace-id.h  |  69 ++++++
> >  3 files changed, 292 insertions(+), 1 deletion(-)
> >  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
> >  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> >
> > diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
> > index b6c4a48140ec..329a0c704b87 100644
> > --- a/drivers/hwtracing/coresight/Makefile
> > +++ b/drivers/hwtracing/coresight/Makefile
> > @@ -6,7 +6,7 @@ obj-$(CONFIG_CORESIGHT) += coresight.o
> >  coresight-y := coresight-core.o  coresight-etm-perf.o coresight-platform.o \
> >               coresight-sysfs.o coresight-syscfg.o coresight-config.o \
> >               coresight-cfg-preload.o coresight-cfg-afdo.o \
> > -             coresight-syscfg-configfs.o
> > +             coresight-syscfg-configfs.o coresight-trace-id.o
> >  obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
> >  coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
> >                     coresight-tmc-etr.o
> > diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
> > new file mode 100644
> > index 000000000000..ce6c7d7b55d6
> > --- /dev/null
> > +++ b/drivers/hwtracing/coresight/coresight-trace-id.c
> > @@ -0,0 +1,222 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (c) 2022, Linaro Limited, All rights reserved.
> > + * Author: Mike Leach <mike.leach@linaro.org>
> > + */
> > +#include <linux/kernel.h>
> > +#include <linux/types.h>
> > +#include <linux/spinlock.h>
> > +
> > +#include "coresight-trace-id.h"
> > +
> > +/* need to keep data on ids & association with cpus. */
> > +struct cpu_id_info {
> > +     int id;
> > +     bool pend_rel;
> > +};
> > +
> > +/* maintain a record of the current mapping of cpu IDs */
> > +static DEFINE_PER_CPU(struct cpu_id_info, cpu_ids);
> > +
> > +/* a list of currently used id_maps */
> > +static LIST_HEAD(id_map_list);
> > +
> > +/* perf session active flag */
> > +static int perf_cs_etm_session_active;
> > +
> > +/* lock to protect id_map list and cpu data  */
> > +static DEFINE_SPINLOCK(id_map_lock);
> > +
> > +/* ID 0 is reserved */
> > +#define CORESIGHT_TRACE_ID_RES_0 0
> > +
> > +/* ID 0x70 onwards are reserved */
> > +#define CORESIGHT_TRACE_ID_RES_RANGE_LO 0x70
> > +#define CORESIGHT_TRACE_ID_RES_RANGE_HI 0x7F
> > +
> > +#define IS_VALID_ID(id)      \
> > +     ((id > CORESIGHT_TRACE_ID_RES_0) && (id < CORESIGHT_TRACE_ID_RES_RANGE_LO))
> > +
> > +static void coresight_trace_id_set_inuse(int id, struct coresight_trace_id_map *id_map)
> > +{
> > +     if (IS_VALID_ID(id))
> > +             set_bit(id, id_map->avail_ids);
> > +}
> > +
> > +static void coresight_trace_id_clear_inuse(int id, struct coresight_trace_id_map *id_map)
> > +{
> > +     if (IS_VALID_ID(id))
> > +             clear_bit(id, id_map->avail_ids);
> > +}
> > +
> > +static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_map *id_map)
> > +{
> > +     if (IS_VALID_ID(id))
> > +             set_bit(id, id_map->pend_rel_ids);
> > +}
> > +
> > +static void coresight_trace_id_clear_pend_rel(int id, struct coresight_trace_id_map *id_map)
> > +{
> > +     if (IS_VALID_ID(id))
> > +             clear_bit(id, id_map->pend_rel_ids);
> > +}
> > +
> > +static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
> > +{
> > +     int id;
> > +
> > +     id = find_first_zero_bit(id_map->avail_ids, CORESIGHT_TRACE_IDS_MAX);
> > +     if (id >= CORESIGHT_TRACE_IDS_MAX)
> > +             id = -EINVAL;
> > +     return id;
> > +}
> > +
> > +/* release all pending IDs for all current maps & clear CPU associations */
> > +static void coresight_trace_id_release_all_pending(void)
> > +{
> > +     struct coresight_trace_id_map *id_map;
> > +     int cpu, bit;
> > +
> > +     list_for_each_entry(id_map, &id_map_list, node) {
> > +             for_each_set_bit(bit, id_map->pend_rel_ids, CORESIGHT_TRACE_IDS_MAX) {
> > +                     clear_bit(bit, id_map->avail_ids);
> > +                     clear_bit(bit, id_map->pend_rel_ids);
> > +             }
> > +     }
> > +
> > +     for_each_possible_cpu(cpu) {
> > +             if (per_cpu(cpu_ids, cpu).pend_rel) {
> > +                     per_cpu(cpu_ids, cpu).pend_rel = false;
> > +                     per_cpu(cpu_ids, cpu).id = 0;
> > +             }
> > +     }
> > +}
> > +
> > +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> > +{
> > +     unsigned long flags;
> > +     int id;
> > +
> > +     spin_lock_irqsave(&id_map_lock, flags);
> > +     id = per_cpu(cpu_ids, cpu).id;
> > +     if (!id) {
> > +             id = coresight_trace_id_find_new_id(id_map);
> > +             if (id < 0)
> > +                     goto get_cpu_id_out;
> > +     }
> > +
> > +     per_cpu(cpu_ids, cpu).id = id;
> > +     per_cpu(cpu_ids, cpu).pend_rel = false;
> > +     coresight_trace_id_set_inuse(id, id_map);
> > +     coresight_trace_id_clear_pend_rel(id, id_map);
>
> The above should have been done when the ID for this CPU has been set for the
> first time.  Therefore we should simply release the spinlock and return if an ID
> has already been set.
>

Agreed.
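
For v2 I'll rework it along these lines (untested sketch - I've kept
the pend_rel clear for the existing-ID case so that an ID put while a
perf session is still active is not dropped on re-use):

int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
{
	unsigned long flags;
	int id;

	spin_lock_irqsave(&id_map_lock, flags);

	/* check for an existing allocation for this CPU first */
	id = per_cpu(cpu_ids, cpu).id;
	if (id) {
		/* re-use during a session cancels any pending release */
		per_cpu(cpu_ids, cpu).pend_rel = false;
		coresight_trace_id_clear_pend_rel(id, id_map);
		goto get_cpu_id_out;
	}

	/* otherwise allocate a new ID and associate it with the CPU */
	id = coresight_trace_id_find_new_id(id_map);
	if (id < 0)
		goto get_cpu_id_out;

	per_cpu(cpu_ids, cpu).id = id;
	per_cpu(cpu_ids, cpu).pend_rel = false;
	coresight_trace_id_set_inuse(id, id_map);

get_cpu_id_out:
	spin_unlock_irqrestore(&id_map_lock, flags);
	return id;
}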

> > +
> > +get_cpu_id_out:
> > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > +     return id;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
> > +
> > +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> > +{
> > +     unsigned long flags;
> > +     int id;
> > +
> > +     spin_lock_irqsave(&id_map_lock, flags);
> > +     id = per_cpu(cpu_ids, cpu).id;
> > +     if (!id)
> > +             goto put_cpu_id_out;
> > +
> > +     if (perf_cs_etm_session_active) {
> > +             /* set release at pending if perf still active */
> > +             coresight_trace_id_set_pend_rel(id, id_map);
> > +             per_cpu(cpu_ids, cpu).pend_rel = true;
> > +     } else {
> > +             /* otherwise clear id */
> > +             coresight_trace_id_clear_inuse(id, id_map);
> > +             per_cpu(cpu_ids, cpu).id = 0;
> > +     }
> > +
> > + put_cpu_id_out:
> > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
> > +
> > +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
> > +{
>
> So far I have reviewed until patch 05 and I find passing the id_map every time
> we need an ID quite heavy, especially since it is always the same one.
>

My initial thought was to ensure that the API would handle the
forthcoming requirements for per-sink ID maps without having to change
too much code, both within the API and the client code.

However, an alternative would be to drop the id_map parameter from the
external API calls - simplifying both the API and the client code - and
extend the API later, once passing in an id_map is really required.
This would mean dropping the list of maps for now and using a single
default id_map within the ID map code. I would still retain the id_map
parameter for the internal functions, as I feel it is better practice
to have functions operate on data passed to them rather than on a
global blob.
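
As a rough illustration (sketch only - the underscore-prefixed helpers
stand for the current functions made internal, names not final):

static struct coresight_trace_id_map default_id_map;

int coresight_trace_id_get_cpu_id(int cpu)
{
	return _coresight_trace_id_get_cpu_id(cpu, &default_id_map);
}

void coresight_trace_id_put_cpu_id(int cpu)
{
	_coresight_trace_id_put_cpu_id(cpu, &default_id_map);
}

int coresight_trace_id_get_system_id(void)
{
	return _coresight_trace_id_get_system_id(&default_id_map);
}

void coresight_trace_id_put_system_id(int id)
{
	_coresight_trace_id_put_system_id(&default_id_map, id);
}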

Regards

Mike

> > +     unsigned long flags;
> > +     int id;
> > +
> > +     spin_lock_irqsave(&id_map_lock, flags);
> > +     id = coresight_trace_id_find_new_id(id_map);
> > +     if (id > 0)
> > +             coresight_trace_id_set_inuse(id, id_map);
> > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > +
> > +     return id;
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
> > +
> > +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id)
> > +{
> > +     unsigned long flags;
> > +
> > +     spin_lock_irqsave(&id_map_lock, flags);
> > +     coresight_trace_id_clear_inuse(id, id_map);
> > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
> > +
> > +void coresight_trace_id_perf_start(void)
> > +{
> > +     unsigned long flags;
> > +
> > +     spin_lock_irqsave(&id_map_lock, flags);
> > +     perf_cs_etm_session_active++;
> > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
> > +
> > +void coresight_trace_id_perf_stop(void)
> > +{
> > +     unsigned long flags;
> > +
> > +     spin_lock_irqsave(&id_map_lock, flags);
> > +     perf_cs_etm_session_active--;
> > +     if (!perf_cs_etm_session_active)
> > +             coresight_trace_id_release_all_pending();
> > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
> > +
> > +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map)
> > +{
> > +     unsigned long flags;
> > +     int bit;
> > +
> > +     /* set all reserved bits as in-use */
> > +     set_bit(CORESIGHT_TRACE_ID_RES_0, id_map->avail_ids);
> > +     for (bit = CORESIGHT_TRACE_ID_RES_RANGE_LO;
> > +          bit <= CORESIGHT_TRACE_ID_RES_RANGE_HI; bit++)
> > +             set_bit(bit, id_map->avail_ids);
> > +
> > +     spin_lock_irqsave(&id_map_lock, flags);
> > +
> > +     /* add id_map to the list */
> > +     list_add(&id_map->node, &id_map_list);
> > +
> > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_trace_id_init_id_map);
> > +
> > +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map)
> > +{
> > +     unsigned long flags;
> > +
> > +     spin_lock_irqsave(&id_map_lock, flags);
> > +
> > +     /* remove id_map from list */
> > +     list_del(&id_map->node);
> > +
> > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(coresight_trace_id_release_id_map);
> > diff --git a/drivers/hwtracing/coresight/coresight-trace-id.h b/drivers/hwtracing/coresight/coresight-trace-id.h
> > new file mode 100644
> > index 000000000000..01db2441cee6
> > --- /dev/null
> > +++ b/drivers/hwtracing/coresight/coresight-trace-id.h
> > @@ -0,0 +1,69 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright(C) 2022 Linaro Limited. All rights reserved.
> > + * Author: Mike Leach <mike.leach@linaro.org>
> > + */
> > +
> > +#ifndef _CORESIGHT_TRACE_ID_H
> > +#define _CORESIGHT_TRACE_ID_H
> > +
> > +/*
> > + * Coresight trace ID allocation API
> > + *
> > + * With multi cpu systems, and more additional trace sources a scalable
> > + * trace ID reservation system is required.
> > + *
> > + * The system will allocate Ids on a demand basis, and allow them to be
> > + * released when done.
> > + *
> > + * In order to ensure that a consistent cpu / ID matching is maintained
> > + * throughout a perf cs_etm event session - a session in progress flag will
> > + * be maintained, and released IDs not cleared until the perf session is
> > + * complete. This allows the same CPU to be re-allocated its prior ID.
> > + *
> > + *
> > + * Trace ID maps will be created and initialised to prevent architecturally
> > + * reserved IDs from being allocated.
> > + *
> > + * API permits multiple maps to be maintained - for large systems where
> > + * different sets of cpus trace into different independent sinks.
> > + */
> > +
> > +#include <linux/bitops.h>
> > +#include <linux/types.h>
> > +
> > +
> > +/* architecturally we have 128 IDs some of which are reserved */
> > +#define CORESIGHT_TRACE_IDS_MAX 128
> > +
> > +/**
> > + * Trace ID map.
> > + *
> > + * @avail_ids:       Bitmap to register available (bit = 0) and in use (bit = 1) IDs.
> > + *           Initialised so that the reserved IDs are permanently marked as in use.
> > + * @pend_rel_ids: CPU IDs that have been released by the trace source but not yet marked
> > + *                as available, to allow re-allocation to the same CPU during a perf session.
> > + * @node:    List entry to add to list of managed trace id maps.
> > + */
> > +struct coresight_trace_id_map {
> > +     DECLARE_BITMAP(avail_ids, CORESIGHT_TRACE_IDS_MAX);
> > +     DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
> > +     struct list_head node;
> > +};
> > +
> > +/* Allocate and release IDs */
> > +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> > +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map);
> > +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> > +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id);
> > +
> > +/* initialise ID map - block reserved IDs, add to list of ID maps */
> > +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map);
> > +/* remove ID maps from list of maps */
> > +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map);
> > +
> > +/* notifiers for perf session start and stop */
> > +void coresight_trace_id_perf_start(void);
> > +void coresight_trace_id_perf_stop(void);
> > +
> > +#endif /* _CORESIGHT_TRACE_ID_H */
> > --
> > 2.17.1
> >



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 05/10] coresight: etm3x: Use trace ID API to allocate IDs
  2022-04-05 17:22     ` Mathieu Poirier
@ 2022-04-06 19:47       ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-04-06 19:47 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Hi Mathieu,

On Tue, 5 Apr 2022 at 18:22, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
>
> On Tue, Mar 08, 2022 at 08:49:55PM +0000, Mike Leach wrote:
> > Use the TraceID API to allocate ETM trace IDs dynamically.
> >
> > As with the etm4x we allocate on enable / disable for perf,
> > allocate on enable / reset for sysfs.
> >
> > Additionally we allocate on sysfs file read as both perf and sysfs
> > can read the ID before enabling the hardware.
> >
> > Remove sysfs option to write trace ID - which is inconsistent with
> > both the dynamic allocation method and the fixed allocation method
> > previously used.
> >
> > Signed-off-by: Mike Leach <mike.leach@linaro.org>
> > ---
> >  drivers/hwtracing/coresight/coresight-etm.h   |  2 +
> >  .../coresight/coresight-etm3x-core.c          | 72 ++++++++++++++++---
> >  .../coresight/coresight-etm3x-sysfs.c         | 28 +++-----
> >  3 files changed, 71 insertions(+), 31 deletions(-)
> >
> > diff --git a/drivers/hwtracing/coresight/coresight-etm.h b/drivers/hwtracing/coresight/coresight-etm.h
> > index f3ab96eaf44e..3667428d38b6 100644
> > --- a/drivers/hwtracing/coresight/coresight-etm.h
> > +++ b/drivers/hwtracing/coresight/coresight-etm.h
> > @@ -287,4 +287,6 @@ int etm_get_trace_id(struct etm_drvdata *drvdata);
> >  void etm_set_default(struct etm_config *config);
> >  void etm_config_trace_mode(struct etm_config *config);
> >  struct etm_config *get_etm_config(struct etm_drvdata *drvdata);
> > +int etm_read_alloc_trace_id(struct etm_drvdata *drvdata);
> > +void etm_release_trace_id(struct etm_drvdata *drvdata);
> >  #endif
> > diff --git a/drivers/hwtracing/coresight/coresight-etm3x-core.c b/drivers/hwtracing/coresight/coresight-etm3x-core.c
> > index 7d413ba8b823..98213503bd09 100644
> > --- a/drivers/hwtracing/coresight/coresight-etm3x-core.c
> > +++ b/drivers/hwtracing/coresight/coresight-etm3x-core.c
> > @@ -32,6 +32,7 @@
> >
> >  #include "coresight-etm.h"
> >  #include "coresight-etm-perf.h"
> > +#include "coresight-trace-id.h"
> >
> >  /*
> >   * Not really modular but using module_param is the easiest way to
> > @@ -490,18 +491,57 @@ static int etm_trace_id(struct coresight_device *csdev)
> >       return etm_get_trace_id(drvdata);
> >  }
> >
> > +int etm_read_alloc_trace_id(struct etm_drvdata *drvdata)
> > +{
> > +     int trace_id;
> > +
> > +     /*
> > +      * This will allocate a trace ID to the cpu,
> > +      * or return the one currently allocated.
> > +      */
> > +     trace_id = coresight_trace_id_get_cpu_id(drvdata->cpu,
> > +                                              coresight_get_trace_id_map());
> > +     if (trace_id > 0) {
> > +             spin_lock(&drvdata->spinlock);
> > +             drvdata->traceid = (u8)trace_id;
> > +             spin_unlock(&drvdata->spinlock);
> > +     } else {
> > +             pr_err("Failed to allocate trace ID for %s on CPU%d\n",
> > +                    dev_name(&drvdata->csdev->dev), drvdata->cpu);
> > +     }
> > +     return trace_id;
> > +}
> > +
> > +void etm_release_trace_id(struct etm_drvdata *drvdata)
> > +{
> > +     coresight_trace_id_put_cpu_id(drvdata->cpu,
> > +                                   coresight_get_trace_id_map());
> > +     spin_lock(&drvdata->spinlock);
> > +     drvdata->traceid = 0;
> > +     spin_unlock(&drvdata->spinlock);
> > +}
> > +
> >  static int etm_enable_perf(struct coresight_device *csdev,
> >                          struct perf_event *event)
> >  {
> >       struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
> > +     int ret;
> >
> >       if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
> >               return -EINVAL;
> >
> > +     ret = etm_read_alloc_trace_id(drvdata);
> > +     if (ret < 0)
> > +             return ret;
> >
>
> For etm4x, this was done after etm_parse_event_config().  Please do the same
> here.
>
>
> >       /* Configure the tracer based on the session's specifics */
> >       etm_parse_event_config(drvdata, event);
> >       /* And enable it */
> > -     return etm_enable_hw(drvdata);
> > +     ret = etm_enable_hw(drvdata);
> > +
> > +     if (ret)
> > +             etm_release_trace_id(drvdata);
> > +     return ret;
> >  }
> >
> >  static int etm_enable_sysfs(struct coresight_device *csdev)
> > @@ -510,6 +550,10 @@ static int etm_enable_sysfs(struct coresight_device *csdev)
> >       struct etm_enable_arg arg = { };
> >       int ret;
> >
> > +     ret = etm_read_alloc_trace_id(drvdata);
> > +     if (ret < 0)
> > +             return ret;
> > +
>
> Please add the same comment as in etm4_disable_sysfs() in etm_disable_sysfs().
>

Agreed with both comments on this patch.
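
For the first one, etm_enable_perf() in v2 will end up something like
this (untested sketch, mirroring the etm4x ordering):

static int etm_enable_perf(struct coresight_device *csdev,
			   struct perf_event *event)
{
	struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
	int ret;

	if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
		return -EINVAL;

	/* Configure the tracer based on the session's specifics */
	etm_parse_event_config(drvdata, event);

	/* allocate the trace ID after parsing the event config, as etm4x does */
	ret = etm_read_alloc_trace_id(drvdata);
	if (ret < 0)
		return ret;

	/* And enable it */
	ret = etm_enable_hw(drvdata);
	if (ret)
		etm_release_trace_id(drvdata);

	return ret;
}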

Thanks

Mike


> >       spin_lock(&drvdata->spinlock);
> >
> >       /*
> > @@ -532,6 +576,8 @@ static int etm_enable_sysfs(struct coresight_device *csdev)
> >
> >       if (!ret)
> >               dev_dbg(&csdev->dev, "ETM tracing enabled\n");
> > +     else
> > +             etm_release_trace_id(drvdata);
> >       return ret;
> >  }
> >
> > @@ -611,6 +657,8 @@ static void etm_disable_perf(struct coresight_device *csdev)
> >       coresight_disclaim_device_unlocked(csdev);
> >
> >       CS_LOCK(drvdata->base);
> > +
> > +     etm_release_trace_id(drvdata);
> >  }
> >
> >  static void etm_disable_sysfs(struct coresight_device *csdev)
> > @@ -781,11 +829,6 @@ static void etm_init_arch_data(void *info)
> >       CS_LOCK(drvdata->base);
> >  }
> >
> > -static void etm_init_trace_id(struct etm_drvdata *drvdata)
> > -{
> > -     drvdata->traceid = coresight_get_trace_id(drvdata->cpu);
> > -}
> > -
> >  static int __init etm_hp_setup(void)
> >  {
> >       int ret;
> > @@ -871,7 +914,6 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
> >       if (etm_arch_supported(drvdata->arch) == false)
> >               return -EINVAL;
> >
> > -     etm_init_trace_id(drvdata);
> >       etm_set_default(&drvdata->config);
> >
> >       pdata = coresight_get_platform_data(dev);
> > @@ -891,10 +933,12 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
> >               return PTR_ERR(drvdata->csdev);
> >
> >       ret = etm_perf_symlink(drvdata->csdev, true);
> > -     if (ret) {
> > -             coresight_unregister(drvdata->csdev);
> > -             return ret;
> > -     }
> > +     if (ret)
> > +             goto cs_unregister;
> > +
> > +     ret = etm_read_alloc_trace_id(drvdata);
> > +     if (ret < 0)
> > +             goto cs_unregister;
> >
> >       etmdrvdata[drvdata->cpu] = drvdata;
> >
> > @@ -907,6 +951,10 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
> >       }
> >
> >       return 0;
> > +
> > +cs_unregister:
> > +     coresight_unregister(drvdata->csdev);
> > +     return ret;
> >  }
> >
> >  static void clear_etmdrvdata(void *info)
> > @@ -922,6 +970,8 @@ static void etm_remove(struct amba_device *adev)
> >
> >       etm_perf_symlink(drvdata->csdev, false);
> >
> > +     etm_release_trace_id(drvdata);
> > +
> >       /*
> >        * Taking hotplug lock here to avoid racing between etm_remove and
> >        * CPU hotplug call backs.
> > diff --git a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
> > index e8c7649f123e..3ee70b174240 100644
> > --- a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
> > +++ b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
> > @@ -86,6 +86,8 @@ static ssize_t reset_store(struct device *dev,
> >
> >               etm_set_default(config);
> >               spin_unlock(&drvdata->spinlock);
> > +             /* release trace id outside the spinlock as this fn uses it */
> > +             etm_release_trace_id(drvdata);
> >       }
> >
> >       return size;
> > @@ -1189,30 +1191,16 @@ static DEVICE_ATTR_RO(cpu);
> >  static ssize_t traceid_show(struct device *dev,
> >                           struct device_attribute *attr, char *buf)
> >  {
> > -     unsigned long val;
> > -     struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
> > -
> > -     val = etm_get_trace_id(drvdata);
> > -
> > -     return sprintf(buf, "%#lx\n", val);
> > -}
> > -
> > -static ssize_t traceid_store(struct device *dev,
> > -                          struct device_attribute *attr,
> > -                          const char *buf, size_t size)
> > -{
> > -     int ret;
> > -     unsigned long val;
> > +     int trace_id;
> >       struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
> >
> > -     ret = kstrtoul(buf, 16, &val);
> > -     if (ret)
> > -             return ret;
> > +     trace_id = etm_read_alloc_trace_id(drvdata);
> > +     if (trace_id < 0)
> > +             return trace_id;
> >
> > -     drvdata->traceid = val & ETM_TRACEID_MASK;
> > -     return size;
> > +     return sprintf(buf, "%#x\n", trace_id);
> >  }
> > -static DEVICE_ATTR_RW(traceid);
> > +static DEVICE_ATTR_RO(traceid);
> >
> >  static struct attribute *coresight_etm_attrs[] = {
> >       &dev_attr_nr_addr_cmp.attr,
> > --
> > 2.17.1
> >



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 06/10] coresight: perf: traceid: Add perf notifiers for trace ID
  2022-04-06 19:38       ` Mike Leach
@ 2022-04-07 17:46         ` Mathieu Poirier
  -1 siblings, 0 replies; 70+ messages in thread
From: Mathieu Poirier @ 2022-04-07 17:46 UTC (permalink / raw)
  To: Mike Leach
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Good morning,

On Wed, Apr 06, 2022 at 08:38:08PM +0100, Mike Leach wrote:
> Hi Mathieu,
> 
> On Wed, 6 Apr 2022 at 18:11, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
> >
> > On Tue, Mar 08, 2022 at 08:49:56PM +0000, Mike Leach wrote:
> > > Adds in notifier calls to the trace ID allocator that perf
> > > events are starting and stopping.
> > >
> > > This ensures that Trace IDs associated with CPUs remain the same
> > > throughout the perf session, and are only released when all perf
> > > sessions are complete.
> > >
> > > Signed-off-by: Mike Leach <mike.leach@linaro.org>
> > > ---
> > >  drivers/hwtracing/coresight/coresight-etm-perf.c | 16 +++++++++++++++-
> > >  1 file changed, 15 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
> > > index c039b6ae206f..008f9dac429d 100644
> > > --- a/drivers/hwtracing/coresight/coresight-etm-perf.c
> > > +++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
> > > @@ -22,6 +22,7 @@
> > >  #include "coresight-etm-perf.h"
> > >  #include "coresight-priv.h"
> > >  #include "coresight-syscfg.h"
> > > +#include "coresight-trace-id.h"
> > >
> > >  static struct pmu etm_pmu;
> > >  static bool etm_perf_up;
> > > @@ -223,11 +224,21 @@ static void free_event_data(struct work_struct *work)
> > >               struct list_head **ppath;
> > >
> > >               ppath = etm_event_cpu_path_ptr(event_data, cpu);
> > > -             if (!(IS_ERR_OR_NULL(*ppath)))
> > > +             if (!(IS_ERR_OR_NULL(*ppath))) {
> > >                       coresight_release_path(*ppath);
> > > +                     /*
> > > +                      * perf may have read a trace id for a cpu, but never actually
> > > +                      * executed code on that cpu - which means the trace id would
> > > +                      * not release on disable. Re-release here to be sure.
> > > +                      */
> > > +                     coresight_trace_id_put_cpu_id(cpu, coresight_get_trace_id_map());
> >
> > A CPU gets a traceID in event_etm_start() when the event is installed for
> > running.  Do you see a scenario where etm_free_aux() is called without
> > previously calling event_etm_stop()?
> >
> 
> To ensure that IDs are obtained in a timely manner, they are assigned
> on read. Therefore when cs_etm.c::cs_etm_info_fill() is called, the ID
> will potentially be read for all CPUs - and dumped into the
> AUXTRACE_INFO record at the top of the perf.data file.

Right, I realised that when I got to the perf tools part.  If we end up keeping
the current approach it would be nice to see this explanation in the comment.
Otherwise it will be very difficult for anyone new to the project to understand
what is going on.
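
Something along these lines would do (just a suggestion for the
comment wording, to be adapted as needed):

		if (!(IS_ERR_OR_NULL(*ppath))) {
			coresight_release_path(*ppath);
			/*
			 * The trace ID for a CPU is allocated when it is
			 * first read - e.g. cs_etm_info_fill() reads the
			 * sysfs ID for every CPU at session setup. A CPU
			 * that never ran the event will not go through
			 * event_etm_stop(), so release its ID here to be
			 * sure it is not leaked.
			 */
			coresight_trace_id_put_cpu_id(cpu, coresight_get_trace_id_map());
		}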

> However, a --per-thread execution may actually only start the event on
> a subset of cpus, hence we ensure that all cpus are released.
> This was proven during testing, when I ran both --per-thread and
> CPU-wide tests with logging to monitor the ID assignments.
> 
> I have programmed this deliberately defensively - on the basis that
> the timings and orderings of the code/callbacks in today's perf may
> not necessarily be the same in tomorrow's perf.
> 
> In future we may be able to use Suzuki's idea of embedding the ID into
> an alternative packet in the perf.data file, but I think that should
> be a separate change as it affects decode in a big way.
>

I really like Suzuki's idea of using a PERF_RECORD_AUX_OUTPUT_HW_ID to convey
the traceID to user space for perf sessions.  At the very least it is worth
prototyping.

I generally prefer an incremental approach but in this case it might be better
to move forward the right way, right away.  We would also avoid having to
maintain the old way, the intermediate way and the new way.
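
On the driver side it could be as small as the following (rough sketch,
untested - the exact packing of the hw_id value would need defining):

	/*
	 * In etm_event_start(), once the trace ID for this CPU is
	 * known, emit a PERF_RECORD_AUX_OUTPUT_HW_ID record so the
	 * perf tool can associate trace ID and CPU from the data file
	 * rather than from AUXTRACE_INFO.
	 */
	perf_report_aux_output_id(event, (u64)trace_id);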

Thanks,
Mathieu

> Regards
> 
> Mike
> 
> 
> > > +             }
> > >               *ppath = NULL;
> > >       }
> > >
> > > +     /* mark perf event as done for trace id allocator */
> > > +     coresight_trace_id_perf_stop();
> > > +
> > >       free_percpu(event_data->path);
> > >       kfree(event_data);
> > >  }
> > > @@ -314,6 +325,9 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
> > >               sink = user_sink = coresight_get_sink_by_id(id);
> > >       }
> > >
> > > +     /* tell the trace ID allocator that a perf event is starting up */
> > > +     coresight_trace_id_perf_start();
> > > +
> > >       /* check if user wants a coresight configuration selected */
> > >       cfg_hash = (u32)((event->attr.config2 & GENMASK_ULL(63, 32)) >> 32);
> > >       if (cfg_hash) {
> > > --
> > > 2.17.1
> > >
> 
> 
> 
> -- 
> Mike Leach
> Principal Engineer, ARM Ltd.
> Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 01/10] coresight: trace-id: Add API to dynamically assign trace ID values
  2022-04-06 19:45       ` Mike Leach
@ 2022-04-07 18:08         ` Mathieu Poirier
  -1 siblings, 0 replies; 70+ messages in thread
From: Mathieu Poirier @ 2022-04-07 18:08 UTC (permalink / raw)
  To: Mike Leach
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

On Wed, Apr 06, 2022 at 08:45:21PM +0100, Mike Leach wrote:
> Hi Mathieu,
> 
> On Tue, 5 Apr 2022 at 18:02, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
> >
> > Good morning,
> >
> > To avoid getting tunnel vision, I haven't read any of the comments already posted
> > about this set.  As such some of my observations may be redundant.
> >
> > On Tue, Mar 08, 2022 at 08:49:51PM +0000, Mike Leach wrote:
> > > The existing mechanism to assign Trace ID values to sources is limited
> > > and does not scale for larger multicore / multi trace source systems.
> > >
> > > The API introduces functions that reserve IDs based on availability
> > > represented by a coresight_trace_id_map structure. This records the
> > > used and free IDs in a bitmap.
> > >
> > > CPU bound sources such as ETMs use the coresight_trace_id_get_cpu_id /
> > > coresight_trace_id_put_cpu_id pair of functions. The API will record
> > > the ID associated with the CPU. This ensures that the same ID will be
> > > re-used while perf events are active on the CPU. The put_cpu_id function
> > > will pend release of the ID until all perf cs_etm sessions are complete.
> > >
> > > Non-cpu sources, such as the STM can use coresight_trace_id_get_system_id /
> > > coresight_trace_id_put_system_id.
> > >
> > > Signed-off-by: Mike Leach <mike.leach@linaro.org>
> > > ---
> > >  drivers/hwtracing/coresight/Makefile          |   2 +-
> > >  .../hwtracing/coresight/coresight-trace-id.c  | 222 ++++++++++++++++++
> > >  .../hwtracing/coresight/coresight-trace-id.h  |  69 ++++++
> > >  3 files changed, 292 insertions(+), 1 deletion(-)
> > >  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
> > >  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> > >
> > > diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
> > > index b6c4a48140ec..329a0c704b87 100644
> > > --- a/drivers/hwtracing/coresight/Makefile
> > > +++ b/drivers/hwtracing/coresight/Makefile
> > > @@ -6,7 +6,7 @@ obj-$(CONFIG_CORESIGHT) += coresight.o
> > >  coresight-y := coresight-core.o  coresight-etm-perf.o coresight-platform.o \
> > >               coresight-sysfs.o coresight-syscfg.o coresight-config.o \
> > >               coresight-cfg-preload.o coresight-cfg-afdo.o \
> > > -             coresight-syscfg-configfs.o
> > > +             coresight-syscfg-configfs.o coresight-trace-id.o
> > >  obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
> > >  coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
> > >                     coresight-tmc-etr.o
> > > diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
> > > new file mode 100644
> > > index 000000000000..ce6c7d7b55d6
> > > --- /dev/null
> > > +++ b/drivers/hwtracing/coresight/coresight-trace-id.c
> > > @@ -0,0 +1,222 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * Copyright (c) 2022, Linaro Limited, All rights reserved.
> > > + * Author: Mike Leach <mike.leach@linaro.org>
> > > + */
> > > +#include <linux/kernel.h>
> > > +#include <linux/types.h>
> > > +#include <linux/spinlock.h>
> > > +
> > > +#include "coresight-trace-id.h"
> > > +
> > > +/* need to keep data on ids & association with cpus. */
> > > +struct cpu_id_info {
> > > +     int id;
> > > +     bool pend_rel;
> > > +};
> > > +
> > > +/* maintain a record of the current mapping of cpu IDs */
> > > +static DEFINE_PER_CPU(struct cpu_id_info, cpu_ids);
> > > +
> > > +/* a list of currently used id_maps */
> > > +static LIST_HEAD(id_map_list);
> > > +
> > > +/* perf session active flag */
> > > +static int perf_cs_etm_session_active;
> > > +
> > > +/* lock to protect id_map list and cpu data  */
> > > +static DEFINE_SPINLOCK(id_map_lock);
> > > +
> > > +/* ID 0 is reserved */
> > > +#define CORESIGHT_TRACE_ID_RES_0 0
> > > +
> > > +/* ID 0x70 onwards are reserved */
> > > +#define CORESIGHT_TRACE_ID_RES_RANGE_LO 0x70
> > > +#define CORESIGHT_TRACE_ID_RES_RANGE_HI 0x7F
> > > +
> > > +#define IS_VALID_ID(id)      \
> > > +     ((id > CORESIGHT_TRACE_ID_RES_0) && (id < CORESIGHT_TRACE_ID_RES_RANGE_LO))
> > > +
> > > +static void coresight_trace_id_set_inuse(int id, struct coresight_trace_id_map *id_map)
> > > +{
> > > +     if (IS_VALID_ID(id))
> > > +             set_bit(id, id_map->avail_ids);
> > > +}
> > > +
> > > +static void coresight_trace_id_clear_inuse(int id, struct coresight_trace_id_map *id_map)
> > > +{
> > > +     if (IS_VALID_ID(id))
> > > +             clear_bit(id, id_map->avail_ids);
> > > +}
> > > +
> > > +static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_map *id_map)
> > > +{
> > > +     if (IS_VALID_ID(id))
> > > +             set_bit(id, id_map->pend_rel_ids);
> > > +}
> > > +
> > > +static void coresight_trace_id_clear_pend_rel(int id, struct coresight_trace_id_map *id_map)
> > > +{
> > > +     if (IS_VALID_ID(id))
> > > +             clear_bit(id, id_map->pend_rel_ids);
> > > +}
> > > +
> > > +static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
> > > +{
> > > +     int id;
> > > +
> > > +     id = find_first_zero_bit(id_map->avail_ids, CORESIGHT_TRACE_IDS_MAX);
> > > +     if (id >= CORESIGHT_TRACE_IDS_MAX)
> > > +             id = -EINVAL;
> > > +     return id;
> > > +}
> > > +
> > > +/* release all pending IDs for all current maps & clear CPU associations */
> > > +static void coresight_trace_id_release_all_pending(void)
> > > +{
> > > +     struct coresight_trace_id_map *id_map;
> > > +     int cpu, bit;
> > > +
> > > +     list_for_each_entry(id_map, &id_map_list, node) {
> > > +             for_each_set_bit(bit, id_map->pend_rel_ids, CORESIGHT_TRACE_IDS_MAX) {
> > > +                     clear_bit(bit, id_map->avail_ids);
> > > +                     clear_bit(bit, id_map->pend_rel_ids);
> > > +             }
> > > +     }
> > > +
> > > +     for_each_possible_cpu(cpu) {
> > > +             if (per_cpu(cpu_ids, cpu).pend_rel) {
> > > +                     per_cpu(cpu_ids, cpu).pend_rel = false;
> > > +                     per_cpu(cpu_ids, cpu).id = 0;
> > > +             }
> > > +     }
> > > +}
> > > +
> > > +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> > > +{
> > > +     unsigned long flags;
> > > +     int id;
> > > +
> > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > +     id = per_cpu(cpu_ids, cpu).id;
> > > +     if (!id) {
> > > +             id = coresight_trace_id_find_new_id(id_map);
> > > +             if (id < 0)
> > > +                     goto get_cpu_id_out;
> > > +     }
> > > +
> > > +     per_cpu(cpu_ids, cpu).id = id;
> > > +     per_cpu(cpu_ids, cpu).pend_rel = false;
> > > +     coresight_trace_id_set_inuse(id, id_map);
> > > +     coresight_trace_id_clear_pend_rel(id, id_map);
> >
> > The above should have been done when the ID for this CPU has been set for the
> > first time.  Therefore we should simply release the spinlock and return if an ID
> > has already been set.
> >
> 
> Agreed.
> 
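As a minimal sketch of the early-return shape discussed above (built from the
helpers introduced in this patch; illustrative only, not the posted code):

	int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
	{
		unsigned long flags;
		int id;

		spin_lock_irqsave(&id_map_lock, flags);

		/* CPU already has an ID: re-use it, dropping any pending release */
		id = per_cpu(cpu_ids, cpu).id;
		if (id) {
			per_cpu(cpu_ids, cpu).pend_rel = false;
			coresight_trace_id_clear_pend_rel(id, id_map);
			goto get_cpu_id_out;
		}

		/* first allocation for this CPU */
		id = coresight_trace_id_find_new_id(id_map);
		if (id < 0)
			goto get_cpu_id_out;

		per_cpu(cpu_ids, cpu).id = id;
		per_cpu(cpu_ids, cpu).pend_rel = false;
		coresight_trace_id_set_inuse(id, id_map);

	get_cpu_id_out:
		spin_unlock_irqrestore(&id_map_lock, flags);
		return id;
	}

Note the re-use path still clears the pending-release state, so an ID picked up
again mid-session is not freed by coresight_trace_id_release_all_pending() when
the session ends.
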
> > > +
> > > +get_cpu_id_out:
> > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > +     return id;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
> > > +
> > > +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> > > +{
> > > +     unsigned long flags;
> > > +     int id;
> > > +
> > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > +     id = per_cpu(cpu_ids, cpu).id;
> > > +     if (!id)
> > > +             goto put_cpu_id_out;
> > > +
> > > +     if (perf_cs_etm_session_active) {
> > > +             /* set release at pending if perf still active */
> > > +             coresight_trace_id_set_pend_rel(id, id_map);
> > > +             per_cpu(cpu_ids, cpu).pend_rel = true;
> > > +     } else {
> > > +             /* otherwise clear id */
> > > +             coresight_trace_id_clear_inuse(id, id_map);
> > > +             per_cpu(cpu_ids, cpu).id = 0;
> > > +     }
> > > +
> > > + put_cpu_id_out:
> > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
> > > +
> > > +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
> > > +{
> >
> > So far I have reviewed until patch 05 and I find passing the id_map every time
> > we need an ID quite heavy, especially since it is always the same one.
> >
> 
> My initial thought was to ensure that the API would handle the
> forthcoming requirements for per-sink id maps without having to change
> too much code, both within the API and the client code.

I think going with a perf-sink id map would fix a lot of
problems.  The id map would need to be linked with the perf session, as we
already do with sink buffers, i.e. CPUs in the same perf session would get the
same id map.  

In etm_event_start() a CPU's traceID could be fetched from the etm_event_data and
sent down to the driver with an additional parameter to source_ops()->enable().

I am done reviewing this set.

Thanks,
Mathieu

> 
> However an alternative method could be to drop the id_map passing in
> the external API calls - simplifying it and the client code, and
> simply extend the API once passing in the id_map is really required.
> This would mean being able to drop the list of maps for now, use a
> default id_map within the ID map code. However I would retain the
> passing of the id_map for the internal functions as I feel it is
> better practice to have functions operate on data passed to them
> rather than a global blob.
> 
> Regards
> 
> Mike
> 
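As a rough illustration of that alternative (hypothetical names; a sketch, not
the posted code), the external calls could wrap a single built-in default map,
with the id_map-taking versions kept as internal helpers:

	/* hypothetical default map used by the simplified external API */
	static struct coresight_trace_id_map default_id_map;

	int coresight_trace_id_get_cpu_id(int cpu)
	{
		return coresight_trace_id_map_get_cpu_id(cpu, &default_id_map);
	}

	void coresight_trace_id_put_cpu_id(int cpu)
	{
		coresight_trace_id_map_put_cpu_id(cpu, &default_id_map);
	}

Here coresight_trace_id_map_get_cpu_id() and coresight_trace_id_map_put_cpu_id()
are hypothetical names for the existing id_map-taking functions; only the public
entry points lose the parameter.
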
> > > +     unsigned long flags;
> > > +     int id;
> > > +
> > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > +     id = coresight_trace_id_find_new_id(id_map);
> > > +     if (id > 0)
> > > +             coresight_trace_id_set_inuse(id, id_map);
> > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > +
> > > +     return id;
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
> > > +
> > > +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id)
> > > +{
> > > +     unsigned long flags;
> > > +
> > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > +     coresight_trace_id_clear_inuse(id, id_map);
> > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
> > > +
> > > +void coresight_trace_id_perf_start(void)
> > > +{
> > > +     unsigned long flags;
> > > +
> > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > +     perf_cs_etm_session_active++;
> > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
> > > +
> > > +void coresight_trace_id_perf_stop(void)
> > > +{
> > > +     unsigned long flags;
> > > +
> > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > +     perf_cs_etm_session_active--;
> > > +     if (!perf_cs_etm_session_active)
> > > +             coresight_trace_id_release_all_pending();
> > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
> > > +
> > > +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map)
> > > +{
> > > +     unsigned long flags;
> > > +     int bit;
> > > +
> > > +     /* set all reserved bits as in-use */
> > > +     set_bit(CORESIGHT_TRACE_ID_RES_0, id_map->avail_ids);
> > > +     for (bit = CORESIGHT_TRACE_ID_RES_RANGE_LO;
> > > +          bit <= CORESIGHT_TRACE_ID_RES_RANGE_HI; bit++)
> > > +             set_bit(bit, id_map->avail_ids);
> > > +
> > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > +
> > > +     /* add id_map to the list */
> > > +     list_add(&id_map->node, &id_map_list);
> > > +
> > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_trace_id_init_id_map);
> > > +
> > > +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map)
> > > +{
> > > +     unsigned long flags;
> > > +
> > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > +
> > > +     /* remove id_map from list */
> > > +     list_del(&id_map->node);
> > > +
> > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > +}
> > > +EXPORT_SYMBOL_GPL(coresight_trace_id_release_id_map);
> > > diff --git a/drivers/hwtracing/coresight/coresight-trace-id.h b/drivers/hwtracing/coresight/coresight-trace-id.h
> > > new file mode 100644
> > > index 000000000000..01db2441cee6
> > > --- /dev/null
> > > +++ b/drivers/hwtracing/coresight/coresight-trace-id.h
> > > @@ -0,0 +1,69 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * Copyright(C) 2022 Linaro Limited. All rights reserved.
> > > + * Author: Mike Leach <mike.leach@linaro.org>
> > > + */
> > > +
> > > +#ifndef _CORESIGHT_TRACE_ID_H
> > > +#define _CORESIGHT_TRACE_ID_H
> > > +
> > > +/*
> > > + * Coresight trace ID allocation API
> > > + *
> > > + * With multi cpu systems, and more additional trace sources a scalable
> > > + * trace ID reservation system is required.
> > > + *
> > > + * The system will allocate Ids on a demand basis, and allow them to be
> > > + * released when done.
> > > + *
> > > + * In order to ensure that a consistent cpu / ID matching is maintained
> > > + * throughout a perf cs_etm event session - a session in progress flag will
> > > + * be maintained, and released IDs not cleared until the perf session is
> > > + * complete. This allows the same CPU to be re-allocated its prior ID.
> > > + *
> > > + *
> > > + * Trace ID maps will be created and initialised to prevent architecturally
> > > + * reserved IDs from being allocated.
> > > + *
> > > + * API permits multiple maps to be maintained - for large systems where
> > > + * different sets of cpus trace into different independent sinks.
> > > + */
> > > +
> > > +#include <linux/bitops.h>
> > > +#include <linux/types.h>
> > > +
> > > +
> > > +/* architecturally we have 128 IDs some of which are reserved */
> > > +#define CORESIGHT_TRACE_IDS_MAX 128
> > > +
> > > +/**
> > > + * Trace ID map.
> > > + *
> > > + * @avail_ids:       Bitmap to register available (bit = 0) and in use (bit = 1) IDs.
> > > + *           Initialised so that the reserved IDs are permanently marked as in use.
> > > + * @pend_rel_ids: CPU IDs that have been released by the trace source but not yet marked
> > > + *                as available, to allow re-allocation to the same CPU during a perf session.
> > > + * @node:    List entry to add to list of managed trace id maps.
> > > + */
> > > +struct coresight_trace_id_map {
> > > +     DECLARE_BITMAP(avail_ids, CORESIGHT_TRACE_IDS_MAX);
> > > +     DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
> > > +     struct list_head node;
> > > +};
> > > +
> > > +/* Allocate and release IDs */
> > > +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> > > +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map);
> > > +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> > > +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id);
> > > +
> > > +/* initialise ID map - block reserved IDs, add to list of ID maps */
> > > +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map);
> > > +/* remove ID maps from list of maps */
> > > +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map);
> > > +
> > > +/* notifiers for perf session start and stop */
> > > +void coresight_trace_id_perf_start(void);
> > > +void coresight_trace_id_perf_stop(void);
> > > +
> > > +#endif /* _CORESIGHT_TRACE_ID_H */
> > > --
> > > 2.17.1
> > >
> 
> 
> 
> -- 
> Mike Leach
> Principal Engineer, ARM Ltd.
> Manchester Design Centre. UK
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 01/10] coresight: trace-id: Add API to dynamically assign trace ID values
  2022-04-07 18:08         ` Mathieu Poirier
@ 2022-04-08 13:28           ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-04-08 13:28 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Mathieu

On Thu, 7 Apr 2022 at 19:08, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
>
> On Wed, Apr 06, 2022 at 08:45:21PM +0100, Mike Leach wrote:
> > Hi Mathieu,
> >
> > On Tue, 5 Apr 2022 at 18:02, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
> > >
> > > Good morning,
> > >
> > > To avoid getting tunnel vision, I haven't read any of the comments already posted
> > > about this set.  As such some of my observations may be redundant.
> > >
> > > On Tue, Mar 08, 2022 at 08:49:51PM +0000, Mike Leach wrote:
> > > > The existing mechanism to assign Trace ID values to sources is limited
> > > > and does not scale for larger multicore / multi trace source systems.
> > > >
> > > > The API introduces functions that reserve IDs based on availability
> > > > represented by a coresight_trace_id_map structure. This records the
> > > > used and free IDs in a bitmap.
> > > >
> > > > CPU bound sources such as ETMs use the coresight_trace_id_get_cpu_id /
> > > > coresight_trace_id_put_cpu_id pair of functions. The API will record
> > > > the ID associated with the CPU. This ensures that the same ID will be
> > > > re-used while perf events are active on the CPU. The put_cpu_id function
> > > > will pend release of the ID until all perf cs_etm sessions are complete.
> > > >
> > > > Non-cpu sources, such as the STM can use coresight_trace_id_get_system_id /
> > > > coresight_trace_id_put_system_id.
> > > >
> > > > Signed-off-by: Mike Leach <mike.leach@linaro.org>
> > > > ---
> > > >  drivers/hwtracing/coresight/Makefile          |   2 +-
> > > >  .../hwtracing/coresight/coresight-trace-id.c  | 222 ++++++++++++++++++
> > > >  .../hwtracing/coresight/coresight-trace-id.h  |  69 ++++++
> > > >  3 files changed, 292 insertions(+), 1 deletion(-)
> > > >  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
> > > >  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> > > >
> > > > diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
> > > > index b6c4a48140ec..329a0c704b87 100644
> > > > --- a/drivers/hwtracing/coresight/Makefile
> > > > +++ b/drivers/hwtracing/coresight/Makefile
> > > > @@ -6,7 +6,7 @@ obj-$(CONFIG_CORESIGHT) += coresight.o
> > > >  coresight-y := coresight-core.o  coresight-etm-perf.o coresight-platform.o \
> > > >               coresight-sysfs.o coresight-syscfg.o coresight-config.o \
> > > >               coresight-cfg-preload.o coresight-cfg-afdo.o \
> > > > -             coresight-syscfg-configfs.o
> > > > +             coresight-syscfg-configfs.o coresight-trace-id.o
> > > >  obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
> > > >  coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
> > > >                     coresight-tmc-etr.o
> > > > diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
> > > > new file mode 100644
> > > > index 000000000000..ce6c7d7b55d6
> > > > --- /dev/null
> > > > +++ b/drivers/hwtracing/coresight/coresight-trace-id.c
> > > > @@ -0,0 +1,222 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +/*
> > > > + * Copyright (c) 2022, Linaro Limited, All rights reserved.
> > > > + * Author: Mike Leach <mike.leach@linaro.org>
> > > > + */
> > > > +#include <linux/kernel.h>
> > > > +#include <linux/types.h>
> > > > +#include <linux/spinlock.h>
> > > > +
> > > > +#include "coresight-trace-id.h"
> > > > +
> > > > +/* need to keep data on ids & association with cpus. */
> > > > +struct cpu_id_info {
> > > > +     int id;
> > > > +     bool pend_rel;
> > > > +};
> > > > +
> > > > +/* maintain a record of the current mapping of cpu IDs */
> > > > +static DEFINE_PER_CPU(struct cpu_id_info, cpu_ids);
> > > > +
> > > > +/* a list of currently used id_maps */
> > > > +static LIST_HEAD(id_map_list);
> > > > +
> > > > +/* perf session active flag */
> > > > +static int perf_cs_etm_session_active;
> > > > +
> > > > +/* lock to protect id_map list and cpu data  */
> > > > +static DEFINE_SPINLOCK(id_map_lock);
> > > > +
> > > > +/* ID 0 is reserved */
> > > > +#define CORESIGHT_TRACE_ID_RES_0 0
> > > > +
> > > > +/* ID 0x70 onwards are reserved */
> > > > +#define CORESIGHT_TRACE_ID_RES_RANGE_LO 0x70
> > > > +#define CORESIGHT_TRACE_ID_RES_RANGE_HI 0x7F
> > > > +
> > > > +#define IS_VALID_ID(id)      \
> > > > +     ((id > CORESIGHT_TRACE_ID_RES_0) && (id < CORESIGHT_TRACE_ID_RES_RANGE_LO))
> > > > +
> > > > +static void coresight_trace_id_set_inuse(int id, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     if (IS_VALID_ID(id))
> > > > +             set_bit(id, id_map->avail_ids);
> > > > +}
> > > > +
> > > > +static void coresight_trace_id_clear_inuse(int id, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     if (IS_VALID_ID(id))
> > > > +             clear_bit(id, id_map->avail_ids);
> > > > +}
> > > > +
> > > > +static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     if (IS_VALID_ID(id))
> > > > +             set_bit(id, id_map->pend_rel_ids);
> > > > +}
> > > > +
> > > > +static void coresight_trace_id_clear_pend_rel(int id, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     if (IS_VALID_ID(id))
> > > > +             clear_bit(id, id_map->pend_rel_ids);
> > > > +}
> > > > +
> > > > +static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     int id;
> > > > +
> > > > +     id = find_first_zero_bit(id_map->avail_ids, CORESIGHT_TRACE_IDS_MAX);
> > > > +     if (id >= CORESIGHT_TRACE_IDS_MAX)
> > > > +             id = -EINVAL;
> > > > +     return id;
> > > > +}
> > > > +
> > > > +/* release all pending IDs for all current maps & clear CPU associations */
> > > > +static void coresight_trace_id_release_all_pending(void)
> > > > +{
> > > > +     struct coresight_trace_id_map *id_map;
> > > > +     int cpu, bit;
> > > > +
> > > > +     list_for_each_entry(id_map, &id_map_list, node) {
> > > > +             for_each_set_bit(bit, id_map->pend_rel_ids, CORESIGHT_TRACE_IDS_MAX) {
> > > > +                     clear_bit(bit, id_map->avail_ids);
> > > > +                     clear_bit(bit, id_map->pend_rel_ids);
> > > > +             }
> > > > +     }
> > > > +
> > > > +     for_each_possible_cpu(cpu) {
> > > > +             if (per_cpu(cpu_ids, cpu).pend_rel) {
> > > > +                     per_cpu(cpu_ids, cpu).pend_rel = false;
> > > > +                     per_cpu(cpu_ids, cpu).id = 0;
> > > > +             }
> > > > +     }
> > > > +}
> > > > +
> > > > +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     unsigned long flags;
> > > > +     int id;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     id = per_cpu(cpu_ids, cpu).id;
> > > > +     if (!id) {
> > > > +             id = coresight_trace_id_find_new_id(id_map);
> > > > +             if (id < 0)
> > > > +                     goto get_cpu_id_out;
> > > > +     }
> > > > +
> > > > +     per_cpu(cpu_ids, cpu).id = id;
> > > > +     per_cpu(cpu_ids, cpu).pend_rel = false;
> > > > +     coresight_trace_id_set_inuse(id, id_map);
> > > > +     coresight_trace_id_clear_pend_rel(id, id_map);
> > >
> > > The above should have been done when the ID for this CPU has been set for the
> > > first time.  Therefore we should simply release the spinlock and return if an ID
> > > has already been set.
> > >
> >
> > Agreed.
> >
> > > > +
> > > > +get_cpu_id_out:
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +     return id;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
> > > > +
> > > > +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     unsigned long flags;
> > > > +     int id;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     id = per_cpu(cpu_ids, cpu).id;
> > > > +     if (!id)
> > > > +             goto put_cpu_id_out;
> > > > +
> > > > +     if (perf_cs_etm_session_active) {
> > > > +             /* set release at pending if perf still active */
> > > > +             coresight_trace_id_set_pend_rel(id, id_map);
> > > > +             per_cpu(cpu_ids, cpu).pend_rel = true;
> > > > +     } else {
> > > > +             /* otherwise clear id */
> > > > +             coresight_trace_id_clear_inuse(id, id_map);
> > > > +             per_cpu(cpu_ids, cpu).id = 0;
> > > > +     }
> > > > +
> > > > + put_cpu_id_out:
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
> > > > +
> > > > +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
> > > > +{
> > >
> > > So far I have reviewed until patch 05 and I find passing the id_map every time
> > > we need an ID quite heavy, especially since it is always the same one.
> > >
> >
> > My initial thought was to ensure that the API would handle the
> > forthcoming requirements for per-sink id maps without having to change
> > too much code, both within the API and the client code.
>
> I think going with a perf-sink id map would fix a lot of
> problems.

I think you misread here - we are talking about a "per-sink" not
"perF-sink" ID map.

The issue is that multi-cluster devices can have one set of CPUs/ETMs
running into one sink, while the other cluster runs its CPUs/ETMs into
another sink.
Trace IDs only have to be unique for a given sink - and associated
with the CPU/ETM writing to that sink.

Each trace sink can have its own ID map - but both sinks could be
linked to the same perf session - if the session runs on both
clusters.

We don't have any systems big enough to need or test that yet - i.e. we
do not currently run out of IDs - which is why it is not implemented
in this set - but it is anticipated as a necessity for the future.

Thanks

Mike
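
As a purely illustrative sketch of that anticipated direction (the field and
helper names below are hypothetical and not part of this set), a per-sink map
could live in the sink's driver data and be passed to the existing allocation
call:

	/* hypothetical: each sink owns its own trace ID map */
	struct example_sink_drvdata {
		/* ... existing sink state ... */
		struct coresight_trace_id_map trace_id_map;
	};

	/* hypothetical helper: a CPU source allocates against the map of the sink it feeds */
	static int example_get_cpu_id_for_sink(int cpu, struct example_sink_drvdata *drvdata)
	{
		return coresight_trace_id_get_cpu_id(cpu, &drvdata->trace_id_map);
	}

IDs then only need to be unique within each sink's map, matching the constraint
described above.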

>  The id map would need to be linked with the perf session, as we
> already do with sink buffers, i.e. CPUs in the same perf session would get the
> same id map.
>
> In etm_event_start() a CPU's traceID could be fetched from the etm_event_data and
> sent down to the driver with an additional parameter to source_ops()->enable().
>
> I am done reviewing this set.
>
> Thanks,
> Mathieu
>
> >
> > However an alternative method could be to drop the id_map passing in
> > the external API calls - simplifying it and the client code, and
> > simply extend the API once passing in the id_map is really required.
> > This would mean being able to drop the list of maps for now, use a
> > default id_map within the ID map code. However I would retain the
> > passing of the id_map for the internal functions as I feel it is
> > better practice to have functions operate on data passed to them
> > rather than a global blob.
> >
> > Regards
> >
> > Mike
> >
> > > > +     unsigned long flags;
> > > > +     int id;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     id = coresight_trace_id_find_new_id(id_map);
> > > > +     if (id > 0)
> > > > +             coresight_trace_id_set_inuse(id, id_map);
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +
> > > > +     return id;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
> > > > +
> > > > +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id)
> > > > +{
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     coresight_trace_id_clear_inuse(id, id_map);
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
> > > > +
> > > > +void coresight_trace_id_perf_start(void)
> > > > +{
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     perf_cs_etm_session_active++;
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
> > > > +
> > > > +void coresight_trace_id_perf_stop(void)
> > > > +{
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     perf_cs_etm_session_active--;
> > > > +     if (!perf_cs_etm_session_active)
> > > > +             coresight_trace_id_release_all_pending();
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
> > > > +
> > > > +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     unsigned long flags;
> > > > +     int bit;
> > > > +
> > > > +     /* set all reserved bits as in-use */
> > > > +     set_bit(CORESIGHT_TRACE_ID_RES_0, id_map->avail_ids);
> > > > +     for (bit = CORESIGHT_TRACE_ID_RES_RANGE_LO;
> > > > +          bit <= CORESIGHT_TRACE_ID_RES_RANGE_HI; bit++)
> > > > +             set_bit(bit, id_map->avail_ids);
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +
> > > > +     /* add id_map to the list */
> > > > +     list_add(&id_map->node, &id_map_list);
> > > > +
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_init_id_map);
> > > > +
> > > > +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +
> > > > +     /* remove id_map from list */
> > > > +     list_del(&id_map->node);
> > > > +
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_release_id_map);
> > > > diff --git a/drivers/hwtracing/coresight/coresight-trace-id.h b/drivers/hwtracing/coresight/coresight-trace-id.h
> > > > new file mode 100644
> > > > index 000000000000..01db2441cee6
> > > > --- /dev/null
> > > > +++ b/drivers/hwtracing/coresight/coresight-trace-id.h
> > > > @@ -0,0 +1,69 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * Copyright(C) 2022 Linaro Limited. All rights reserved.
> > > > + * Author: Mike Leach <mike.leach@linaro.org>
> > > > + */
> > > > +
> > > > +#ifndef _CORESIGHT_TRACE_ID_H
> > > > +#define _CORESIGHT_TRACE_ID_H
> > > > +
> > > > +/*
> > > > + * Coresight trace ID allocation API
> > > > + *
> > > > + * With multi cpu systems, and more additional trace sources a scalable
> > > > + * trace ID reservation system is required.
> > > > + *
> > > > + * The system will allocate Ids on a demand basis, and allow them to be
> > > > + * released when done.
> > > > + *
> > > > + * In order to ensure that a consistent cpu / ID matching is maintained
> > > > + * throughout a perf cs_etm event session - a session in progress flag will
> > > > + * be maintained, and released IDs not cleared until the perf session is
> > > > + * complete. This allows the same CPU to be re-allocated its prior ID.
> > > > + *
> > > > + *
> > > > + * Trace ID maps will be created and initialised to prevent architecturally
> > > > + * reserved IDs from being allocated.
> > > > + *
> > > > + * API permits multiple maps to be maintained - for large systems where
> > > > + * different sets of cpus trace into different independent sinks.
> > > > + */
> > > > +
> > > > +#include <linux/bitops.h>
> > > > +#include <linux/types.h>
> > > > +
> > > > +
> > > > +/* architecturally we have 128 IDs some of which are reserved */
> > > > +#define CORESIGHT_TRACE_IDS_MAX 128
> > > > +
> > > > +/**
> > > > + * Trace ID map.
> > > > + *
> > > > + * @avail_ids:       Bitmap to register available (bit = 0) and in use (bit = 1) IDs.
> > > > + *           Initialised so that the reserved IDs are permanently marked as in use.
> > > > + * @pend_rel_ids: CPU IDs that have been released by the trace source but not yet marked
> > > > + *                as available, to allow re-allocation to the same CPU during a perf session.
> > > > + * @node:    List entry to add to list of managed trace id maps.
> > > > + */
> > > > +struct coresight_trace_id_map {
> > > > +     DECLARE_BITMAP(avail_ids, CORESIGHT_TRACE_IDS_MAX);
> > > > +     DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
> > > > +     struct list_head node;
> > > > +};
> > > > +
> > > > +/* Allocate and release IDs */
> > > > +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> > > > +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map);
> > > > +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> > > > +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id);
> > > > +
> > > > +/* initialise ID map - block reserved IDs, add to list of ID maps */
> > > > +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map);
> > > > +/* remove ID maps from list of maps */
> > > > +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map);
> > > > +
> > > > +/* notifiers for perf session start and stop */
> > > > +void coresight_trace_id_perf_start(void);
> > > > +void coresight_trace_id_perf_stop(void);
> > > > +
> > > > +#endif /* _CORESIGHT_TRACE_ID_H */
> > > > --
> > > > 2.17.1
> > > >
> >
> >
> >
> > --
> > Mike Leach
> > Principal Engineer, ARM Ltd.
> > Manchester Design Centre. UK
> >
> > _______________________________________________
> > linux-arm-kernel mailing list
> > linux-arm-kernel@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 01/10] coresight: trace-id: Add API to dynamically assign trace ID values
@ 2022-04-08 13:28           ` Mike Leach
  0 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-04-08 13:28 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users, leo.yan

Mathieu

On Thu, 7 Apr 2022 at 19:08, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
>
> On Wed, Apr 06, 2022 at 08:45:21PM +0100, Mike Leach wrote:
> > Hi Mathieu,
> >
> > On Tue, 5 Apr 2022 at 18:02, Mathieu Poirier <mathieu.poirier@linaro.org> wrote:
> > >
> > > Good morning,
> > >
> > > To avoid getting tunnel vision, I haven't red any of the comments already posted
> > > about this set.  As such some of my observations may be redundant.
> > >
> > > On Tue, Mar 08, 2022 at 08:49:51PM +0000, Mike Leach wrote:
> > > > The existing mechanism to assign Trace ID values to sources is limited
> > > > and does not scale for larger multicore / multi trace source systems.
> > > >
> > > > The API introduces functions that reserve IDs based on availabilty
> > > > represented by a coresight_trace_id_map structure. This records the
> > > > used and free IDs in a bitmap.
> > > >
> > > > CPU bound sources such as ETMs use the coresight_trace_id_get_cpu_id /
> > > > coresight_trace_id_put_cpu_id pair of functions. The API will record
> > > > the ID associated with the CPU. This ensures that the same ID will be
> > > > re-used while perf events are active on the CPU. The put_cpu_id function
> > > > will pend release of the ID until all perf cs_etm sessions are complete.
> > > >
> > > > Non-cpu sources, such as the STM can use coresight_trace_id_get_system_id /
> > > > coresight_trace_id_put_system_id.
> > > >
> > > > Signed-off-by: Mike Leach <mike.leach@linaro.org>
> > > > ---
> > > >  drivers/hwtracing/coresight/Makefile          |   2 +-
> > > >  .../hwtracing/coresight/coresight-trace-id.c  | 222 ++++++++++++++++++
> > > >  .../hwtracing/coresight/coresight-trace-id.h  |  69 ++++++
> > > >  3 files changed, 292 insertions(+), 1 deletion(-)
> > > >  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.c
> > > >  create mode 100644 drivers/hwtracing/coresight/coresight-trace-id.h
> > > >
> > > > diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
> > > > index b6c4a48140ec..329a0c704b87 100644
> > > > --- a/drivers/hwtracing/coresight/Makefile
> > > > +++ b/drivers/hwtracing/coresight/Makefile
> > > > @@ -6,7 +6,7 @@ obj-$(CONFIG_CORESIGHT) += coresight.o
> > > >  coresight-y := coresight-core.o  coresight-etm-perf.o coresight-platform.o \
> > > >               coresight-sysfs.o coresight-syscfg.o coresight-config.o \
> > > >               coresight-cfg-preload.o coresight-cfg-afdo.o \
> > > > -             coresight-syscfg-configfs.o
> > > > +             coresight-syscfg-configfs.o coresight-trace-id.o
> > > >  obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
> > > >  coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
> > > >                     coresight-tmc-etr.o
> > > > diff --git a/drivers/hwtracing/coresight/coresight-trace-id.c b/drivers/hwtracing/coresight/coresight-trace-id.c
> > > > new file mode 100644
> > > > index 000000000000..ce6c7d7b55d6
> > > > --- /dev/null
> > > > +++ b/drivers/hwtracing/coresight/coresight-trace-id.c
> > > > @@ -0,0 +1,222 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +/*
> > > > + * Copyright (c) 2022, Linaro Limited, All rights reserved.
> > > > + * Author: Mike Leach <mike.leach@linaro.org>
> > > > + */
> > > > +#include <linux/kernel.h>
> > > > +#include <linux/types.h>
> > > > +#include <linux/spinlock.h>
> > > > +
> > > > +#include "coresight-trace-id.h"
> > > > +
> > > > +/* need to keep data on ids & association with cpus. */
> > > > +struct cpu_id_info {
> > > > +     int id;
> > > > +     bool pend_rel;
> > > > +};
> > > > +
> > > > +/* maintain a record of the current mapping of cpu IDs */
> > > > +static DEFINE_PER_CPU(struct cpu_id_info, cpu_ids);
> > > > +
> > > > +/* a list of currently used id_maps */
> > > > +static LIST_HEAD(id_map_list);
> > > > +
> > > > +/* perf session active flag */
> > > > +static int perf_cs_etm_session_active;
> > > > +
> > > > +/* lock to protect id_map list and cpu data  */
> > > > +static DEFINE_SPINLOCK(id_map_lock);
> > > > +
> > > > +/* ID 0 is reserved */
> > > > +#define CORESIGHT_TRACE_ID_RES_0 0
> > > > +
> > > > +/* ID 0x70 onwards are reserved */
> > > > +#define CORESIGHT_TRACE_ID_RES_RANGE_LO 0x70
> > > > +#define CORESIGHT_TRACE_ID_RES_RANGE_HI 0x7F
> > > > +
> > > > +#define IS_VALID_ID(id)      \
> > > > +     ((id > CORESIGHT_TRACE_ID_RES_0) && (id < CORESIGHT_TRACE_ID_RES_RANGE_LO))
> > > > +
> > > > +static void coresight_trace_id_set_inuse(int id, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     if (IS_VALID_ID(id))
> > > > +             set_bit(id, id_map->avail_ids);
> > > > +}
> > > > +
> > > > +static void coresight_trace_id_clear_inuse(int id, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     if (IS_VALID_ID(id))
> > > > +             clear_bit(id, id_map->avail_ids);
> > > > +}
> > > > +
> > > > +static void coresight_trace_id_set_pend_rel(int id, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     if (IS_VALID_ID(id))
> > > > +             set_bit(id, id_map->pend_rel_ids);
> > > > +}
> > > > +
> > > > +static void coresight_trace_id_clear_pend_rel(int id, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     if (IS_VALID_ID(id))
> > > > +             clear_bit(id, id_map->pend_rel_ids);
> > > > +}
> > > > +
> > > > +static int coresight_trace_id_find_new_id(struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     int id;
> > > > +
> > > > +     id = find_first_zero_bit(id_map->avail_ids, CORESIGHT_TRACE_IDS_MAX);
> > > > +     if (id >= CORESIGHT_TRACE_IDS_MAX)
> > > > +             id = -EINVAL;
> > > > +     return id;
> > > > +}
> > > > +
> > > > +/* release all pending IDs for all current maps & clear CPU associations */
> > > > +static void coresight_trace_id_release_all_pending(void)
> > > > +{
> > > > +     struct coresight_trace_id_map *id_map;
> > > > +     int cpu, bit;
> > > > +
> > > > +     list_for_each_entry(id_map, &id_map_list, node) {
> > > > +             for_each_set_bit(bit, id_map->pend_rel_ids, CORESIGHT_TRACE_IDS_MAX) {
> > > > +                     clear_bit(bit, id_map->avail_ids);
> > > > +                     clear_bit(bit, id_map->pend_rel_ids);
> > > > +             }
> > > > +     }
> > > > +
> > > > +     for_each_possible_cpu(cpu) {
> > > > +             if (per_cpu(cpu_ids, cpu).pend_rel) {
> > > > +                     per_cpu(cpu_ids, cpu).pend_rel = false;
> > > > +                     per_cpu(cpu_ids, cpu).id = 0;
> > > > +             }
> > > > +     }
> > > > +}
> > > > +
> > > > +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     unsigned long flags;
> > > > +     int id;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     id = per_cpu(cpu_ids, cpu).id;
> > > > +     if (!id) {
> > > > +             id = coresight_trace_id_find_new_id(id_map);
> > > > +             if (id < 0)
> > > > +                     goto get_cpu_id_out;
> > > > +     }
> > > > +
> > > > +     per_cpu(cpu_ids, cpu).id = id;
> > > > +     per_cpu(cpu_ids, cpu).pend_rel = false;
> > > > +     coresight_trace_id_set_inuse(id, id_map);
> > > > +     coresight_trace_id_clear_pend_rel(id, id_map);
> > >
> > > The above should have been done when the ID for this CPU has been set for the
> > > first time.  Therefore we should simply release the spinlock and return if an ID
> > > has already been set.
> > >
> >
> > Agreed.
> >
> > > > +
> > > > +get_cpu_id_out:
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +     return id;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_get_cpu_id);
> > > > +
> > > > +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     unsigned long flags;
> > > > +     int id;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     id = per_cpu(cpu_ids, cpu).id;
> > > > +     if (!id)
> > > > +             goto put_cpu_id_out;
> > > > +
> > > > +     if (perf_cs_etm_session_active) {
> > > > +             /* set release at pending if perf still active */
> > > > +             coresight_trace_id_set_pend_rel(id, id_map);
> > > > +             per_cpu(cpu_ids, cpu).pend_rel = true;
> > > > +     } else {
> > > > +             /* otherwise clear id */
> > > > +             coresight_trace_id_clear_inuse(id, id_map);
> > > > +             per_cpu(cpu_ids, cpu).id = 0;
> > > > +     }
> > > > +
> > > > + put_cpu_id_out:
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_put_cpu_id);
> > > > +
> > > > +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map)
> > > > +{
> > >
> > > So far I have reviewed until patch 05 and I find passing the id_map every time
> > > we need an ID quite heavy, especially since it is always the same one.
> > >
> >
> > My initial thought was to ensure that the API would handle the
> > forthcoming requirements for per-sink id maps without having to change
> > too much code, both within the API and the client code.
>
> I think going with a perf-sink id map would fix a lot of
> problems.

I think you misread here - we are talking about a "per-sink" not a
"perF-sink" ID map.

The issue is that multi-cluster devices can have one cluster's CPUs/ETMs
running into one sink, while the other cluster's CPUs/ETMs run into
another sink.
Trace IDs only have to be unique for a given sink - and associated
with the CPU/ETM writing to that sink.

Each trace sink can have its own ID map - but both sinks could be
linked to the same perf session - if the session runs on both
clusters.

We don't have any systems big enough to need or test that yet - i.e. we
do not currently run out of IDs - which is why it is not implemented
in this set - but it is anticipated as a necessity for the future.
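
Purely for illustration - nothing like this exists in the current set - a
per-sink arrangement might embed an ID map in each sink's driver data and
pass that map into the get/put calls. The structure and helper below are
hypothetical:

/* hypothetical: each sink driver keeps its own trace ID map */
struct hypothetical_sink_drvdata {
        struct coresight_device *csdev;
        struct coresight_trace_id_map id_map;
};

/* a CPU source would then allocate from the map of the sink it feeds */
static int hypothetical_get_id_for_cpu(int cpu,
                                       struct hypothetical_sink_drvdata *sink)
{
        return coresight_trace_id_get_cpu_id(cpu, &sink->id_map);
}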

Thanks

Mike

>  The id map would need to be linked with the perf session, as we
> already do with sink buffers, i.e. CPUs in the same perf session would get the
> same id map.
>
> In etm_event_start() a CPU's traceID could be fetched from the etm_event_data and
> sent down to the driver with an additional parameter to source_ops()->enable().
>
> I am done reviewing this set.
>
> Thanks,
> Mathieu
>
> >
> > However an alternative method could be to drop the id_map passing in
> > the external API calls - simplifying it and the client code - and
> > simply extend the API once passing in the id_map is really required.
> > This would mean being able to drop the list of maps for now and use a
> > default id_map within the ID map code. However I would retain the
> > passing of the id_map for the internal functions, as I feel it is
> > better practice to have functions operate on data passed to them
> > rather than a global blob.
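
For the sake of argument, the simplified external API could then look
something like the following (hypothetical - not part of this set), with
the default map kept private to the allocator:

/* hypothetical simplified external API - no id_map parameter */
int  coresight_trace_id_get_cpu_id(int cpu);
void coresight_trace_id_put_cpu_id(int cpu);
int  coresight_trace_id_get_system_id(void);
void coresight_trace_id_put_system_id(int id);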
> >
> > Regards
> >
> > Mike
> >
> > > > +     unsigned long flags;
> > > > +     int id;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     id = coresight_trace_id_find_new_id(id_map);
> > > > +     if (id > 0)
> > > > +             coresight_trace_id_set_inuse(id, id_map);
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +
> > > > +     return id;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_get_system_id);
> > > > +
> > > > +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id)
> > > > +{
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     coresight_trace_id_clear_inuse(id, id_map);
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_put_system_id);
> > > > +
> > > > +void coresight_trace_id_perf_start(void)
> > > > +{
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     perf_cs_etm_session_active++;
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_start);
> > > > +
> > > > +void coresight_trace_id_perf_stop(void)
> > > > +{
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +     perf_cs_etm_session_active--;
> > > > +     if (!perf_cs_etm_session_active)
> > > > +             coresight_trace_id_release_all_pending();
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_perf_stop);
> > > > +
> > > > +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     unsigned long flags;
> > > > +     int bit;
> > > > +
> > > > +     /* set all reserved bits as in-use */
> > > > +     set_bit(CORESIGHT_TRACE_ID_RES_0, id_map->avail_ids);
> > > > +     for (bit = CORESIGHT_TRACE_ID_RES_RANGE_LO;
> > > > +          bit <= CORESIGHT_TRACE_ID_RES_RANGE_HI; bit++)
> > > > +             set_bit(bit, id_map->avail_ids);
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +
> > > > +     /* add id_map to the list */
> > > > +     list_add(&id_map->node, &id_map_list);
> > > > +
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_init_id_map);
> > > > +
> > > > +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map)
> > > > +{
> > > > +     unsigned long flags;
> > > > +
> > > > +     spin_lock_irqsave(&id_map_lock, flags);
> > > > +
> > > > +     /* remove id_map from list */
> > > > +     list_del(&id_map->node);
> > > > +
> > > > +     spin_unlock_irqrestore(&id_map_lock, flags);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(coresight_trace_id_release_id_map);
> > > > diff --git a/drivers/hwtracing/coresight/coresight-trace-id.h b/drivers/hwtracing/coresight/coresight-trace-id.h
> > > > new file mode 100644
> > > > index 000000000000..01db2441cee6
> > > > --- /dev/null
> > > > +++ b/drivers/hwtracing/coresight/coresight-trace-id.h
> > > > @@ -0,0 +1,69 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * Copyright(C) 2022 Linaro Limited. All rights reserved.
> > > > + * Author: Mike Leach <mike.leach@linaro.org>
> > > > + */
> > > > +
> > > > +#ifndef _CORESIGHT_TRACE_ID_H
> > > > +#define _CORESIGHT_TRACE_ID_H
> > > > +
> > > > +/*
> > > > + * Coresight trace ID allocation API
> > > > + *
> > > > + * With multi cpu systems, and more additional trace sources a scalable
> > > > + * trace ID reservation system is required.
> > > > + *
> > > > + * The system will allocate Ids on a demand basis, and allow them to be
> > > > + * released when done.
> > > > + *
> > > > + * In order to ensure that a consistent cpu / ID matching is maintained
> > > > + * throughout a perf cs_etm event session - a session in progress flag will
> > > > + * be maintained, and released IDs not cleared until the perf session is
> > > > + * complete. This allows the same CPU to be re-allocated its prior ID.
> > > > + *
> > > > + *
> > > > + * Trace ID maps will be created and initialised to prevent architecturally
> > > > + * reserved IDs from being allocated.
> > > > + *
> > > > + * API permits multiple maps to be maintained - for large systems where
> > > > + * different sets of cpus trace into different independent sinks.
> > > > + */
> > > > +
> > > > +#include <linux/bitops.h>
> > > > +#include <linux/types.h>
> > > > +
> > > > +
> > > > +/* architecturally we have 128 IDs some of which are reserved */
> > > > +#define CORESIGHT_TRACE_IDS_MAX 128
> > > > +
> > > > +/**
> > > > + * Trace ID map.
> > > > + *
> > > > + * @avail_ids:       Bitmap to register available (bit = 0) and in use (bit = 1) IDs.
> > > > + *           Initialised so that the reserved IDs are permanently marked as in use.
> > > > + * @pend_rel_ids: CPU IDs that have been released by the trace source but not yet marked
> > > > + *                as available, to allow re-allocation to the same CPU during a perf session.
> > > > + * @node:    List entry to add to list of managed trace id maps.
> > > > + */
> > > > +struct coresight_trace_id_map {
> > > > +     DECLARE_BITMAP(avail_ids, CORESIGHT_TRACE_IDS_MAX);
> > > > +     DECLARE_BITMAP(pend_rel_ids, CORESIGHT_TRACE_IDS_MAX);
> > > > +     struct list_head node;
> > > > +};
> > > > +
> > > > +/* Allocate and release IDs */
> > > > +int coresight_trace_id_get_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> > > > +int coresight_trace_id_get_system_id(struct coresight_trace_id_map *id_map);
> > > > +void coresight_trace_id_put_cpu_id(int cpu, struct coresight_trace_id_map *id_map);
> > > > +void coresight_trace_id_put_system_id(struct coresight_trace_id_map *id_map, int id);
> > > > +
> > > > +/* initialise ID map - block reserved IDs, add to list of ID maps */
> > > > +void coresight_trace_id_init_id_map(struct coresight_trace_id_map *id_map);
> > > > +/* remove ID maps from list of maps */
> > > > +void coresight_trace_id_release_id_map(struct coresight_trace_id_map *id_map);
> > > > +
> > > > +/* notifiers for perf session start and stop */
> > > > +void coresight_trace_id_perf_start(void);
> > > > +void coresight_trace_id_perf_stop(void);
> > > > +
> > > > +#endif /* _CORESIGHT_TRACE_ID_H */
> > > > --
> > > > 2.17.1
> > > >
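
As an aside, a minimal usage sketch for a non-CPU (system) source, based
only on the declarations quoted above - the map and caller names here are
made up for illustration:

#include "coresight-trace-id.h"

static struct coresight_trace_id_map example_id_map;

static int example_system_source_enable(void)
{
        int trace_id;

        /* block reserved IDs and register the map */
        coresight_trace_id_init_id_map(&example_id_map);

        trace_id = coresight_trace_id_get_system_id(&example_id_map);
        if (trace_id < 0)
                return trace_id;

        /* ... program the source with trace_id, run the session ... */

        coresight_trace_id_put_system_id(&example_id_map, trace_id);
        coresight_trace_id_release_id_map(&example_id_map);
        return 0;
}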
> >
> >
> >
> > --
> > Mike Leach
> > Principal Engineer, ARM Ltd.
> > Manchester Design Centre. UK
> >



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 08/10] coresight: Remove legacy Trace ID allocation mechanism
  2022-03-08 20:49   ` Mike Leach
@ 2022-05-17  3:56     ` liuqi (BA)
  -1 siblings, 0 replies; 70+ messages in thread
From: liuqi (BA) @ 2022-05-17  3:56 UTC (permalink / raw)
  To: Mike Leach, suzuki.poulose, coresight, linux-arm-kernel, linux-kernel
  Cc: mathieu.poirier, peterz, mingo, acme, linux-perf-users, leo.yan

Hi Mike,

On 2022/3/9 4:49, Mike Leach wrote:
> This static 'cpu * 2 + seed' was outdated and broken for systems with high
> core counts (>46).
> 
> This has been replaced by a dynamic allocation system.
> 
> Signed-off-by: Mike Leach <mike.leach@linaro.org>
> ---
>   include/linux/coresight-pmu.h | 12 ------------
>   1 file changed, 12 deletions(-)

Seems coresight_get_trace_id() in tools/include/linux/coresight-pmu.h 
needs to be deleted too.

Thanks,
Qi

> 
> diff --git a/include/linux/coresight-pmu.h b/include/linux/coresight-pmu.h
> index 4ac5c081af93..bb4eb4de3c77 100644
> --- a/include/linux/coresight-pmu.h
> +++ b/include/linux/coresight-pmu.h
> @@ -8,7 +8,6 @@
>   #define _LINUX_CORESIGHT_PMU_H
>   
>   #define CORESIGHT_ETM_PMU_NAME "cs_etm"
> -#define CORESIGHT_ETM_PMU_SEED  0x10
>   
>   /*
>    * Below are the definition of bit offsets for perf option, and works as
> @@ -32,15 +31,4 @@
>   #define ETM4_CFG_BIT_RETSTK	12
>   #define ETM4_CFG_BIT_VMID_OPT	15
>   
> -static inline int coresight_get_trace_id(int cpu)
> -{
> -	/*
> -	 * A trace ID of value 0 is invalid, so let's start at some
> -	 * random value that fits in 7 bits and go from there.  Since
> -	 * the common convention is to have data trace IDs be I(N) + 1,
> -	 * set instruction trace IDs as a function of the CPU number.
> -	 */
> -	return (CORESIGHT_ETM_PMU_SEED + (cpu * 2));
> -}
> -
>   #endif
> 

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH 08/10] coresight: Remove legacy Trace ID allocation mechanism
  2022-05-17  3:56     ` liuqi (BA)
@ 2022-05-18  9:07       ` Mike Leach
  -1 siblings, 0 replies; 70+ messages in thread
From: Mike Leach @ 2022-05-18  9:07 UTC (permalink / raw)
  To: liuqi (BA)
  Cc: suzuki.poulose, coresight, linux-arm-kernel, linux-kernel,
	peterz, mingo, acme, linux-perf-users

Hi

On Tue, 17 May 2022 at 04:56, liuqi (BA) <liuqi115@huawei.com> wrote:
>
> Hi Mike,
>
> On 2022/3/9 4:49, Mike Leach wrote:
> > This static 'cpu * 2 + seed' was outdated and broken for systems with high
> > core counts (>46).
> >
> > This has been replaced by a dynamic allocation system.
> >
> > Signed-off-by: Mike Leach <mike.leach@linaro.org>
> > ---
> >   include/linux/coresight-pmu.h | 12 ------------
> >   1 file changed, 12 deletions(-)
>
> Seems coresight_get_trace_id() in tools/include/linux/coresight-pmu.h
> needs to be deleted too.
>
> Thanks,
> Qi
>

Agreed - I'll sort it for the next release.
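
(For background: the legacy formula runs out of the 7-bit ID space very
quickly - for example, with the 0x10 seed, cpu 56 would map to
0x10 + (56 * 2) = 0x80, which does not even fit in the 7-bit trace ID field,
and architecturally reserved values near the top of the range are hit before
that - hence the move to the dynamic allocator.)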

Thanks

Mike

> >
> > diff --git a/include/linux/coresight-pmu.h b/include/linux/coresight-pmu.h
> > index 4ac5c081af93..bb4eb4de3c77 100644
> > --- a/include/linux/coresight-pmu.h
> > +++ b/include/linux/coresight-pmu.h
> > @@ -8,7 +8,6 @@
> >   #define _LINUX_CORESIGHT_PMU_H
> >
> >   #define CORESIGHT_ETM_PMU_NAME "cs_etm"
> > -#define CORESIGHT_ETM_PMU_SEED  0x10
> >
> >   /*
> >    * Below are the definition of bit offsets for perf option, and works as
> > @@ -32,15 +31,4 @@
> >   #define ETM4_CFG_BIT_RETSTK 12
> >   #define ETM4_CFG_BIT_VMID_OPT       15
> >
> > -static inline int coresight_get_trace_id(int cpu)
> > -{
> > -     /*
> > -      * A trace ID of value 0 is invalid, so let's start at some
> > -      * random value that fits in 7 bits and go from there.  Since
> > -      * the common convention is to have data trace IDs be I(N) + 1,
> > -      * set instruction trace IDs as a function of the CPU number.
> > -      */
> > -     return (CORESIGHT_ETM_PMU_SEED + (cpu * 2));
> > -}
> > -
> >   #endif
> >



-- 
Mike Leach
Principal Engineer, ARM Ltd.
Manchester Design Centre. UK

^ permalink raw reply	[flat|nested] 70+ messages in thread

end of thread, other threads:[~2022-05-18  9:08 UTC | newest]

Thread overview: 70+ messages
2022-03-08 20:49 [PATCH 00/10] coresight: Add new API to allocate trace source ID values Mike Leach
2022-03-08 20:49 ` Mike Leach
2022-03-08 20:49 ` [PATCH 01/10] coresight: trace-id: Add API to dynamically assign trace " Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-04-05 17:02   ` Mathieu Poirier
2022-04-05 17:02     ` Mathieu Poirier
2022-04-06 19:45     ` Mike Leach
2022-04-06 19:45       ` Mike Leach
2022-04-07 18:08       ` Mathieu Poirier
2022-04-07 18:08         ` Mathieu Poirier
2022-04-08 13:28         ` Mike Leach
2022-04-08 13:28           ` Mike Leach
2022-03-08 20:49 ` [PATCH 02/10] coresight: trace-id: Set up source trace ID map for system Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-03-08 20:49 ` [PATCH 03/10] coresight: stm: Update STM driver to use Trace ID api Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-03-08 20:49 ` [PATCH 04/10] coresight: etm4x: Use trace ID API to dynamically allocate trace ID Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-04-05 17:25   ` Mathieu Poirier
2022-04-05 17:25     ` Mathieu Poirier
2022-03-08 20:49 ` [PATCH 05/10] coresight: etm3x: Use trace ID API to allocate IDs Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-04-05 17:22   ` Mathieu Poirier
2022-04-05 17:22     ` Mathieu Poirier
2022-04-06 19:47     ` Mike Leach
2022-04-06 19:47       ` Mike Leach
2022-03-08 20:49 ` [PATCH 06/10] coresight: perf: traceid: Add perf notifiers for trace ID Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-04-06 17:11   ` Mathieu Poirier
2022-04-06 17:11     ` Mathieu Poirier
2022-04-06 19:38     ` Mike Leach
2022-04-06 19:38       ` Mike Leach
2022-04-07 17:46       ` Mathieu Poirier
2022-04-07 17:46         ` Mathieu Poirier
2022-03-08 20:49 ` [PATCH 07/10] perf: cs-etm: Update event to read trace ID from sysfs Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-03-08 20:49 ` [PATCH 08/10] coresight: Remove legacy Trace ID allocation mechanism Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-05-17  3:56   ` liuqi (BA)
2022-05-17  3:56     ` liuqi (BA)
2022-05-18  9:07     ` Mike Leach
2022-05-18  9:07       ` Mike Leach
2022-03-08 20:49 ` [PATCH 09/10] coresight: etmX.X: stm: Remove unused legacy source trace ID ops Mike Leach
2022-03-08 20:49   ` Mike Leach
2022-03-08 20:50 ` [PATCH 10/10] coresight: trace-id: Add debug & test macros to trace id allocation Mike Leach
2022-03-08 20:50   ` Mike Leach
2022-03-22 10:43 ` [PATCH 00/10] coresight: Add new API to allocate trace source ID values Suzuki Kuruppassery Poulose
2022-03-22 10:43   ` Suzuki Kuruppassery Poulose
2022-03-22 11:38   ` Mike Leach
2022-03-22 11:38     ` Mike Leach
2022-03-22 12:35     ` Suzuki Kuruppassery Poulose
2022-03-22 12:35       ` Suzuki Kuruppassery Poulose
2022-03-22 14:27       ` Mike Leach
2022-03-22 14:27         ` Mike Leach
2022-03-22 18:52         ` Suzuki K Poulose
2022-03-22 18:52           ` Suzuki K Poulose
2022-03-23 10:07           ` Mike Leach
2022-03-23 10:07             ` Mike Leach
2022-03-23 10:35             ` Al Grant
2022-03-23 10:35               ` Al Grant
2022-03-23 11:05               ` Mike Leach
2022-03-23 11:05                 ` Mike Leach
2022-03-23 10:41             ` Suzuki Kuruppassery Poulose
2022-03-23 10:41               ` Suzuki Kuruppassery Poulose
2022-03-23 11:35               ` Mike Leach
2022-03-23 11:35                 ` Mike Leach
2022-03-23 12:08                 ` Suzuki Kuruppassery Poulose
2022-03-23 12:08                   ` Suzuki Kuruppassery Poulose
2022-04-04 16:15 ` Mathieu Poirier
2022-04-04 16:15   ` Mathieu Poirier
