* [PATCH V2 0/4] perf/core: Assert PERF_EVENT_FLAG_ARCH is followed
@ 2022-09-05 5:42 ` Anshuman Khandual
0 siblings, 0 replies; 20+ messages in thread
From: Anshuman Khandual @ 2022-09-05 5:42 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, peterz
Cc: Anshuman Khandual, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86
This series ensures that the PERF_EVENT_FLAG_ARCH mask is respected when
defining platform-specific hardware event flags. It first expands
PERF_EVENT_FLAG_ARCH by another four bits, to accommodate some x86 platform
event flags that had already grown beyond the existing mask.
This series applies on v6.0-rc4.
Changes in V2:
- Added first patch to expand PERF_EVENT_FLAG_ARCH
- Converted all BUILD_BUG_ON() into static_assert()
Changes in V1:
https://lore.kernel.org/all/20220829065507.177781-1-anshuman.khandual@arm.com/
Cc: James Clark <james.clark@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
Anshuman Khandual (4):
perf/core: Expand PERF_EVENT_FLAG_ARCH
perf/core: Assert PERF_EVENT_FLAG_ARCH does not overlap with generic flags
arm64/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
x86/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
arch/x86/events/perf_event.h | 20 ++++++++++++++++++++
drivers/perf/arm_spe_pmu.c | 4 +++-
include/linux/perf/arm_pmu.h | 9 +++++----
include/linux/perf_event.h | 4 +++-
4 files changed, 31 insertions(+), 6 deletions(-)
--
2.25.1
* [PATCH V2 1/4] perf/core: Expand PERF_EVENT_FLAG_ARCH
2022-09-05 5:42 ` Anshuman Khandual
@ 2022-09-05 5:42 ` Anshuman Khandual
-1 siblings, 0 replies; 20+ messages in thread
From: Anshuman Khandual @ 2022-09-05 5:42 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, peterz
Cc: Anshuman Khandual, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86
Two hardware event flags on the x86 platform have overshot PERF_EVENT_FLAG_ARCH
(0x0000ffff): PERF_X86_EVENT_PEBS_LAT_HYBRID (0x20000) and
PERF_X86_EVENT_AMD_BRS (0x10000). Let's expand the PERF_EVENT_FLAG_ARCH mask to
accommodate these flags, and also create room for two more in the future.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
include/linux/perf_event.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index ee8b9ecdc03b..3f51fbf4a595 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -137,7 +137,7 @@ struct hw_perf_event_extra {
* PERF_EVENT_FLAG_ARCH bits are reserved for architecture-specific
* usage.
*/
-#define PERF_EVENT_FLAG_ARCH 0x0000ffff
+#define PERF_EVENT_FLAG_ARCH 0x000fffff
#define PERF_EVENT_FLAG_USER_READ_CNT 0x80000000
/**
--
2.25.1
* [PATCH V2 2/4] perf/core: Assert PERF_EVENT_FLAG_ARCH does not overlap with generic flags
2022-09-05 5:42 ` Anshuman Khandual
@ 2022-09-05 5:42 ` Anshuman Khandual
-1 siblings, 0 replies; 20+ messages in thread
From: Anshuman Khandual @ 2022-09-05 5:42 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, peterz
Cc: Anshuman Khandual, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86
This just ensures that PERF_EVENT_FLAG_ARCH does not overlap with generic
hardware event flags.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
include/linux/perf_event.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 3f51fbf4a595..10e23a0f9db0 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -140,6 +140,8 @@ struct hw_perf_event_extra {
#define PERF_EVENT_FLAG_ARCH 0x000fffff
#define PERF_EVENT_FLAG_USER_READ_CNT 0x80000000
+static_assert((PERF_EVENT_FLAG_USER_READ_CNT & PERF_EVENT_FLAG_ARCH) == 0);
+
/**
* struct hw_perf_event - performance event hardware details:
*/
--
2.25.1
* [PATCH V2 3/4] arm64/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
2022-09-05 5:42 ` Anshuman Khandual
@ 2022-09-05 5:42 ` Anshuman Khandual
-1 siblings, 0 replies; 20+ messages in thread
From: Anshuman Khandual @ 2022-09-05 5:42 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, peterz
Cc: Anshuman Khandual, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86, Will Deacon,
Catalin Marinas
Ensure all platform specific event flags are within PERF_EVENT_FLAG_ARCH.
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
drivers/perf/arm_spe_pmu.c | 4 +++-
include/linux/perf/arm_pmu.h | 9 +++++----
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
index b65a7d9640e1..db8a0a841062 100644
--- a/drivers/perf/arm_spe_pmu.c
+++ b/drivers/perf/arm_spe_pmu.c
@@ -44,7 +44,9 @@
* This allows us to perform the check, i.e, perfmon_capable(),
* in the context of the event owner, once, during the event_init().
*/
-#define SPE_PMU_HW_FLAGS_CX BIT(0)
+#define SPE_PMU_HW_FLAGS_CX 0x00001
+
+static_assert((PERF_EVENT_FLAG_ARCH & SPE_PMU_HW_FLAGS_CX) == SPE_PMU_HW_FLAGS_CX);
static void set_spe_event_has_cx(struct perf_event *event)
{
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 0407a38b470a..0356cb6a215d 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -24,10 +24,11 @@
/*
* ARM PMU hw_event flags
*/
-/* Event uses a 64bit counter */
-#define ARMPMU_EVT_64BIT 1
-/* Event uses a 47bit counter */
-#define ARMPMU_EVT_47BIT 2
+#define ARMPMU_EVT_64BIT 0x00001 /* Event uses a 64bit counter */
+#define ARMPMU_EVT_47BIT 0x00002 /* Event uses a 47bit counter */
+
+static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_64BIT) == ARMPMU_EVT_64BIT);
+static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_47BIT) == ARMPMU_EVT_47BIT);
#define HW_OP_UNSUPPORTED 0xFFFF
#define C(_x) PERF_COUNT_HW_CACHE_##_x
--
2.25.1
* [PATCH V2 4/4] x86/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
2022-09-05 5:42 ` Anshuman Khandual
@ 2022-09-05 5:42 ` Anshuman Khandual
-1 siblings, 0 replies; 20+ messages in thread
From: Anshuman Khandual @ 2022-09-05 5:42 UTC (permalink / raw)
To: linux-kernel, linux-perf-users, peterz
Cc: Anshuman Khandual, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86, Thomas Gleixner,
Borislav Petkov
Ensure all platform specific event flags are within PERF_EVENT_FLAG_ARCH.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/x86/events/perf_event.h | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index ba3d24a6a4ec..12136a33e9b7 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -86,6 +86,26 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode)
#define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
#define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LDLAT) == PERF_X86_EVENT_PEBS_LDLAT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_ST) == PERF_X86_EVENT_PEBS_ST);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_ST_HSW) == PERF_X86_EVENT_PEBS_ST_HSW);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LD_HSW) == PERF_X86_EVENT_PEBS_LD_HSW);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_NA_HSW) == PERF_X86_EVENT_PEBS_NA_HSW);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_EXCL) == PERF_X86_EVENT_EXCL);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_DYNAMIC) == PERF_X86_EVENT_DYNAMIC);
+
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_EXCL_ACCT) == PERF_X86_EVENT_EXCL_ACCT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_AUTO_RELOAD) == PERF_X86_EVENT_AUTO_RELOAD);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_LARGE_PEBS) == PERF_X86_EVENT_LARGE_PEBS);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_VIA_PT) == PERF_X86_EVENT_PEBS_VIA_PT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PAIR) == PERF_X86_EVENT_PAIR);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_LBR_SELECT) == PERF_X86_EVENT_LBR_SELECT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_TOPDOWN) == PERF_X86_EVENT_TOPDOWN);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_STLAT) == PERF_X86_EVENT_PEBS_STLAT);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_AMD_BRS) == PERF_X86_EVENT_AMD_BRS);
+static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LAT_HYBRID)
+ == PERF_X86_EVENT_PEBS_LAT_HYBRID);
+
static inline bool is_topdown_count(struct perf_event *event)
{
return event->hw.flags & PERF_X86_EVENT_TOPDOWN;
--
2.25.1
* Re: [PATCH V2 3/4] arm64/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
2022-09-05 5:42 ` Anshuman Khandual
@ 2022-09-05 9:10 ` James Clark
-1 siblings, 0 replies; 20+ messages in thread
From: James Clark @ 2022-09-05 9:10 UTC (permalink / raw)
To: Anshuman Khandual
Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Namhyung Kim, linux-arm-kernel,
x86, Will Deacon, Catalin Marinas, linux-kernel,
linux-perf-users, peterz
On 05/09/2022 06:42, Anshuman Khandual wrote:
> Ensure all platform specific event flags are within PERF_EVENT_FLAG_ARCH.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Jiri Olsa <jolsa@kernel.org>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-perf-users@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> drivers/perf/arm_spe_pmu.c | 4 +++-
> include/linux/perf/arm_pmu.h | 9 +++++----
> 2 files changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
> index b65a7d9640e1..db8a0a841062 100644
> --- a/drivers/perf/arm_spe_pmu.c
> +++ b/drivers/perf/arm_spe_pmu.c
> @@ -44,7 +44,9 @@
> * This allows us to perform the check, i.e, perfmon_capable(),
> * in the context of the event owner, once, during the event_init().
> */
> -#define SPE_PMU_HW_FLAGS_CX BIT(0)
> +#define SPE_PMU_HW_FLAGS_CX 0x00001
> +
> +static_assert((PERF_EVENT_FLAG_ARCH & SPE_PMU_HW_FLAGS_CX) == SPE_PMU_HW_FLAGS_CX);
>
> static void set_spe_event_has_cx(struct perf_event *event)
> {
> diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
> index 0407a38b470a..0356cb6a215d 100644
> --- a/include/linux/perf/arm_pmu.h
> +++ b/include/linux/perf/arm_pmu.h
> @@ -24,10 +24,11 @@
> /*
> * ARM PMU hw_event flags
> */
> -/* Event uses a 64bit counter */
> -#define ARMPMU_EVT_64BIT 1
> -/* Event uses a 47bit counter */
> -#define ARMPMU_EVT_47BIT 2
> +#define ARMPMU_EVT_64BIT 0x00001 /* Event uses a 64bit counter */
> +#define ARMPMU_EVT_47BIT 0x00002 /* Event uses a 47bit counter */
> +
Minor nit:
I don't think changing the definitions to hex adds anything except more
noise in the git blame.
Either way, for the whole set:
Reviewed-by: James Clark <james.clark@arm.com>
> +static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_64BIT) == ARMPMU_EVT_64BIT);
> +static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_47BIT) == ARMPMU_EVT_47BIT);
>
> #define HW_OP_UNSUPPORTED 0xFFFF
> #define C(_x) PERF_COUNT_HW_CACHE_##_x
* Re: [PATCH V2 3/4] arm64/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
2022-09-05 9:10 ` James Clark
@ 2022-09-06 2:57 ` Anshuman Khandual
-1 siblings, 0 replies; 20+ messages in thread
From: Anshuman Khandual @ 2022-09-06 2:57 UTC (permalink / raw)
To: James Clark
Cc: Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
Alexander Shishkin, Jiri Olsa, Namhyung Kim, linux-arm-kernel,
x86, Will Deacon, Catalin Marinas, linux-kernel,
linux-perf-users, peterz
On 9/5/22 14:40, James Clark wrote:
>> --- a/include/linux/perf/arm_pmu.h
>> +++ b/include/linux/perf/arm_pmu.h
>> @@ -24,10 +24,11 @@
>> /*
>> * ARM PMU hw_event flags
>> */
>> -/* Event uses a 64bit counter */
>> -#define ARMPMU_EVT_64BIT 1
>> -/* Event uses a 47bit counter */
>> -#define ARMPMU_EVT_47BIT 2
>> +#define ARMPMU_EVT_64BIT 0x00001 /* Event uses a 64bit counter */
>> +#define ARMPMU_EVT_47BIT 0x00002 /* Event uses a 47bit counter */
>> +
> Minor nit:
>
> I don't think changing the definitions to hex adds anything except more
> noise in the git blame.
The idea here was just to make these five-digit hex values, in accordance with
the PERF_EVENT_FLAG_ARCH mask, like the existing x86 platform event flags.
* Re: [PATCH V2 4/4] x86/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
2022-09-05 5:42 ` Anshuman Khandual
@ 2022-09-06 19:22 ` Peter Zijlstra
-1 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2022-09-06 19:22 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-kernel, linux-perf-users, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86, Thomas Gleixner,
Borislav Petkov
On Mon, Sep 05, 2022 at 11:12:39AM +0530, Anshuman Khandual wrote:
> arch/x86/events/perf_event.h | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index ba3d24a6a4ec..12136a33e9b7 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -86,6 +86,26 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode)
> #define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
> #define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
>
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LDLAT) == PERF_X86_EVENT_PEBS_LDLAT);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_ST) == PERF_X86_EVENT_PEBS_ST);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_ST_HSW) == PERF_X86_EVENT_PEBS_ST_HSW);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LD_HSW) == PERF_X86_EVENT_PEBS_LD_HSW);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_NA_HSW) == PERF_X86_EVENT_PEBS_NA_HSW);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_EXCL) == PERF_X86_EVENT_EXCL);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_DYNAMIC) == PERF_X86_EVENT_DYNAMIC);
> +
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_EXCL_ACCT) == PERF_X86_EVENT_EXCL_ACCT);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_AUTO_RELOAD) == PERF_X86_EVENT_AUTO_RELOAD);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_LARGE_PEBS) == PERF_X86_EVENT_LARGE_PEBS);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_VIA_PT) == PERF_X86_EVENT_PEBS_VIA_PT);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PAIR) == PERF_X86_EVENT_PAIR);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_LBR_SELECT) == PERF_X86_EVENT_LBR_SELECT);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_TOPDOWN) == PERF_X86_EVENT_TOPDOWN);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_STLAT) == PERF_X86_EVENT_PEBS_STLAT);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_AMD_BRS) == PERF_X86_EVENT_AMD_BRS);
> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LAT_HYBRID)
> + == PERF_X86_EVENT_PEBS_LAT_HYBRID);
That's not half tedious...
How about something like so?
---
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -64,27 +64,25 @@ static inline bool constraint_match(stru
return ((ecode & c->cmask) - c->code) <= (u64)c->size;
}
+#define PERF_ARCH(name, val) \
+ PERF_X86_EVENT_##name = val,
+
/*
* struct hw_perf_event.flags flags
*/
-#define PERF_X86_EVENT_PEBS_LDLAT 0x00001 /* ld+ldlat data address sampling */
-#define PERF_X86_EVENT_PEBS_ST 0x00002 /* st data address sampling */
-#define PERF_X86_EVENT_PEBS_ST_HSW 0x00004 /* haswell style datala, store */
-#define PERF_X86_EVENT_PEBS_LD_HSW 0x00008 /* haswell style datala, load */
-#define PERF_X86_EVENT_PEBS_NA_HSW 0x00010 /* haswell style datala, unknown */
-#define PERF_X86_EVENT_EXCL 0x00020 /* HT exclusivity on counter */
-#define PERF_X86_EVENT_DYNAMIC 0x00040 /* dynamic alloc'd constraint */
-
-#define PERF_X86_EVENT_EXCL_ACCT 0x00100 /* accounted EXCL event */
-#define PERF_X86_EVENT_AUTO_RELOAD 0x00200 /* use PEBS auto-reload */
-#define PERF_X86_EVENT_LARGE_PEBS 0x00400 /* use large PEBS */
-#define PERF_X86_EVENT_PEBS_VIA_PT 0x00800 /* use PT buffer for PEBS */
-#define PERF_X86_EVENT_PAIR 0x01000 /* Large Increment per Cycle */
-#define PERF_X86_EVENT_LBR_SELECT 0x02000 /* Save/Restore MSR_LBR_SELECT */
-#define PERF_X86_EVENT_TOPDOWN 0x04000 /* Count Topdown slots/metrics events */
-#define PERF_X86_EVENT_PEBS_STLAT 0x08000 /* st+stlat data address sampling */
-#define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
-#define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
+enum {
+#include "perf_event_flags.h"
+};
+
+#undef PERF_ARCH
+
+#define PERF_ARCH(name, val) \
+ static_assert((PERF_X86_EVENT_##name & PERF_EVENT_FLAG_ARCH) == \
+ PERF_X86_EVENT_##name);
+
+#include "perf_event_flags.h"
+
+#undef PERF_ARCH
static inline bool is_topdown_count(struct perf_event *event)
{
--- /dev/null
+++ b/arch/x86/events/perf_event_flags.h
@@ -0,0 +1,22 @@
+
+/*
+ * struct hw_perf_event.flags flags
+ */
+PERF_ARCH(PEBS_LDLAT, 0x00001) /* ld+ldlat data address sampling */
+PERF_ARCH(PEBS_ST, 0x00002) /* st data address sampling */
+PERF_ARCH(PEBS_ST_HSW, 0x00004) /* haswell style datala, store */
+PERF_ARCH(PEBS_LD_HSW, 0x00008) /* haswell style datala, load */
+PERF_ARCH(PEBS_NA_HSW, 0x00010) /* haswell style datala, unknown */
+PERF_ARCH(EXCL, 0x00020) /* HT exclusivity on counter */
+PERF_ARCH(DYNAMIC, 0x00040) /* dynamic alloc'd constraint */
+ /* 0x00080 */
+PERF_ARCH(EXCL_ACCT, 0x00100) /* accounted EXCL event */
+PERF_ARCH(AUTO_RELOAD, 0x00200) /* use PEBS auto-reload */
+PERF_ARCH(LARGE_PEBS, 0x00400) /* use large PEBS */
+PERF_ARCH(PEBS_VIA_PT, 0x00800) /* use PT buffer for PEBS */
+PERF_ARCH(PAIR, 0x01000) /* Large Increment per Cycle */
+PERF_ARCH(LBR_SELECT, 0x02000) /* Save/Restore MSR_LBR_SELECT */
+PERF_ARCH(TOPDOWN, 0x04000) /* Count Topdown slots/metrics events */
+PERF_ARCH(PEBS_STLAT, 0x08000) /* st+stlat data address sampling */
+PERF_ARCH(AMD_BRS, 0x10000) /* AMD Branch Sampling */
+PERF_ARCH(PEBS_LAT_HYBRID, 0x20000) /* ld and st lat for hybrid */
* Re: [PATCH V2 4/4] x86/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
2022-09-06 19:22 ` Peter Zijlstra
@ 2022-09-07 5:27 ` Anshuman Khandual
-1 siblings, 0 replies; 20+ messages in thread
From: Anshuman Khandual @ 2022-09-07 5:27 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, linux-perf-users, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86, Thomas Gleixner,
Borislav Petkov
On 9/7/22 00:52, Peter Zijlstra wrote:
> On Mon, Sep 05, 2022 at 11:12:39AM +0530, Anshuman Khandual wrote:
>
>> arch/x86/events/perf_event.h | 20 ++++++++++++++++++++
>> 1 file changed, 20 insertions(+)
>>
>> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
>> index ba3d24a6a4ec..12136a33e9b7 100644
>> --- a/arch/x86/events/perf_event.h
>> +++ b/arch/x86/events/perf_event.h
>> @@ -86,6 +86,26 @@ static inline bool constraint_match(struct event_constraint *c, u64 ecode)
>> #define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
>> #define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
>>
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LDLAT) == PERF_X86_EVENT_PEBS_LDLAT);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_ST) == PERF_X86_EVENT_PEBS_ST);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_ST_HSW) == PERF_X86_EVENT_PEBS_ST_HSW);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LD_HSW) == PERF_X86_EVENT_PEBS_LD_HSW);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_NA_HSW) == PERF_X86_EVENT_PEBS_NA_HSW);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_EXCL) == PERF_X86_EVENT_EXCL);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_DYNAMIC) == PERF_X86_EVENT_DYNAMIC);
>> +
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_EXCL_ACCT) == PERF_X86_EVENT_EXCL_ACCT);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_AUTO_RELOAD) == PERF_X86_EVENT_AUTO_RELOAD);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_LARGE_PEBS) == PERF_X86_EVENT_LARGE_PEBS);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_VIA_PT) == PERF_X86_EVENT_PEBS_VIA_PT);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PAIR) == PERF_X86_EVENT_PAIR);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_LBR_SELECT) == PERF_X86_EVENT_LBR_SELECT);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_TOPDOWN) == PERF_X86_EVENT_TOPDOWN);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_STLAT) == PERF_X86_EVENT_PEBS_STLAT);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_AMD_BRS) == PERF_X86_EVENT_AMD_BRS);
>> +static_assert((PERF_EVENT_FLAG_ARCH & PERF_X86_EVENT_PEBS_LAT_HYBRID)
>> + == PERF_X86_EVENT_PEBS_LAT_HYBRID);
>
>
> That's not half tedious...
>
> How about something like so?
Makes sense, will fold this back. Could I also include your "Signed-off-by:"
for this patch?
- Anshuman
>
> ---
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -64,27 +64,25 @@ static inline bool constraint_match(stru
> return ((ecode & c->cmask) - c->code) <= (u64)c->size;
> }
>
> +#define PERF_ARCH(name, val) \
> + PERF_X86_EVENT_##name = val,
> +
> /*
> * struct hw_perf_event.flags flags
> */
> -#define PERF_X86_EVENT_PEBS_LDLAT 0x00001 /* ld+ldlat data address sampling */
> -#define PERF_X86_EVENT_PEBS_ST 0x00002 /* st data address sampling */
> -#define PERF_X86_EVENT_PEBS_ST_HSW 0x00004 /* haswell style datala, store */
> -#define PERF_X86_EVENT_PEBS_LD_HSW 0x00008 /* haswell style datala, load */
> -#define PERF_X86_EVENT_PEBS_NA_HSW 0x00010 /* haswell style datala, unknown */
> -#define PERF_X86_EVENT_EXCL 0x00020 /* HT exclusivity on counter */
> -#define PERF_X86_EVENT_DYNAMIC 0x00040 /* dynamic alloc'd constraint */
> -
> -#define PERF_X86_EVENT_EXCL_ACCT 0x00100 /* accounted EXCL event */
> -#define PERF_X86_EVENT_AUTO_RELOAD 0x00200 /* use PEBS auto-reload */
> -#define PERF_X86_EVENT_LARGE_PEBS 0x00400 /* use large PEBS */
> -#define PERF_X86_EVENT_PEBS_VIA_PT 0x00800 /* use PT buffer for PEBS */
> -#define PERF_X86_EVENT_PAIR 0x01000 /* Large Increment per Cycle */
> -#define PERF_X86_EVENT_LBR_SELECT 0x02000 /* Save/Restore MSR_LBR_SELECT */
> -#define PERF_X86_EVENT_TOPDOWN 0x04000 /* Count Topdown slots/metrics events */
> -#define PERF_X86_EVENT_PEBS_STLAT 0x08000 /* st+stlat data address sampling */
> -#define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
> -#define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
> +enum {
> +#include "perf_event_flags.h"
> +};
> +
> +#undef PERF_ARCH
> +
> +#define PERF_ARCH(name, val) \
> + static_assert((PERF_X86_EVENT_##name & PERF_EVENT_FLAG_ARCH) == \
> + PERF_X86_EVENT_##name);
> +
> +#include "perf_event_flags.h"
> +
> +#undef PERF_ARCH
>
> static inline bool is_topdown_count(struct perf_event *event)
> {
> --- /dev/null
> +++ b/arch/x86/events/perf_event_flags.h
> @@ -0,0 +1,22 @@
> +
> +/*
> + * struct hw_perf_event.flags flags
> + */
> +PERF_ARCH(PEBS_LDLAT, 0x00001) /* ld+ldlat data address sampling */
> +PERF_ARCH(PEBS_ST, 0x00002) /* st data address sampling */
> +PERF_ARCH(PEBS_ST_HSW, 0x00004) /* haswell style datala, store */
> +PERF_ARCH(PEBS_LD_HSW, 0x00008) /* haswell style datala, load */
> +PERF_ARCH(PEBS_NA_HSW, 0x00010) /* haswell style datala, unknown */
> +PERF_ARCH(EXCL, 0x00020) /* HT exclusivity on counter */
> +PERF_ARCH(DYNAMIC, 0x00040) /* dynamic alloc'd constraint */
> + /* 0x00080 */
> +PERF_ARCH(EXCL_ACCT, 0x00100) /* accounted EXCL event */
> +PERF_ARCH(AUTO_RELOAD, 0x00200) /* use PEBS auto-reload */
> +PERF_ARCH(LARGE_PEBS, 0x00400) /* use large PEBS */
> +PERF_ARCH(PEBS_VIA_PT, 0x00800) /* use PT buffer for PEBS */
> +PERF_ARCH(PAIR, 0x01000) /* Large Increment per Cycle */
> +PERF_ARCH(LBR_SELECT, 0x02000) /* Save/Restore MSR_LBR_SELECT */
> +PERF_ARCH(TOPDOWN, 0x04000) /* Count Topdown slots/metrics events */
> +PERF_ARCH(PEBS_STLAT, 0x08000) /* st+stlat data address sampling */
> +PERF_ARCH(AMD_BRS, 0x10000) /* AMD Branch Sampling */
> +PERF_ARCH(PEBS_LAT_HYBRID, 0x20000) /* ld and st lat for hybrid */
* Re: [PATCH V2 4/4] x86/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
2022-09-07 5:27 ` Anshuman Khandual
@ 2022-09-07 8:30 ` Peter Zijlstra
-1 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2022-09-07 8:30 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-kernel, linux-perf-users, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86, Thomas Gleixner,
Borislav Petkov
On Wed, Sep 07, 2022 at 10:57:35AM +0530, Anshuman Khandual wrote:
> > How about something like so?
>
> Makes sense, will fold this back. Could I also include your "Signed-off-by:"
> for this patch ?
>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> >
> > ---
> > --- a/arch/x86/events/perf_event.h
> > +++ b/arch/x86/events/perf_event.h
> > @@ -64,27 +64,25 @@ static inline bool constraint_match(stru
> > return ((ecode & c->cmask) - c->code) <= (u64)c->size;
> > }
> >
> > +#define PERF_ARCH(name, val) \
> > + PERF_X86_EVENT_##name = val,
> > +
> > /*
> > * struct hw_perf_event.flags flags
> > */
> > -#define PERF_X86_EVENT_PEBS_LDLAT 0x00001 /* ld+ldlat data address sampling */
> > -#define PERF_X86_EVENT_PEBS_ST 0x00002 /* st data address sampling */
> > -#define PERF_X86_EVENT_PEBS_ST_HSW 0x00004 /* haswell style datala, store */
> > -#define PERF_X86_EVENT_PEBS_LD_HSW 0x00008 /* haswell style datala, load */
> > -#define PERF_X86_EVENT_PEBS_NA_HSW 0x00010 /* haswell style datala, unknown */
> > -#define PERF_X86_EVENT_EXCL 0x00020 /* HT exclusivity on counter */
> > -#define PERF_X86_EVENT_DYNAMIC 0x00040 /* dynamic alloc'd constraint */
> > -
> > -#define PERF_X86_EVENT_EXCL_ACCT 0x00100 /* accounted EXCL event */
> > -#define PERF_X86_EVENT_AUTO_RELOAD 0x00200 /* use PEBS auto-reload */
> > -#define PERF_X86_EVENT_LARGE_PEBS 0x00400 /* use large PEBS */
> > -#define PERF_X86_EVENT_PEBS_VIA_PT 0x00800 /* use PT buffer for PEBS */
> > -#define PERF_X86_EVENT_PAIR 0x01000 /* Large Increment per Cycle */
> > -#define PERF_X86_EVENT_LBR_SELECT 0x02000 /* Save/Restore MSR_LBR_SELECT */
> > -#define PERF_X86_EVENT_TOPDOWN 0x04000 /* Count Topdown slots/metrics events */
> > -#define PERF_X86_EVENT_PEBS_STLAT 0x08000 /* st+stlat data address sampling */
> > -#define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
> > -#define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
> > +enum {
> > +#include "perf_event_flags.h"
> > +};
> > +
> > +#undef PERF_ARCH
> > +
> > +#define PERF_ARCH(name, val) \
> > + static_assert((PERF_X86_EVENT_##name & PERF_EVENT_FLAG_ARCH) == \
> > + PERF_X86_EVENT_##name);
> > +
> > +#include "perf_event_flags.h"
> > +
> > +#undef PERF_ARCH
> >
> > static inline bool is_topdown_count(struct perf_event *event)
> > {
> > --- /dev/null
> > +++ b/arch/x86/events/perf_event_flags.h
> > @@ -0,0 +1,22 @@
> > +
> > +/*
> > + * struct hw_perf_event.flags flags
> > + */
> > +PERF_ARCH(PEBS_LDLAT, 0x00001) /* ld+ldlat data address sampling */
> > +PERF_ARCH(PEBS_ST, 0x00002) /* st data address sampling */
> > +PERF_ARCH(PEBS_ST_HSW, 0x00004) /* haswell style datala, store */
> > +PERF_ARCH(PEBS_LD_HSW, 0x00008) /* haswell style datala, load */
> > +PERF_ARCH(PEBS_NA_HSW, 0x00010) /* haswell style datala, unknown */
> > +PERF_ARCH(EXCL, 0x00020) /* HT exclusivity on counter */
> > +PERF_ARCH(DYNAMIC, 0x00040) /* dynamic alloc'd constraint */
> > + /* 0x00080 */
> > +PERF_ARCH(EXCL_ACCT, 0x00100) /* accounted EXCL event */
> > +PERF_ARCH(AUTO_RELOAD, 0x00200) /* use PEBS auto-reload */
> > +PERF_ARCH(LARGE_PEBS, 0x00400) /* use large PEBS */
> > +PERF_ARCH(PEBS_VIA_PT, 0x00800) /* use PT buffer for PEBS */
> > +PERF_ARCH(PAIR, 0x01000) /* Large Increment per Cycle */
> > +PERF_ARCH(LBR_SELECT, 0x02000) /* Save/Restore MSR_LBR_SELECT */
> > +PERF_ARCH(TOPDOWN, 0x04000) /* Count Topdown slots/metrics events */
> > +PERF_ARCH(PEBS_STLAT, 0x08000) /* st+stlat data address sampling */
> > +PERF_ARCH(AMD_BRS, 0x10000) /* AMD Branch Sampling */
> > +PERF_ARCH(PEBS_LAT_HYBRID, 0x20000) /* ld and st lat for hybrid */
* Re: [PATCH V2 4/4] x86/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH
@ 2022-09-07 8:30 ` Peter Zijlstra
0 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2022-09-07 8:30 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-kernel, linux-perf-users, James Clark, Ingo Molnar,
Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, linux-arm-kernel, x86, Thomas Gleixner,
Borislav Petkov
On Wed, Sep 07, 2022 at 10:57:35AM +0530, Anshuman Khandual wrote:
> > How about something like so?
>
> Makes sense, will fold this back. Could I also include your "Signed-off-by:"
> for this patch ?
>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> >
> > ---
> > --- a/arch/x86/events/perf_event.h
> > +++ b/arch/x86/events/perf_event.h
> > @@ -64,27 +64,25 @@ static inline bool constraint_match(stru
> > return ((ecode & c->cmask) - c->code) <= (u64)c->size;
> > }
> >
> > +#define PERF_ARCH(name, val) \
> > + PERF_X86_EVENT_##name = val,
> > +
> > /*
> > * struct hw_perf_event.flags flags
> > */
> > -#define PERF_X86_EVENT_PEBS_LDLAT 0x00001 /* ld+ldlat data address sampling */
> > -#define PERF_X86_EVENT_PEBS_ST 0x00002 /* st data address sampling */
> > -#define PERF_X86_EVENT_PEBS_ST_HSW 0x00004 /* haswell style datala, store */
> > -#define PERF_X86_EVENT_PEBS_LD_HSW 0x00008 /* haswell style datala, load */
> > -#define PERF_X86_EVENT_PEBS_NA_HSW 0x00010 /* haswell style datala, unknown */
> > -#define PERF_X86_EVENT_EXCL 0x00020 /* HT exclusivity on counter */
> > -#define PERF_X86_EVENT_DYNAMIC 0x00040 /* dynamic alloc'd constraint */
> > -
> > -#define PERF_X86_EVENT_EXCL_ACCT 0x00100 /* accounted EXCL event */
> > -#define PERF_X86_EVENT_AUTO_RELOAD 0x00200 /* use PEBS auto-reload */
> > -#define PERF_X86_EVENT_LARGE_PEBS 0x00400 /* use large PEBS */
> > -#define PERF_X86_EVENT_PEBS_VIA_PT 0x00800 /* use PT buffer for PEBS */
> > -#define PERF_X86_EVENT_PAIR 0x01000 /* Large Increment per Cycle */
> > -#define PERF_X86_EVENT_LBR_SELECT 0x02000 /* Save/Restore MSR_LBR_SELECT */
> > -#define PERF_X86_EVENT_TOPDOWN 0x04000 /* Count Topdown slots/metrics events */
> > -#define PERF_X86_EVENT_PEBS_STLAT 0x08000 /* st+stlat data address sampling */
> > -#define PERF_X86_EVENT_AMD_BRS 0x10000 /* AMD Branch Sampling */
> > -#define PERF_X86_EVENT_PEBS_LAT_HYBRID 0x20000 /* ld and st lat for hybrid */
> > +enum {
> > +#include "perf_event_flags.h"
> > +};
> > +
> > +#undef PERF_ARCH
> > +
> > +#define PERF_ARCH(name, val) \
> > + static_assert((PERF_X86_EVENT_##name & PERF_EVENT_FLAG_ARCH) == \
> > + PERF_X86_EVENT_##name);
> > +
> > +#include "perf_event_flags.h"
> > +
> > +#undef PERF_ARCH
> >
> > static inline bool is_topdown_count(struct perf_event *event)
> > {
> > --- /dev/null
> > +++ b/arch/x86/events/perf_event_flags.h
> > @@ -0,0 +1,22 @@
> > +
> > +/*
> > + * struct hw_perf_event.flags flags
> > + */
> > +PERF_ARCH(PEBS_LDLAT, 0x00001) /* ld+ldlat data address sampling */
> > +PERF_ARCH(PEBS_ST, 0x00002) /* st data address sampling */
> > +PERF_ARCH(PEBS_ST_HSW, 0x00004) /* haswell style datala, store */
> > +PERF_ARCH(PEBS_LD_HSW, 0x00008) /* haswell style datala, load */
> > +PERF_ARCH(PEBS_NA_HSW, 0x00010) /* haswell style datala, unknown */
> > +PERF_ARCH(EXCL, 0x00020) /* HT exclusivity on counter */
> > +PERF_ARCH(DYNAMIC, 0x00040) /* dynamic alloc'd constraint */
> > + /* 0x00080 */
> > +PERF_ARCH(EXCL_ACCT, 0x00100) /* accounted EXCL event */
> > +PERF_ARCH(AUTO_RELOAD, 0x00200) /* use PEBS auto-reload */
> > +PERF_ARCH(LARGE_PEBS, 0x00400) /* use large PEBS */
> > +PERF_ARCH(PEBS_VIA_PT, 0x00800) /* use PT buffer for PEBS */
> > +PERF_ARCH(PAIR, 0x01000) /* Large Increment per Cycle */
> > +PERF_ARCH(LBR_SELECT, 0x02000) /* Save/Restore MSR_LBR_SELECT */
> > +PERF_ARCH(TOPDOWN, 0x04000) /* Count Topdown slots/metrics events */
> > +PERF_ARCH(PEBS_STLAT, 0x08000) /* st+stlat data address sampling */
> > +PERF_ARCH(AMD_BRS, 0x10000) /* AMD Branch Sampling */
> > +PERF_ARCH(PEBS_LAT_HYBRID, 0x20000) /* ld and st lat for hybrid */
Thread overview: 20+ messages
2022-09-05  5:42 [PATCH V2 0/4] perf/core: Assert PERF_EVENT_FLAG_ARCH is followed Anshuman Khandual
2022-09-05  5:42 ` [PATCH V2 1/4] perf/core: Expand PERF_EVENT_FLAG_ARCH Anshuman Khandual
2022-09-05  5:42 ` [PATCH V2 2/4] perf/core: Assert PERF_EVENT_FLAG_ARCH does not overlap with generic flags Anshuman Khandual
2022-09-05  5:42 ` [PATCH V2 3/4] arm64/perf: Assert all platform event flags are within PERF_EVENT_FLAG_ARCH Anshuman Khandual
2022-09-05  9:10   ` James Clark
2022-09-06  2:57     ` Anshuman Khandual
2022-09-05  5:42 ` [PATCH V2 4/4] x86/perf: " Anshuman Khandual
2022-09-06 19:22   ` Peter Zijlstra
2022-09-07  5:27     ` Anshuman Khandual
2022-09-07  8:30       ` Peter Zijlstra