From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 28 Sep 2022 00:21:59 -0700
In-Reply-To: <20220928072204.1613330-1-irogers@google.com>
Message-Id: <20220928072204.1613330-18-irogers@google.com>
Mime-Version: 1.0
References: <20220928072204.1613330-1-irogers@google.com>
X-Mailer: git-send-email 2.37.3.998.g577e59143f-goog
Subject: [PATCH v1 17/22] perf vendor events: Update Intel sandybridge
From: Ian Rogers
To: Zhengjun Xing, Kan Liang, Andi Kleen, perry.taylor@intel.com,
    caleb.biggers@intel.com, kshipra.bopardikar@intel.com,
    samantha.alt@intel.com, ahmad.yasin@intel.com, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
    Jiri Olsa, Namhyung Kim, John Garry, James Clark, Kajol Jain,
    Thomas Richter, Miaoqian Lin, Florian Fischer, linux-perf-users@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Stephane Eranian, Ian Rogers
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

Events remain at v17, and the metrics are based on TMA 4.4 full.

Use script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf

Updates include:
- Rename of topdown TMA metrics from Frontend_Bound to tma_frontend_bound.
- _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode are
  correctly expanded in the single main metric.
- Addition of all 6 levels of TMA metrics. Child metrics are placed in a
  group named after their parent, allowing children of a metric to be
  easily measured using the metric name with a _group suffix.
- ## and ##? operators are correctly expanded.
- The locate-with column is added to the long description describing a
  sampling event.
- Metrics are written in terms of other metrics to reduce the expression
  size and increase readability.

Tested with 'perf test':
 10: PMU events                                          :
 10.1: PMU event table sanity                            : Ok
 10.2: PMU event map aliases                             : Ok
 10.3: Parsing of PMU event table metrics                : Ok
 10.4: Parsing of PMU event table metrics with fake PMUs : Ok

Signed-off-by: Ian Rogers
---
 .../arch/x86/sandybridge/snb-metrics.json | 279 +++++++++++++-----
 1 file changed, 207 insertions(+), 72 deletions(-)

diff --git a/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json b/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json
index ae7ed267b2a2..a86e11d1878d 100644
--- a/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json
@@ -1,64 +1,217 @@
[ { "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Frontend_Bound", + "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / SLOTS", + "MetricGroup": "PGO;TopdownL1;tma_L1_group", + "MetricName": "tma_frontend_bound", "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound." }, { - "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend.
SMT version; use wh= en SMT is enabled and measuring per logical CPU.", - "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHA= LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHA= LTED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Frontend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here the processor's Frontend undersupplies its Backend. Frontend denotes t= he first part of the processor core responsible to fetch operations that ar= e executed later on by the Backend part. Within the Frontend; a branch pred= ictor predicts the next address to fetch; cache-lines are fetched from the = memory subsystem; parsed into instructions; and lastly decoded into micro-o= perations (uops). Ideally the Frontend can issue Machine_Width uops every c= ycle to the Backend. Frontend Bound denotes unutilized issue-slots when the= re is no Backend stall; i.e. bubbles where Frontend delivered no uops while= Backend could have accepted them. For example; stalls due to instruction-c= ache misses would be categorized under Frontend Bound. SMT version; use whe= n SMT is enabled and measuring per logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend latency issues", + "MetricExpr": "4 * min(CPU_CLK_UNHALTED.THREAD, IDQ_UOPS_NOT_DELIV= ERED.CYCLES_0_UOPS_DELIV.CORE) / SLOTS", + "MetricGroup": "Frontend;TopdownL2;tma_L2_group;tma_frontend_bound= _group", + "MetricName": "tma_fetch_latency", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend latency issues. For example; instruction-= cache misses; iTLB misses or fetch stalls after a branch misprediction are = categorized under Frontend Latency. In such cases; the Frontend eventually = delivers no uops for some period. Sample with: RS_EVENTS.EMPTY_END" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Instruction TLB (ITLB) misses", + "MetricExpr": "(12 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURAT= ION) / CLKS", + "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_fetch_lat= ency_group", + "MetricName": "tma_itlb_misses", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_M= ISSES.WALK_COMPLETED" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to Branch Resteers", + "MetricExpr": "12 * (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS= .COUNT + BACLEARS.ANY) / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_branch_resteers", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to Branch Resteers. Branch Resteers estimates the Fro= ntend delay in fetching operations from corrected path; following all sorts= of miss-predicted branches. For example; branchy code with lots of miss-pr= edictions might get categorized under Branch Resteers. Note the value of th= is node may overlap with its siblings. 
Sample with: BR_MISP_RETIRED.ALL_BRA= NCHES" + }, + { + "BriefDescription": "This metric represents fraction of cycles the= CPU was stalled due to switches from DSB to MITE pipelines", + "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / CLKS", + "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_fetch_latency_group= ", + "MetricName": "tma_dsb_switches", + "PublicDescription": "This metric represents fraction of cycles th= e CPU was stalled due to switches from DSB to MITE pipelines. The DSB (deco= ded i-cache) is a Uop Cache where the front-end directly delivers Uops (mic= ro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter la= tency and delivered higher bandwidth than the MITE (legacy instruction deco= de pipeline). Switching between the two pipelines can cause penalties hence= this metric measures the exposed penalty." + }, + { + "BriefDescription": "This metric represents fraction of cycles CPU= was stalled due to Length Changing Prefixes (LCPs)", + "MetricExpr": "ILD_STALL.LCP / CLKS", + "MetricGroup": "FetchLat;TopdownL3;tma_fetch_latency_group", + "MetricName": "tma_lcp", + "PublicDescription": "This metric represents fraction of cycles CP= U was stalled due to Length Changing Prefixes (LCPs). Using proper compiler= flags or Intel Compiler by default will certainly avoid this. #Link: Optim= ization Guide about LCP BKMs." + }, + { + "BriefDescription": "This metric estimates the fraction of cycles = when the CPU was stalled due to switches of uop delivery to the Microcode S= equencer (MS)", + "MetricExpr": "3 * IDQ.MS_SWITCHES / CLKS", + "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_fetch_latency_grou= p", + "MetricName": "tma_ms_switches", + "PublicDescription": "This metric estimates the fraction of cycles= when the CPU was stalled due to switches of uop delivery to the Microcode = Sequencer (MS). Commonly used instructions are optimized for delivery by th= e DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Cert= ain operations cannot be handled natively by the execution pipeline; and mu= st be performed by microcode (small programs injected into the execution st= ream). Switching to the MS too often can negatively impact performance. The= MS is designated to deliver long uop flows required by CISC instructions l= ike CPUID; or uncommon conditions like Floating Point Assists when dealing = with Denormals. Sample with: IDQ.MS_SWITCHES" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was stalled due to Frontend bandwidth issues", + "MetricExpr": "tma_frontend_bound - tma_fetch_latency", + "MetricGroup": "FetchBW;Frontend;TopdownL2;tma_L2_group;tma_fronte= nd_bound_group", + "MetricName": "tma_fetch_bandwidth", + "PublicDescription": "This metric represents fraction of slots the= CPU was stalled due to Frontend bandwidth issues. For example; inefficien= cies at the instruction decoders; or restrictions for caching in the DSB (d= ecoded uops cache) are categorized under Fetch Bandwidth. In such cases; th= e Frontend typically delivers suboptimal amount of uops to the Backend." 
}, { "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Bad_Speculation", + "MetricExpr": "(UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * = ((INT_MISC.RECOVERY_CYCLES_ANY / 2) if #SMT_on else INT_MISC.RECOVERY_CYCLE= S)) / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_bad_speculation", "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example." }, { - "BriefDescription": "This category represents fraction of slots wa= sted due to incorrect speculations. SMT version; use when SMT is enabled an= d measuring per logical CPU.", - "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 *= ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD = / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCL= K ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Bad_Speculation_SMT", - "PublicDescription": "This category represents fraction of slots w= asted due to incorrect speculations. This include slots used to issue uops = that do not eventually get retired and slots for which the issue-pipeline w= as blocked due to recovery from earlier incorrect speculation. For example;= wasted work due to miss-predicted branches are categorized under Bad Specu= lation category. Incorrect data speculation followed by Memory Ordering Nuk= es is another example. SMT version; use when SMT is enabled and measuring p= er logical CPU." + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Branch Misprediction", + "MetricExpr": "(BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.AL= L_BRANCHES + MACHINE_CLEARS.COUNT)) * tma_bad_speculation", + "MetricGroup": "BadSpec;BrMispredicts;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_branch_mispredicts", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Branch Misprediction. These slots are either wasted= by uops fetched from an incorrectly speculated program path; or stalls whe= n the out-of-order part of the machine needs to recover its state from a sp= eculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES" + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU has wasted due to Machine Clears", + "MetricExpr": "tma_bad_speculation - tma_branch_mispredicts", + "MetricGroup": "BadSpec;MachineClears;TopdownL2;tma_L2_group;tma_b= ad_speculation_group", + "MetricName": "tma_machine_clears", + "PublicDescription": "This metric represents fraction of slots the= CPU has wasted due to Machine Clears. These slots are either wasted by uo= ps fetched prior to the clear; or stalls the out-of-order portion of the ma= chine needs to recover its state after the clear. For example; this can hap= pen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modif= ying-Code (SMC) nukes. 
Sample with: MACHINE_CLEARS.COUNT" }, { "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend", - "MetricConstraint": "NO_NMI_WATCHDOG", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_U= NHALTED.THREAD)) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT= _MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)) + (UOPS_RETIRED.RE= TIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) )", - "MetricGroup": "TopdownL1", - "MetricName": "Backend_Bound", + "MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma= _retiring)", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_backend_bound", "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound." }, { - "BriefDescription": "This category represents fraction of slots wh= ere no uops are being delivered due to a lack of required resources for acc= epting new uops in the Backend. SMT version; use when SMT is enabled and me= asuring per logical CPU.", - "MetricExpr": "1 - ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_C= LK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_C= LK_UNHALTED.REF_XCLK ) ))) + (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS= + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.T= HREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.R= EF_XCLK ) ))) + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THRE= AD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_= XCLK ) ))) )", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Backend_Bound_SMT", - "PublicDescription": "This category represents fraction of slots w= here no uops are being delivered due to a lack of required resources for ac= cepting new uops in the Backend. Backend is the portion of the processor co= re where the out-of-order scheduler dispatches ready uops into their respec= tive execution units; and once completed these uops get retired according t= o program order. For example; stalls due to data-cache misses or stalls due= to the divider unit being overloaded are both categorized under Backend Bo= und. Backend Bound is further divided into two main categories: Memory Boun= d and Core Bound. SMT version; use when SMT is enabled and measuring per lo= gical CPU." 
+ "BriefDescription": "This metric represents fraction of slots the = Memory subsystem within the Backend was a bottleneck", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALL= S_L1D_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_= ACTIVITY.CYCLES_NO_DISPATCH) + cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=3D1@ - = cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_DISP= ATCHED.THREAD\\,cmask\\=3D2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency= > 0.1) else RESOURCE_STALLS.SB)) * tma_backend_bound", + "MetricGroup": "Backend;TopdownL2;tma_L2_group;tma_backend_bound_g= roup", + "MetricName": "tma_memory_bound", + "PublicDescription": "This metric represents fraction of slots the= Memory subsystem within the Backend was a bottleneck. Memory Bound estima= tes fraction of slots where pipeline is likely stalled due to demand load o= r store instructions. This accounts mainly for (1) non-completed in-flight = memory demand loads which coincides with execution units starvation; in add= ition to (2) cases where stores could impose backpressure on the pipeline w= hen many of them get buffered at the same time (less common out of the two)= ." + }, + { + "BriefDescription": "This metric roughly estimates the fraction of= cycles where the Data TLB (DTLB) was missed by load accesses", + "MetricExpr": "(7 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.W= ALK_DURATION) / CLKS", + "MetricGroup": "MemoryTLB;TopdownL4;tma_l1_bound_group", + "MetricName": "tma_dtlb_load", + "PublicDescription": "This metric roughly estimates the fraction o= f cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Trans= lation Look-aside Buffers) are processor caches for recently used entries o= ut of the Page Tables that are used to map virtual- to physical-addresses b= y the operating system. This metric approximates the potential delay of dem= and loads missing the first-level data TLB (assuming worst case scenario wi= th back to back misses to different pages). This includes hitting in the se= cond-level TLB (STLB) as well as performing a hardware page walk on an STLB= miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled due to loads accesses to L3 cache or contended with a sibling Core", + "MetricExpr": "(MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RET= IRED.LLC_HIT + 7 * MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS)) * CYCLE_ACTIVITY.S= TALLS_L2_PENDING / CLKS", + "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_mem= ory_bound_group", + "MetricName": "tma_l3_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled due to loads accesses to L3 cache or contended with a sibling Core.= Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency an= d increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS" + }, + { + "BriefDescription": "This metric estimates how often the CPU was s= talled on accesses to external memory (DRAM) by loads", + "MetricExpr": "(1 - (MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOP= S_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS))) * CYCLE_ACTI= VITY.STALLS_L2_PENDING / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_dram_bound", + "PublicDescription": "This metric estimates how often the CPU was = stalled on accesses to external memory (DRAM) by loads. Better caching can = improve the latency and increase performance. 
Sample with: MEM_LOAD_UOPS_RE= TIRED.L3_MISS_PS" + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the core's performance was likely hurt due to approaching bandwidth limit= s of external memory (DRAM)", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_O= UTSTANDING.ALL_DATA_RD\\,cmask\\=3D6@) / CLKS", + "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_bandwidth", + "PublicDescription": "This metric estimates fraction of cycles whe= re the core's performance was likely hurt due to approaching bandwidth limi= ts of external memory (DRAM). The underlying heuristic assumes that a simi= lar off-core traffic is generated by all IA cores. This metric does not agg= regate non-data-read requests by this logical processor; requests from othe= r IA Logical Processors/Physical Cores/sockets; or other non-IA devices lik= e GPU; hence the maximum external memory bandwidth limits may or may not be= approached when this metric is flagged (see Uncore counters for that)." + }, + { + "BriefDescription": "This metric estimates fraction of cycles wher= e the performance was likely hurt due to latency from external memory (DRAM= )", + "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTST= ANDING.CYCLES_WITH_DATA_RD) / CLKS - tma_mem_bandwidth", + "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_dram_bound_group", + "MetricName": "tma_mem_latency", + "PublicDescription": "This metric estimates fraction of cycles whe= re the performance was likely hurt due to latency from external memory (DRA= M). This metric does not aggregate requests from other Logical Processors/= Physical Cores/sockets (see Uncore counters for that)." + }, + { + "BriefDescription": "This metric estimates how often CPU was stall= ed due to RFO store memory accesses; RFO store issue a read-for-ownership = request before the write", + "MetricExpr": "RESOURCE_STALLS.SB / CLKS", + "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_memory_bound_gr= oup", + "MetricName": "tma_store_bound", + "PublicDescription": "This metric estimates how often CPU was stal= led due to RFO store memory accesses; RFO store issue a read-for-ownership= request before the write. Even though store accesses do not typically stal= l out-of-order CPUs; there are few cases where stores can lead to actual st= alls. This metric will be flagged should RFO stores be a bottleneck. Sample= with: MEM_UOPS_RETIRED.ALL_STORES_PS" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e Core non-memory issues were of a bottleneck", + "MetricExpr": "tma_backend_bound - tma_memory_bound", + "MetricGroup": "Backend;Compute;TopdownL2;tma_L2_group;tma_backend= _bound_group", + "MetricName": "tma_core_bound", + "PublicDescription": "This metric represents fraction of slots whe= re Core non-memory issues were of a bottleneck. Shortage in hardware compu= te resources; or dependencies in software's instructions are both categoriz= ed under Core Bound. Hence it may indicate the machine ran out of an out-of= -order resource; certain execution units are overloaded or dependencies in = program's data- or instruction-flow are limiting the performance (e.g. FP-c= hained long-latency arithmetic operations)." 
+ }, + { + "BriefDescription": "This metric represents fraction of cycles whe= re the Divider unit was active", + "MetricExpr": "ARITH.FPU_DIV_ACTIVE / CORE_CLKS", + "MetricGroup": "TopdownL3;tma_core_bound_group", + "MetricName": "tma_divider", + "PublicDescription": "This metric represents fraction of cycles wh= ere the Divider unit was active. Divide and square root instructions are pe= rformed by the Divider unit and can take considerably longer latency than i= nteger or Floating Point addition; subtraction; or multiplication. Sample w= ith: ARITH.DIVIDER_UOPS" + }, + { + "BriefDescription": "This metric estimates fraction of cycles the = CPU performance was potentially limited due to Core computation issues (non= divider-related)", + "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLE= S_NO_DISPATCH) + cpu@UOPS_DISPATCHED.THREAD\\,cmask\\=3D1@ - cpu@UOPS_DISPA= TCHED.THREAD\\,cmask\\=3D3@ if (IPC > 1.8) else cpu@UOPS_DISPATCHED.THREAD\= \,cmask\\=3D2@ - RS_EVENTS.EMPTY_CYCLES if (tma_fetch_latency > 0.1) else R= ESOURCE_STALLS.SB) - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCL= E_ACTIVITY.STALLS_L1D_PENDING)) / CLKS", + "MetricGroup": "PortsUtil;TopdownL3;tma_core_bound_group", + "MetricName": "tma_ports_utilization", + "PublicDescription": "This metric estimates fraction of cycles the= CPU performance was potentially limited due to Core computation issues (no= n divider-related). Two distinct categories can be attributed into this me= tric: (1) heavy data-dependency among contiguous instructions would manifes= t in this metric - such cases are often referred to as low Instruction Leve= l Parallelism (ILP). (2) Contention on some hardware execution unit other t= han Divider. For example; when there are too many multiply operations." }, { "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.T= HREAD)", - "MetricGroup": "TopdownL1", - "MetricName": "Retiring", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. " + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / SLOTS", + "MetricGroup": "TopdownL1;tma_L1_group", + "MetricName": "tma_retiring", + "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. 
Sample with: UOPS_RETIRE= D.RETIRE_SLOTS" + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring light-weight operations -- instructions that require= no more than one uop (micro-operation)", + "MetricExpr": "tma_retiring - tma_heavy_operations", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_light_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring light-weight operations -- instructions that requir= e no more than one uop (micro-operation). This correlates with total number= of instructions used by the program. A uops-per-instruction (see UPI metri= c) ratio of 1 or less should be expected for decently optimized software ru= nning on Intel Core/Xeon products. While this often indicates efficient X86= instructions were executed; high value does not necessarily mean better pe= rformance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST" + }, + { + "BriefDescription": "This metric represents overall arithmetic flo= ating-point (FP) operations fraction the CPU has executed (retired)", + "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector", + "MetricGroup": "HPC;TopdownL3;tma_light_operations_group", + "MetricName": "tma_fp_arith", + "PublicDescription": "This metric represents overall arithmetic fl= oating-point (FP) operations fraction the CPU has executed (retired). Note = this metric's value may exceed its parent due to use of \"Uops\" CountDomai= n and FMA double-counting." + }, + { + "BriefDescription": "This metric serves as an approximation of leg= acy x87 usage", + "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS * FP_COMP_OPS_EXE.X87 / U= OPS_DISPATCHED.THREAD", + "MetricGroup": "Compute;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_x87_use", + "PublicDescription": "This metric serves as an approximation of le= gacy x87 usage. It accounts for instructions beyond X87 FP arithmetic opera= tions; hence may be used as a thermometer to avoid X87 high usage and prefe= rably upgrade to modern ISA. See Tip under Tuning Hint." + }, + { + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) scalar uops fraction the CPU has retired", + "MetricExpr": "(FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EX= E.SSE_SCALAR_DOUBLE) / UOPS_DISPATCHED.THREAD", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_scalar", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) scalar uops fraction the CPU has retired. May overcount due to = FMA double counting." }, { - "BriefDescription": "This category represents fraction of slots ut= ilized by useful work i.e. issued uops that eventually get retired. SMT ver= sion; use when SMT is enabled and measuring per logical CPU.", - "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALT= ED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALT= ED.REF_XCLK ) ))", - "MetricGroup": "TopdownL1_SMT", - "MetricName": "Retiring_SMT", - "PublicDescription": "This category represents fraction of slots u= tilized by useful work i.e. issued uops that eventually get retired. Ideall= y; all pipeline slots would be attributed to the Retiring category. Retiri= ng of 100% would indicate the maximum Pipeline_Width throughput was achieve= d. Maximizing Retiring typically increases the Instructions-per-cycle (see= IPC metric). 
Note that a high Retiring value does not necessary mean there= is no room for more performance. For example; Heavy-operations or Microco= de Assists are categorized under Retiring. They often indicate suboptimal p= erformance and can often be optimized or avoided. SMT version; use when SMT= is enabled and measuring per logical CPU." + "BriefDescription": "This metric approximates arithmetic floating-= point (FP) vector uops fraction the CPU has retired aggregated across all v= ector widths", + "MetricExpr": "(FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + FP_COMP_OPS_EX= E.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE= ) / UOPS_DISPATCHED.THREAD", + "MetricGroup": "Compute;Flops;TopdownL4;tma_fp_arith_group", + "MetricName": "tma_fp_vector", + "PublicDescription": "This metric approximates arithmetic floating= -point (FP) vector uops fraction the CPU has retired aggregated across all = vector widths. May overcount due to FMA double counting." + }, + { + "BriefDescription": "This metric represents fraction of slots wher= e the CPU was retiring heavy-weight operations -- instructions that require= two or more uops or microcoded sequences", + "MetricExpr": "tma_microcode_sequencer", + "MetricGroup": "Retire;TopdownL2;tma_L2_group;tma_retiring_group", + "MetricName": "tma_heavy_operations", + "PublicDescription": "This metric represents fraction of slots whe= re the CPU was retiring heavy-weight operations -- instructions that requir= e two or more uops or microcoded sequences. This highly-correlates with the= uop length of these instructions/sequences." + }, + { + "BriefDescription": "This metric represents fraction of slots the = CPU was retiring uops fetched by the Microcode Sequencer (MS) unit", + "MetricExpr": "(UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY) * IDQ= .MS_UOPS / SLOTS", + "MetricGroup": "MicroSeq;TopdownL3;tma_heavy_operations_group", + "MetricName": "tma_microcode_sequencer", + "PublicDescription": "This metric represents fraction of slots the= CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The M= S is used for CISC instructions not supported by the default decoders (like= repeat move strings; or CPUID); or by microcode assists used to address so= me operation modes (like in Floating Point assists). These cases can often = be avoided. 
Sample with: IDQ.MS_UOPS" }, { "BriefDescription": "Instructions Per Cycle (per Logical Processor= )", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", + "MetricExpr": "INST_RETIRED.ANY / CLKS", "MetricGroup": "Ret;Summary", "MetricName": "IPC" }, @@ -70,8 +223,8 @@ }, { "BriefDescription": "Cycles Per Instruction (per Logical Processor= )", - "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)", - "MetricGroup": "Pipeline;Mem", + "MetricExpr": "1 / IPC", + "MetricGroup": "Mem;Pipeline", "MetricName": "CPI" }, { @@ -82,16 +235,10 @@ }, { "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "TmaL1", + "MetricExpr": "4 * CORE_CLKS", + "MetricGroup": "tma_L1_group", "MetricName": "SLOTS" }, - { - "BriefDescription": "Total issue-pipeline slots (per-Physical Core= till ICL; per-Logical Processor ICL onward)", - "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_C= LK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "TmaL1_SMT", - "MetricName": "SLOTS_SMT" - }, { "BriefDescription": "The ratio of Executed- by Issued-Uops", "MetricExpr": "UOPS_DISPATCHED.THREAD / UOPS_ISSUED.ANY", @@ -101,44 +248,32 @@ }, { "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;SMT;TmaL1", + "MetricExpr": "INST_RETIRED.ANY / CORE_CLKS", + "MetricGroup": "Ret;SMT;tma_L1_group", "MetricName": "CoreIPC" }, - { - "BriefDescription": "Instructions Per Cycle across hyper-threads (= per physical core)", - "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 = ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) = )", - "MetricGroup": "Ret;SMT;TmaL1_SMT", - "MetricName": "CoreIPC_SMT" - }, { "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * = ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIM= D_FP_256.PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD", - "MetricGroup": "Ret;Flops", + "MetricExpr": "(1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_O= PS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP= _COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_= 256.PACKED_SINGLE) / CORE_CLKS", + "MetricGroup": "Flops;Ret", "MetricName": "FLOPc" }, - { - "BriefDescription": "Floating Point Operations Per Cycle", - "MetricExpr": "( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP= _OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * = ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * SIM= D_FP_256.PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CL= K_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", - "MetricGroup": "Ret;Flops_SMT", - "MetricName": "FLOPc_SMT" - }, { "BriefDescription": "Instruction-Level-Parallelism (average number= of uops executed when there is execution) per-core", - "MetricExpr": "UOPS_DISPATCHED.THREAD / (( cpu@UOPS_DISPATCHED.COR= E\\,cmask\\=3D1@ / 2 ) if #SMT_on else cpu@UOPS_DISPATCHED.CORE\\,cmask\\= =3D1@)", + "MetricExpr": "UOPS_DISPATCHED.THREAD / ((cpu@UOPS_DISPATCHED.CORE= \\,cmask\\=3D1@ / 2) if #SMT_on else cpu@UOPS_DISPATCHED.CORE\\,cmask\\=3D1= @)", "MetricGroup": 
"Backend;Cor;Pipeline;PortsUtil", "MetricName": "ILP" }, { "BriefDescription": "Core actual clocks when any Logical Processor= is active on the Physical Core", - "MetricExpr": "( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_U= NHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )", + "MetricExpr": "((CPU_CLK_UNHALTED.THREAD / 2) * (1 + CPU_CLK_UNHAL= TED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK)) if #core_wide < 1 else = (CPU_CLK_UNHALTED.THREAD_ANY / 2) if #SMT_on else CLKS", "MetricGroup": "SMT", "MetricName": "CORE_CLKS" }, { - "BriefDescription": "Total number of retired Instructions, Sample = with: INST_RETIRED.PREC_DIST", + "BriefDescription": "Total number of retired Instructions Sample w= ith: INST_RETIRED.PREC_DIST", "MetricExpr": "INST_RETIRED.ANY", - "MetricGroup": "Summary;TmaL1", + "MetricGroup": "Summary;tma_L1_group", "MetricName": "Instructions" }, { @@ -149,7 +284,7 @@ }, { "BriefDescription": "Fraction of Uops delivered by the DSB (aka De= coded ICache; or Uop Cache)", - "MetricExpr": "IDQ.DSB_UOPS / (( IDQ.DSB_UOPS + LSD.UOPS + IDQ.MIT= E_UOPS + IDQ.MS_UOPS ) )", + "MetricExpr": "IDQ.DSB_UOPS / ((IDQ.DSB_UOPS + LSD.UOPS + IDQ.MITE= _UOPS + IDQ.MS_UOPS))", "MetricGroup": "DSB;Fed;FetchBW", "MetricName": "DSB_Coverage" }, @@ -161,26 +296,26 @@ }, { "BriefDescription": "Measured Average Frequency for unhalted proce= ssors [GHz]", - "MetricExpr": "(CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC= ) * msr@tsc@ / 1000000000 / duration_time", - "MetricGroup": "Summary;Power", + "MetricExpr": "Turbo_Utilization * msr@tsc@ / 1000000000 / duratio= n_time", + "MetricGroup": "Power;Summary", "MetricName": "Average_Frequency" }, { "BriefDescription": "Giga Floating Point Operations Per Second", - "MetricExpr": "( ( 1 * ( FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_CO= MP_OPS_EXE.SSE_SCALAR_DOUBLE ) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 = * ( FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE ) + 8 * S= IMD_FP_256.PACKED_SINGLE ) / 1000000000 ) / duration_time", + "MetricExpr": "((1 * (FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_= OPS_EXE.SSE_SCALAR_DOUBLE) + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (F= P_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP= _256.PACKED_SINGLE) / 1000000000) / duration_time", "MetricGroup": "Cor;Flops;HPC", "MetricName": "GFLOPs", "PublicDescription": "Giga Floating Point Operations Per Second. A= ggregate across all supported options of: FP precisions, scalar and vector = instructions, vector-width and AMX engine." 
}, { "BriefDescription": "Average Frequency Utilization relative nomina= l frequency", - "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC"= , + "MetricExpr": "CLKS / CPU_CLK_UNHALTED.REF_TSC", "MetricGroup": "Power", "MetricName": "Turbo_Utilization" }, { "BriefDescription": "Fraction of cycles where both hardware Logica= l Processors were active", - "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_= UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0", + "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_U= NHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0", "MetricGroup": "SMT", "MetricName": "SMT_2T_Utilization" }, @@ -198,7 +333,7 @@ }, { "BriefDescription": "Average external Memory Bandwidth Use for rea= ds and writes [GB / sec]", - "MetricExpr": "64 * ( arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@ev= ent\\=3D0x84\\,umask\\=3D0x1@ ) / 1000000 / duration_time / 1000", + "MetricExpr": "64 * (arb@event\\=3D0x81\\,umask\\=3D0x1@ + arb@eve= nt\\=3D0x84\\,umask\\=3D0x1@) / 1000000 / duration_time / 1000", "MetricGroup": "HPC;Mem;MemoryBW;SoC", "MetricName": "DRAM_BW_Use" }, --=20 2.37.3.998.g577e59143f-goog