* [PATCH v11 0/6] Improve PMU support
@ 2022-07-27  6:49 Atish Patra
  2022-07-27  6:49 ` [PATCH v11 1/6] target/riscv: Add sscofpmf extension support Atish Patra
                   ` (5 more replies)
  0 siblings, 6 replies; 20+ messages in thread
From: Atish Patra @ 2022-07-27  6:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Atish Patra, Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv

The latest version of the SBI specification includes a Performance Monitoring
Unit (PMU) extension[1] which allows the supervisor to start/stop/configure
various PMU events. The Sscofpmf ('Ss' for Privileged arch and Supervisor-level
extensions, and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
extension[2] allows perf-like tools to handle overflow interrupts and provides
filtering support.

This series implements the remaining PMU infrastructure needed to support the
PMU in the virt machine. The first seven patches from the original series have
already been merged.

This will allow us to add more PMU events in the future.
Currently, this series enables the following PMU events:
1. cycle count
2. instruction count
3. DTLB load/store miss
4. ITLB prefetch miss

The first two are computed using host ticks, while the last three are counted
during cpu_tlb_fill(). Both sampling and counting work from guest userspace.
This series has been tested on both RV64 and RV32. Both the Linux[3] and
OpenSBI[4] patches are required to get perf working.
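
For reference, the overflow extension is exposed to the guest via the
"sscofpmf" cpu property added in patch 1, alongside the pre-existing
"pmu-num" property. A rough example invocation (the kernel/rootfs
arguments below are placeholders, not part of this series):

  qemu-system-riscv64 -M virt -cpu rv64,sscofpmf=true,pmu-num=16 \
      -kernel <Image> -append "root=/dev/vda ro" -nographic ...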

Here is the output of perf stat/record/report while running hackbench with the
latest OpenSBI & Linux kernel.

Perf stat:
==========
[root@fedora-riscv ~]# perf stat -e cycles -e instructions -e dTLB-load-misses -e dTLB-store-misses -e iTLB-load-misses \
> perf bench sched messaging -g 1 -l 10
# Running 'sched/messaging' benchmark:
# 20 sender and receiver processes per group
# 1 groups == 40 processes run

     Total time: 0.265 [sec]

 Performance counter stats for 'perf bench sched messaging -g 1 -l 10':

     4,167,825,362      cycles                                                      
     4,166,609,256      instructions              #    1.00  insn per cycle         
         3,092,026      dTLB-load-misses                                            
           258,280      dTLB-store-misses                                           
         2,068,966      iTLB-load-misses                                            

       0.585791767 seconds time elapsed

       0.373802000 seconds user
       1.042359000 seconds sys

Perf record:
============
[root@fedora-riscv ~]# perf record -e cycles -e instructions \
> -e dTLB-load-misses -e dTLB-store-misses -e iTLB-load-misses -c 10000 \
> perf bench sched messaging -g 1 -l 10
# Running 'sched/messaging' benchmark:
# 20 sender and receiver processes per group
# 1 groups == 40 processes run

     Total time: 1.397 [sec]
[ perf record: Woken up 10 times to write data ]
Check IO/CPU overload!
[ perf record: Captured and wrote 8.211 MB perf.data (214486 samples) ]

[root@fedora-riscv riscv]# perf report
Available samples                                                               
107K cycles                                                                    ◆
107K instructions                                                              ▒
250 dTLB-load-misses                                                           ▒
13 dTLB-store-misses                                                           ▒
172 iTLB-load-misses      
..

Changes from v10->v11:
1. Rebased on top of master, where the first 7 patches were already merged.
2. Removed unnecessary additional check in ctr predicate function.
3. Removed unnecessary priv version checks in mcountinhibit read/write. 
4. Added Heiko's reviewed-by/tested-by tags.

Changes from v8->v9:
1. Added the write_done flags to the vmstate.
2. Fixed the hpmcounter read access from M-mode.

Changes from v7->v8:
1. Removed ordering constraints for mhpmcounter & mhpmevent.

Changes from v6->v7:
1. Fixed all the compilation errors for the usermode.

Changes from v5->v6:
1. Fixed compilation issue with PATCH 1.
2. Addressed other comments.

Changes from v4->v5:
1. Rebased on top of -next with the following patches:
   - isa extension
   - priv 1.12 spec
2. Addressed all the comments on v4
3. Removed additional isa-ext DT node in favor of riscv,isa string update

Changes from v3->v4:
1. Removed the dummy events from pmu DT node.
2. Fixed pmu_avail_counters mask generation.
3. Added a patch to simplify the predicate function for counters. 

Changes from v2->v3:
1. Addressed all the comments on PATCH1-4.
2. Split patch1 into two separate patches.
3. Added explicit comments to explain the event types in DT node.
4. Rebased on latest Qemu.

Changes from v1->v2:
1. Dropped the ACKs from v1 as significant changes happened after v1.
2. sscofpmf support.
3. A generic counter management framework.

[1] https://github.com/riscv-non-isa/riscv-sbi-doc/blob/master/riscv-sbi.adoc
[2] https://drive.google.com/file/d/171j4jFjIkKdj5LWcExphq4xG_2sihbfd/edit
[3] https://github.com/atishp04/qemu/tree/riscv_pmu_v11

Atish Patra (6):
target/riscv: Add sscofpmf extension support
target/riscv: Simplify counter predicate function
target/riscv: Add few cache related PMU events
hw/riscv: virt: Add PMU DT node to the device tree
target/riscv: Update the privilege field for sscofpmf CSRs
target/riscv: Remove additional priv version check for mcountinhibit

hw/riscv/virt.c           |  28 +++
target/riscv/cpu.c        |  12 ++
target/riscv/cpu.h        |  25 +++
target/riscv/cpu_bits.h   |  55 +++++
target/riscv/cpu_helper.c |  25 +++
target/riscv/csr.c        | 314 ++++++++++++++++++-----------
target/riscv/machine.c    |   1 +
target/riscv/pmu.c        | 414 +++++++++++++++++++++++++++++++++++++-
target/riscv/pmu.h        |   8 +
9 files changed, 763 insertions(+), 119 deletions(-)

--
2.25.1




* [PATCH v11 1/6] target/riscv: Add sscofpmf extension support
  2022-07-27  6:49 [PATCH v11 0/6] Improve PMU support Atish Patra
@ 2022-07-27  6:49 ` Atish Patra
  2022-07-27  8:11   ` Weiwei Li
  2022-07-31  4:10   ` Rahul Pathak
  2022-07-27  6:49 ` [PATCH v11 2/6] target/riscv: Simplify counter predicate function Atish Patra
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 20+ messages in thread
From: Atish Patra @ 2022-07-27  6:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Atish Patra, Heiko Stuebner, Atish Patra, Alistair Francis,
	Bin Meng, Palmer Dabbelt, qemu-riscv

The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
extension allows perf to handle overflow interrupts and provides
filtering support. This patch provides a framework for programmable
counters to leverage the extension. As the extension doesn't have any
provision for an overflow bit for the fixed counters, the fixed events
can also be monitored using programmable counters. The underlying
counters for cycle and instruction count are always running. Thus,
a separate timer device is programmed to handle their overflow.
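
The arithmetic behind the timer programming is small enough to sketch
outside of QEMU. The snippet below is illustrative only (not part of the
patch); it assumes the 1 GHz timebase hard-coded in pmu.c
(RISCV_TIMEBASE_FREQ) and ignores the icount and INT64_MAX-clamping paths
that riscv_pmu_setup_timer() handles:

#include <stdint.h>
#include <stdio.h>

#define TIMEBASE_FREQ 1000000000ULL   /* 1 GHz, as in pmu.c */
#define NS_PER_SEC    1000000000ULL

/* Increments left before a 64-bit counter wraps around to zero. */
static uint64_t ticks_to_overflow(uint64_t ctr_val)
{
    return ctr_val ? UINT64_MAX - ctr_val + 1 : UINT64_MAX;
}

int main(void)
{
    /* e.g. perf record -c 10000 programs the counter to -10000 */
    uint64_t ctr_val = UINT64_MAX - 10000 + 1;
    uint64_t ticks = ticks_to_overflow(ctr_val);
    uint64_t ns = ticks * (NS_PER_SEC / TIMEBASE_FREQ);

    /* The timer callback then sets the OF bit and raises LCOFIP. */
    printf("overflow after %llu ticks (%llu ns)\n",
           (unsigned long long)ticks, (unsigned long long)ns);
    return 0;
}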

Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 target/riscv/cpu.c      |  11 ++
 target/riscv/cpu.h      |  25 +++
 target/riscv/cpu_bits.h |  55 +++++++
 target/riscv/csr.c      | 166 ++++++++++++++++++-
 target/riscv/machine.c  |   1 +
 target/riscv/pmu.c      | 357 +++++++++++++++++++++++++++++++++++++++-
 target/riscv/pmu.h      |   7 +
 7 files changed, 611 insertions(+), 11 deletions(-)

diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 1bb3973806d2..c1d62b81a725 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -22,6 +22,7 @@
 #include "qemu/ctype.h"
 #include "qemu/log.h"
 #include "cpu.h"
+#include "pmu.h"
 #include "internals.h"
 #include "exec/exec-all.h"
 #include "qapi/error.h"
@@ -779,6 +780,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
         set_misa(env, env->misa_mxl, ext);
     }
 
+#ifndef CONFIG_USER_ONLY
+    if (cpu->cfg.pmu_num) {
+        if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
+            cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+                                          riscv_pmu_timer_cb, cpu);
+        }
+     }
+#endif
+
     riscv_cpu_register_gdb_regs_for_features(cs);
 
     qemu_init_vcpu(cs);
@@ -883,6 +893,7 @@ static Property riscv_cpu_extensions[] = {
     DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
     DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
     DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
+    DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
     DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
     DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
     DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 5c7acc055ac9..2222db193c3d 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -137,6 +137,8 @@ typedef struct PMUCTRState {
     /* Snapshort value of a counter in RV32 */
     target_ulong mhpmcounterh_prev;
     bool started;
+    /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
+    target_ulong irq_overflow_left;
 } PMUCTRState;
 
 struct CPUArchState {
@@ -297,6 +299,9 @@ struct CPUArchState {
     /* PMU event selector configured values. First three are unused*/
     target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
 
+    /* PMU event selector configured values for RV32*/
+    target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
+
     target_ulong sscratch;
     target_ulong mscratch;
 
@@ -433,6 +438,7 @@ struct RISCVCPUConfig {
     bool ext_zve32f;
     bool ext_zve64f;
     bool ext_zmmul;
+    bool ext_sscofpmf;
     bool rvv_ta_all_1s;
 
     uint32_t mvendorid;
@@ -479,6 +485,12 @@ struct ArchCPU {
 
     /* Configuration Settings */
     RISCVCPUConfig cfg;
+
+    QEMUTimer *pmu_timer;
+    /* A bitmask of Available programmable counters */
+    uint32_t pmu_avail_ctrs;
+    /* Mapping of events to counters */
+    GHashTable *pmu_event_ctr_map;
 };
 
 static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
@@ -738,6 +750,19 @@ enum {
     CSR_TABLE_SIZE = 0x1000
 };
 
+/**
+ * The event id are encoded based on the encoding specified in the
+ * SBI specification v0.3
+ */
+
+enum riscv_pmu_event_idx {
+    RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
+    RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
+    RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
+    RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
+    RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
+};
+
 /* CSR function table */
 extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
 
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index 6be5a9e9f046..b63c586be563 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -382,6 +382,37 @@
 #define CSR_MHPMEVENT29     0x33d
 #define CSR_MHPMEVENT30     0x33e
 #define CSR_MHPMEVENT31     0x33f
+
+#define CSR_MHPMEVENT3H     0x723
+#define CSR_MHPMEVENT4H     0x724
+#define CSR_MHPMEVENT5H     0x725
+#define CSR_MHPMEVENT6H     0x726
+#define CSR_MHPMEVENT7H     0x727
+#define CSR_MHPMEVENT8H     0x728
+#define CSR_MHPMEVENT9H     0x729
+#define CSR_MHPMEVENT10H    0x72a
+#define CSR_MHPMEVENT11H    0x72b
+#define CSR_MHPMEVENT12H    0x72c
+#define CSR_MHPMEVENT13H    0x72d
+#define CSR_MHPMEVENT14H    0x72e
+#define CSR_MHPMEVENT15H    0x72f
+#define CSR_MHPMEVENT16H    0x730
+#define CSR_MHPMEVENT17H    0x731
+#define CSR_MHPMEVENT18H    0x732
+#define CSR_MHPMEVENT19H    0x733
+#define CSR_MHPMEVENT20H    0x734
+#define CSR_MHPMEVENT21H    0x735
+#define CSR_MHPMEVENT22H    0x736
+#define CSR_MHPMEVENT23H    0x737
+#define CSR_MHPMEVENT24H    0x738
+#define CSR_MHPMEVENT25H    0x739
+#define CSR_MHPMEVENT26H    0x73a
+#define CSR_MHPMEVENT27H    0x73b
+#define CSR_MHPMEVENT28H    0x73c
+#define CSR_MHPMEVENT29H    0x73d
+#define CSR_MHPMEVENT30H    0x73e
+#define CSR_MHPMEVENT31H    0x73f
+
 #define CSR_MHPMCOUNTER3H   0xb83
 #define CSR_MHPMCOUNTER4H   0xb84
 #define CSR_MHPMCOUNTER5H   0xb85
@@ -443,6 +474,7 @@
 #define CSR_VSMTE           0x2c0
 #define CSR_VSPMMASK        0x2c1
 #define CSR_VSPMBASE        0x2c2
+#define CSR_SCOUNTOVF       0xda0
 
 /* Crypto Extension */
 #define CSR_SEED            0x015
@@ -620,6 +652,7 @@ typedef enum RISCVException {
 #define IRQ_VS_EXT                         10
 #define IRQ_M_EXT                          11
 #define IRQ_S_GEXT                         12
+#define IRQ_PMU_OVF                        13
 #define IRQ_LOCAL_MAX                      16
 #define IRQ_LOCAL_GUEST_MAX                (TARGET_LONG_BITS - 1)
 
@@ -637,11 +670,13 @@ typedef enum RISCVException {
 #define MIP_VSEIP                          (1 << IRQ_VS_EXT)
 #define MIP_MEIP                           (1 << IRQ_M_EXT)
 #define MIP_SGEIP                          (1 << IRQ_S_GEXT)
+#define MIP_LCOFIP                         (1 << IRQ_PMU_OVF)
 
 /* sip masks */
 #define SIP_SSIP                           MIP_SSIP
 #define SIP_STIP                           MIP_STIP
 #define SIP_SEIP                           MIP_SEIP
+#define SIP_LCOFIP                         MIP_LCOFIP
 
 /* MIE masks */
 #define MIE_SEIE                           (1 << IRQ_S_EXT)
@@ -795,4 +830,24 @@ typedef enum RISCVException {
 #define SEED_OPST_WAIT                   (0b01 << 30)
 #define SEED_OPST_ES16                   (0b10 << 30)
 #define SEED_OPST_DEAD                   (0b11 << 30)
+/* PMU related bits */
+#define MIE_LCOFIE                         (1 << IRQ_PMU_OVF)
+
+#define MHPMEVENT_BIT_OF                   BIT_ULL(63)
+#define MHPMEVENTH_BIT_OF                  BIT(31)
+#define MHPMEVENT_BIT_MINH                 BIT_ULL(62)
+#define MHPMEVENTH_BIT_MINH                BIT(30)
+#define MHPMEVENT_BIT_SINH                 BIT_ULL(61)
+#define MHPMEVENTH_BIT_SINH                BIT(29)
+#define MHPMEVENT_BIT_UINH                 BIT_ULL(60)
+#define MHPMEVENTH_BIT_UINH                BIT(28)
+#define MHPMEVENT_BIT_VSINH                BIT_ULL(59)
+#define MHPMEVENTH_BIT_VSINH               BIT(27)
+#define MHPMEVENT_BIT_VUINH                BIT_ULL(58)
+#define MHPMEVENTH_BIT_VUINH               BIT(26)
+
+#define MHPMEVENT_SSCOF_MASK               _ULL(0xFFFF000000000000)
+#define MHPMEVENT_IDX_MASK                 0xFFFFF
+#define MHPMEVENT_SSCOF_RESVD              16
+
 #endif
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 235f2a011e70..1233bfa0a726 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
     CPUState *cs = env_cpu(env);
     RISCVCPU *cpu = RISCV_CPU(cs);
     int ctr_index;
-    int base_csrno = CSR_HPMCOUNTER3;
+    int base_csrno = CSR_CYCLE;
     bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
 
     if (rv32 && csrno >= CSR_CYCLEH) {
@@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
     }
     ctr_index = csrno - base_csrno;
 
-    if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
+    if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
+        (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
+        goto skip_ext_pmu_check;
+    }
+
+    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
         /* No counter is enabled in PMU or the counter is out of range */
         return RISCV_EXCP_ILLEGAL_INST;
     }
 
+skip_ext_pmu_check:
+
     if (env->priv == PRV_S) {
         switch (csrno) {
         case CSR_CYCLE:
@@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
             }
             break;
         case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
-            ctr_index = csrno - CSR_CYCLE;
             if (!get_field(env->mcounteren, 1 << ctr_index)) {
                 return RISCV_EXCP_ILLEGAL_INST;
             }
@@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
                 }
                 break;
             case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
-                ctr_index = csrno - CSR_CYCLEH;
                 if (!get_field(env->mcounteren, 1 << ctr_index)) {
                     return RISCV_EXCP_ILLEGAL_INST;
                 }
@@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
             }
             break;
         case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
-            ctr_index = csrno - CSR_CYCLE;
             if (!get_field(env->hcounteren, 1 << ctr_index) &&
                  get_field(env->mcounteren, 1 << ctr_index)) {
                 return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
@@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
                 }
                 break;
             case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
-                ctr_index = csrno - CSR_CYCLEH;
                 if (!get_field(env->hcounteren, 1 << ctr_index) &&
                      get_field(env->mcounteren, 1 << ctr_index)) {
                     return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
@@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
     return mctr(env, csrno);
 }
 
+static RISCVException sscofpmf(CPURISCVState *env, int csrno)
+{
+    CPUState *cs = env_cpu(env);
+    RISCVCPU *cpu = RISCV_CPU(cs);
+
+    if (!cpu->cfg.ext_sscofpmf) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException any(CPURISCVState *env, int csrno)
 {
     return RISCV_EXCP_NONE;
@@ -663,9 +678,39 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
 static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
 {
     int evt_index = csrno - CSR_MCOUNTINHIBIT;
+    uint64_t mhpmevt_val = val;
 
     env->mhpmevent_val[evt_index] = val;
 
+    if (riscv_cpu_mxl(env) == MXL_RV32) {
+        mhpmevt_val = mhpmevt_val |
+                      ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
+    }
+    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
+
+    return RISCV_EXCP_NONE;
+}
+
+static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
+
+    *val = env->mhpmeventh_val[evt_index];
+
+    return RISCV_EXCP_NONE;
+}
+
+static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
+{
+    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
+    uint64_t mhpmevth_val = val;
+    uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
+
+    mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
+    env->mhpmeventh_val[evt_index] = val;
+
+    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
+
     return RISCV_EXCP_NONE;
 }
 
@@ -673,12 +718,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
 {
     int ctr_idx = csrno - CSR_MCYCLE;
     PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+    uint64_t mhpmctr_val = val;
 
     counter->mhpmcounter_val = val;
     if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
         riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
         counter->mhpmcounter_prev = get_ticks(false);
-    } else {
+        if (ctr_idx > 2) {
+            if (riscv_cpu_mxl(env) == MXL_RV32) {
+                mhpmctr_val = mhpmctr_val |
+                              ((uint64_t)counter->mhpmcounterh_val << 32);
+            }
+            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
+        }
+     } else {
         /* Other counters can keep incrementing from the given value */
         counter->mhpmcounter_prev = val;
     }
@@ -690,11 +743,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
 {
     int ctr_idx = csrno - CSR_MCYCLEH;
     PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+    uint64_t mhpmctr_val = counter->mhpmcounter_val;
+    uint64_t mhpmctrh_val = val;
 
     counter->mhpmcounterh_val = val;
+    mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
     if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
         riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
         counter->mhpmcounterh_prev = get_ticks(true);
+        if (ctr_idx > 2) {
+            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
+        }
     } else {
         counter->mhpmcounterh_prev = val;
     }
@@ -770,6 +829,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
     return riscv_pmu_read_ctr(env, val, true, ctr_index);
 }
 
+static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
+{
+    int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
+    int i;
+    *val = 0;
+    target_ulong *mhpm_evt_val;
+    uint64_t of_bit_mask;
+
+    if (riscv_cpu_mxl(env) == MXL_RV32) {
+        mhpm_evt_val = env->mhpmeventh_val;
+        of_bit_mask = MHPMEVENTH_BIT_OF;
+    } else {
+        mhpm_evt_val = env->mhpmevent_val;
+        of_bit_mask = MHPMEVENT_BIT_OF;
+    }
+
+    for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
+        if ((get_field(env->mcounteren, BIT(i))) &&
+            (mhpm_evt_val[i] & of_bit_mask)) {
+                    *val |= BIT(i);
+            }
+    }
+
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_time(CPURISCVState *env, int csrno,
                                 target_ulong *val)
 {
@@ -799,7 +884,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
 /* Machine constants */
 
 #define M_MODE_INTERRUPTS  ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
-#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
+#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
+                                      MIP_LCOFIP))
 #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
 #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
 
@@ -840,7 +926,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
 static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
     SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
     SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
-static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
+static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
+                                              SIP_LCOFIP;
 static const target_ulong hip_writable_mask = MIP_VSSIP;
 static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
 static const target_ulong vsip_writable_mask = MIP_VSSIP;
@@ -3861,6 +3948,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
     [CSR_MHPMEVENT31]    = { "mhpmevent31",    any,    read_mhpmevent,
                                                        write_mhpmevent },
 
+    [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT10H]   = { "mhpmevent10h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT11H]   = { "mhpmevent11h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT12H]   = { "mhpmevent12h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT13H]   = { "mhpmevent13h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT14H]   = { "mhpmevent14h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT15H]   = { "mhpmevent15h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT16H]   = { "mhpmevent16h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT17H]   = { "mhpmevent17h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT18H]   = { "mhpmevent18h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT19H]   = { "mhpmevent19h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT20H]   = { "mhpmevent20h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT21H]   = { "mhpmevent21h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT22H]   = { "mhpmevent22h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT23H]   = { "mhpmevent23h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT24H]   = { "mhpmevent24h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT25H]   = { "mhpmevent25h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT26H]   = { "mhpmevent26h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT27H]   = { "mhpmevent27h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT28H]   = { "mhpmevent28h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT29H]   = { "mhpmevent29h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT30H]   = { "mhpmevent30h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+    [CSR_MHPMEVENT31H]   = { "mhpmevent31h",    sscofpmf,  read_mhpmeventh,
+                                                       write_mhpmeventh},
+
     [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,  read_hpmcounterh },
     [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,  read_hpmcounterh },
     [CSR_HPMCOUNTER5H]   = { "hpmcounter5h",   ctr32,  read_hpmcounterh },
@@ -3949,5 +4095,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
                                                        write_mhpmcounterh },
     [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32,  read_hpmcounterh,
                                                        write_mhpmcounterh },
+    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
+
 #endif /* !CONFIG_USER_ONLY */
 };
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index dc182ca81119..33ef9b8e9908 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
         VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
                              vmstate_pmu_ctr_state, PMUCTRState),
         VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
+        VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
         VMSTATE_UINTTL(env.sscratch, RISCVCPU),
         VMSTATE_UINTTL(env.mscratch, RISCVCPU),
         VMSTATE_UINT64(env.mfromhost, RISCVCPU),
diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
index 000fe8da45ef..34096941c0ce 100644
--- a/target/riscv/pmu.c
+++ b/target/riscv/pmu.c
@@ -19,14 +19,367 @@
 #include "qemu/osdep.h"
 #include "cpu.h"
 #include "pmu.h"
+#include "sysemu/cpu-timers.h"
+
+#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
+#define MAKE_32BIT_MASK(shift, length) \
+        (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
+
+static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
+{
+    if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
+        !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
+        return false;
+    } else {
+        return true;
+    }
+}
+
+static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
+{
+    CPURISCVState *env = &cpu->env;
+
+    if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
+        !get_field(env->mcountinhibit, BIT(ctr_idx))) {
+        return true;
+    } else {
+        return false;
+    }
+}
+
+static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
+{
+    CPURISCVState *env = &cpu->env;
+    target_ulong max_val = UINT32_MAX;
+    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+    bool virt_on = riscv_cpu_virt_enabled(env);
+
+    /* Privilege mode filtering */
+    if ((env->priv == PRV_M &&
+        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
+        (env->priv == PRV_S && virt_on &&
+        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
+        (env->priv == PRV_U && virt_on &&
+        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
+        (env->priv == PRV_S && !virt_on &&
+        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
+        (env->priv == PRV_U && !virt_on &&
+        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
+        return 0;
+    }
+
+    /* Handle the overflow scenario */
+    if (counter->mhpmcounter_val == max_val) {
+        if (counter->mhpmcounterh_val == max_val) {
+            counter->mhpmcounter_val = 0;
+            counter->mhpmcounterh_val = 0;
+            /* Generate interrupt only if OF bit is clear */
+            if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
+                env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
+                riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
+            }
+        } else {
+            counter->mhpmcounterh_val++;
+        }
+    } else {
+        counter->mhpmcounter_val++;
+    }
+
+    return 0;
+}
+
+static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
+{
+    CPURISCVState *env = &cpu->env;
+    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+    uint64_t max_val = UINT64_MAX;
+    bool virt_on = riscv_cpu_virt_enabled(env);
+
+    /* Privilege mode filtering */
+    if ((env->priv == PRV_M &&
+        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
+        (env->priv == PRV_S && virt_on &&
+        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
+        (env->priv == PRV_U && virt_on &&
+        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
+        (env->priv == PRV_S && !virt_on &&
+        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
+        (env->priv == PRV_U && !virt_on &&
+        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
+        return 0;
+    }
+
+    /* Handle the overflow scenario */
+    if (counter->mhpmcounter_val == max_val) {
+        counter->mhpmcounter_val = 0;
+        /* Generate interrupt only if OF bit is clear */
+        if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
+            env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
+            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
+        }
+    } else {
+        counter->mhpmcounter_val++;
+    }
+    return 0;
+}
+
+int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
+{
+    uint32_t ctr_idx;
+    int ret;
+    CPURISCVState *env = &cpu->env;
+    gpointer value;
+
+    value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
+                                GUINT_TO_POINTER(event_idx));
+    if (!value) {
+        return -1;
+    }
+
+    ctr_idx = GPOINTER_TO_UINT(value);
+    if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
+        get_field(env->mcountinhibit, BIT(ctr_idx))) {
+        return -1;
+    }
+
+    if (riscv_cpu_mxl(env) == MXL_RV32) {
+        ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
+    } else {
+        ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
+    }
+
+    return ret;
+}
 
 bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
                                         uint32_t target_ctr)
 {
-    return (target_ctr == 0) ? true : false;
+    RISCVCPU *cpu;
+    uint32_t event_idx;
+    uint32_t ctr_idx;
+
+    /* Fixed instret counter */
+    if (target_ctr == 2) {
+        return true;
+    }
+
+    cpu = RISCV_CPU(env_cpu(env));
+    event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
+    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
+                               GUINT_TO_POINTER(event_idx)));
+    if (!ctr_idx) {
+        return false;
+    }
+
+    return target_ctr == ctr_idx ? true : false;
 }
 
 bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
 {
-    return (target_ctr == 2) ? true : false;
+    RISCVCPU *cpu;
+    uint32_t event_idx;
+    uint32_t ctr_idx;
+
+    /* Fixed mcycle counter */
+    if (target_ctr == 0) {
+        return true;
+    }
+
+    cpu = RISCV_CPU(env_cpu(env));
+    event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
+    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
+                               GUINT_TO_POINTER(event_idx)));
+
+    /* Counter zero is not used for event_ctr_map */
+    if (!ctr_idx) {
+        return false;
+    }
+
+    return (target_ctr == ctr_idx) ? true : false;
+}
+
+static gboolean pmu_remove_event_map(gpointer key, gpointer value,
+                                     gpointer udata)
+{
+    return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
+}
+
+static int64_t pmu_icount_ticks_to_ns(int64_t value)
+{
+    int64_t ret = 0;
+
+    if (icount_enabled()) {
+        ret = icount_to_ns(value);
+    } else {
+        ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
+    }
+
+    return ret;
+}
+
+int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
+                               uint32_t ctr_idx)
+{
+    uint32_t event_idx;
+    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
+
+    if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
+        return -1;
+    }
+
+    /**
+     * Expected mhpmevent value is zero for reset case. Remove the current
+     * mapping.
+     */
+    if (!value) {
+        g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
+                                    pmu_remove_event_map,
+                                    GUINT_TO_POINTER(ctr_idx));
+        return 0;
+    }
+
+    event_idx = value & MHPMEVENT_IDX_MASK;
+    if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
+                            GUINT_TO_POINTER(event_idx))) {
+        return 0;
+    }
+
+    switch (event_idx) {
+    case RISCV_PMU_EVENT_HW_CPU_CYCLES:
+    case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
+    case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
+    case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
+    case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
+        break;
+    default:
+        /* We don't support any raw events right now */
+        return -1;
+    }
+    g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
+                        GUINT_TO_POINTER(ctr_idx));
+
+    return 0;
+}
+
+static void pmu_timer_trigger_irq(RISCVCPU *cpu,
+                                  enum riscv_pmu_event_idx evt_idx)
+{
+    uint32_t ctr_idx;
+    CPURISCVState *env = &cpu->env;
+    PMUCTRState *counter;
+    target_ulong *mhpmevent_val;
+    uint64_t of_bit_mask;
+    int64_t irq_trigger_at;
+
+    if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
+        evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
+        return;
+    }
+
+    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
+                               GUINT_TO_POINTER(evt_idx)));
+    if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
+        return;
+    }
+
+    if (riscv_cpu_mxl(env) == MXL_RV32) {
+        mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
+        of_bit_mask = MHPMEVENTH_BIT_OF;
+     } else {
+        mhpmevent_val = &env->mhpmevent_val[ctr_idx];
+        of_bit_mask = MHPMEVENT_BIT_OF;
+    }
+
+    counter = &env->pmu_ctrs[ctr_idx];
+    if (counter->irq_overflow_left > 0) {
+        irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
+                        counter->irq_overflow_left;
+        timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
+        counter->irq_overflow_left = 0;
+        return;
+    }
+
+    if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
+        /* Generate interrupt only if OF bit is clear */
+        if (!(*mhpmevent_val & of_bit_mask)) {
+            *mhpmevent_val |= of_bit_mask;
+            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
+        }
+    }
+}
+
+/* Timer callback for instret and cycle counter overflow */
+void riscv_pmu_timer_cb(void *priv)
+{
+    RISCVCPU *cpu = priv;
+
+    /* Timer event was triggered only for these events */
+    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
+    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
+}
+
+int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
+{
+    uint64_t overflow_delta, overflow_at;
+    int64_t overflow_ns, overflow_left = 0;
+    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
+    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+
+    if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
+        return -1;
+    }
+
+    if (value) {
+        overflow_delta = UINT64_MAX - value + 1;
+    } else {
+        overflow_delta = UINT64_MAX;
+    }
+
+    /**
+     * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
+     * Compute the leftover and save it so that it can be reprogrammed again
+     * when timer expires.
+     */
+    if (overflow_delta > INT64_MAX) {
+        overflow_left = overflow_delta - INT64_MAX;
+    }
+
+    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
+        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
+        overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
+        overflow_left = pmu_icount_ticks_to_ns(overflow_left) ;
+    } else {
+        return -1;
+    }
+    overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
+
+    if (overflow_at > INT64_MAX) {
+        overflow_left += overflow_at - INT64_MAX;
+        counter->irq_overflow_left = overflow_left;
+        overflow_at = INT64_MAX;
+    }
+    timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
+
+    return 0;
+}
+
+
+int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
+{
+    if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
+        return -1;
+    }
+
+    cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
+    if (!cpu->pmu_event_ctr_map) {
+        /* PMU support can not be enabled */
+        qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
+        cpu->cfg.pmu_num = 0;
+        return -1;
+    }
+
+    /* Create a bitmask of available programmable counters */
+    cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
+
+    return 0;
 }
diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
index 58a5bc3a4089..036653627f78 100644
--- a/target/riscv/pmu.h
+++ b/target/riscv/pmu.h
@@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
                                         uint32_t target_ctr);
 bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
                                   uint32_t target_ctr);
+void riscv_pmu_timer_cb(void *priv);
+int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
+int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
+                               uint32_t ctr_idx);
+int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
+int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
+                          uint32_t ctr_idx);
-- 
2.25.1




* [PATCH v11 2/6] target/riscv: Simplify counter predicate function
  2022-07-27  6:49 [PATCH v11 0/6] Improve PMU support Atish Patra
  2022-07-27  6:49 ` [PATCH v11 1/6] target/riscv: Add sscofpmf extension support Atish Patra
@ 2022-07-27  6:49 ` Atish Patra
  2022-07-27  8:34   ` Weiwei Li
  2022-07-27  6:49 ` [PATCH v11 3/6] target/riscv: Add few cache related PMU events Atish Patra
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Atish Patra @ 2022-07-27  6:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Atish Patra, Bin Meng, Alistair Francis, Bin Meng,
	Palmer Dabbelt, qemu-riscv

All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
as a unified counter. Thus, the predicate function doesn't need to handle
each case separately.

Simplify the predicate function so that only the RV32/RV64 and S/HS-mode
differences are handled explicitly.
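
As a worked example (illustrative only, derived from the CSR numbering):

  csrno     = CSR_HPMCOUNTER5        /* 0xc05 */
  ctr_index = csrno - CSR_CYCLE      /* 0xc05 - 0xc00 = 5 */
  ctr_mask  = BIT(5)

so an S-mode access needs mcounteren bit 5, a U-mode access needs
scounteren bit 5, and a virtualized (VS/VU) access additionally needs
hcounteren bit 5 before it can succeed.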

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 target/riscv/csr.c | 112 +++++----------------------------------------
 1 file changed, 11 insertions(+), 101 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 1233bfa0a726..57dbbf9b09a0 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
     CPUState *cs = env_cpu(env);
     RISCVCPU *cpu = RISCV_CPU(cs);
     int ctr_index;
+    target_ulong ctr_mask;
     int base_csrno = CSR_CYCLE;
     bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
 
@@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
         base_csrno += 0x80;
     }
     ctr_index = csrno - base_csrno;
+    ctr_mask = BIT(ctr_index);
 
     if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
         (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
         goto skip_ext_pmu_check;
     }
 
-    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
+    if (!(cpu->pmu_avail_ctrs & ctr_mask)) {
         /* No counter is enabled in PMU or the counter is out of range */
         return RISCV_EXCP_ILLEGAL_INST;
     }
 
 skip_ext_pmu_check:
 
-    if (env->priv == PRV_S) {
-        switch (csrno) {
-        case CSR_CYCLE:
-            if (!get_field(env->mcounteren, COUNTEREN_CY)) {
-                return RISCV_EXCP_ILLEGAL_INST;
-            }
-            break;
-        case CSR_TIME:
-            if (!get_field(env->mcounteren, COUNTEREN_TM)) {
-                return RISCV_EXCP_ILLEGAL_INST;
-            }
-            break;
-        case CSR_INSTRET:
-            if (!get_field(env->mcounteren, COUNTEREN_IR)) {
-                return RISCV_EXCP_ILLEGAL_INST;
-            }
-            break;
-        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
-            if (!get_field(env->mcounteren, 1 << ctr_index)) {
-                return RISCV_EXCP_ILLEGAL_INST;
-            }
-            break;
-        }
-        if (rv32) {
-            switch (csrno) {
-            case CSR_CYCLEH:
-                if (!get_field(env->mcounteren, COUNTEREN_CY)) {
-                    return RISCV_EXCP_ILLEGAL_INST;
-                }
-                break;
-            case CSR_TIMEH:
-                if (!get_field(env->mcounteren, COUNTEREN_TM)) {
-                    return RISCV_EXCP_ILLEGAL_INST;
-                }
-                break;
-            case CSR_INSTRETH:
-                if (!get_field(env->mcounteren, COUNTEREN_IR)) {
-                    return RISCV_EXCP_ILLEGAL_INST;
-                }
-                break;
-            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
-                if (!get_field(env->mcounteren, 1 << ctr_index)) {
-                    return RISCV_EXCP_ILLEGAL_INST;
-                }
-                break;
-            }
-        }
+    if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
+       ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
+        return RISCV_EXCP_ILLEGAL_INST;
     }
 
     if (riscv_cpu_virt_enabled(env)) {
-        switch (csrno) {
-        case CSR_CYCLE:
-            if (!get_field(env->hcounteren, COUNTEREN_CY) &&
-                get_field(env->mcounteren, COUNTEREN_CY)) {
-                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
-            }
-            break;
-        case CSR_TIME:
-            if (!get_field(env->hcounteren, COUNTEREN_TM) &&
-                get_field(env->mcounteren, COUNTEREN_TM)) {
-                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
-            }
-            break;
-        case CSR_INSTRET:
-            if (!get_field(env->hcounteren, COUNTEREN_IR) &&
-                get_field(env->mcounteren, COUNTEREN_IR)) {
-                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
-            }
-            break;
-        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
-            if (!get_field(env->hcounteren, 1 << ctr_index) &&
-                 get_field(env->mcounteren, 1 << ctr_index)) {
-                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
-            }
-            break;
-        }
-        if (rv32) {
-            switch (csrno) {
-            case CSR_CYCLEH:
-                if (!get_field(env->hcounteren, COUNTEREN_CY) &&
-                    get_field(env->mcounteren, COUNTEREN_CY)) {
-                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
-                }
-                break;
-            case CSR_TIMEH:
-                if (!get_field(env->hcounteren, COUNTEREN_TM) &&
-                    get_field(env->mcounteren, COUNTEREN_TM)) {
-                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
-                }
-                break;
-            case CSR_INSTRETH:
-                if (!get_field(env->hcounteren, COUNTEREN_IR) &&
-                    get_field(env->mcounteren, COUNTEREN_IR)) {
-                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
-                }
-                break;
-            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
-                if (!get_field(env->hcounteren, 1 << ctr_index) &&
-                     get_field(env->mcounteren, 1 << ctr_index)) {
-                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
-                }
-                break;
-            }
+        if (!get_field(env->mcounteren, ctr_mask)) {
+            /* The bit must be set in mcounteren for HS-mode access */
+            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+        } else if (!get_field(env->hcounteren, ctr_mask)) {
+            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
         }
     }
 #endif
-- 
2.25.1




* [PATCH v11 3/6] target/riscv: Add few cache related PMU events
  2022-07-27  6:49 [PATCH v11 0/6] Improve PMU support Atish Patra
  2022-07-27  6:49 ` [PATCH v11 1/6] target/riscv: Add sscofpmf extension support Atish Patra
  2022-07-27  6:49 ` [PATCH v11 2/6] target/riscv: Simplify counter predicate function Atish Patra
@ 2022-07-27  6:49 ` Atish Patra
  2022-07-27  6:49 ` [PATCH v11 4/6] hw/riscv: virt: Add PMU DT node to the device tree Atish Patra
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 20+ messages in thread
From: Atish Patra @ 2022-07-27  6:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Atish Patra, Alistair Francis, Heiko Stuebner, Atish Patra,
	Bin Meng, Palmer Dabbelt, qemu-riscv

From: Atish Patra <atish.patra@wdc.com>

QEMU can monitor the following cache-related PMU events through the
tlb_fill functions:

1. DTLB load/store miss
2. ITLB prefetch miss

Increment the corresponding PMU counter in the tlb_fill function.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 target/riscv/cpu_helper.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 59b3680b1b23..37afb4e0cc72 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -21,10 +21,12 @@
 #include "qemu/log.h"
 #include "qemu/main-loop.h"
 #include "cpu.h"
+#include "pmu.h"
 #include "exec/exec-all.h"
 #include "tcg/tcg-op.h"
 #include "trace.h"
 #include "semihosting/common-semi.h"
+#include "cpu_bits.h"
 
 int riscv_cpu_mmu_index(CPURISCVState *env, bool ifetch)
 {
@@ -1176,6 +1178,28 @@ void riscv_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
     cpu_loop_exit_restore(cs, retaddr);
 }
 
+
+static void pmu_tlb_fill_incr_ctr(RISCVCPU *cpu, MMUAccessType access_type)
+{
+    enum riscv_pmu_event_idx pmu_event_type;
+
+    switch (access_type) {
+    case MMU_INST_FETCH:
+        pmu_event_type = RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS;
+        break;
+    case MMU_DATA_LOAD:
+        pmu_event_type = RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS;
+        break;
+    case MMU_DATA_STORE:
+        pmu_event_type = RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS;
+        break;
+    default:
+        return;
+    }
+
+    riscv_pmu_incr_ctr(cpu, pmu_event_type);
+}
+
 bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                         MMUAccessType access_type, int mmu_idx,
                         bool probe, uintptr_t retaddr)
@@ -1272,6 +1296,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
             }
         }
     } else {
+        pmu_tlb_fill_incr_ctr(cpu, access_type);
         /* Single stage lookup */
         ret = get_physical_address(env, &pa, &prot, address, NULL,
                                    access_type, mmu_idx, true, false, false);
-- 
2.25.1




* [PATCH v11 4/6] hw/riscv: virt: Add PMU DT node to the device tree
  2022-07-27  6:49 [PATCH v11 0/6] Improve PMU support Atish Patra
                   ` (2 preceding siblings ...)
  2022-07-27  6:49 ` [PATCH v11 3/6] target/riscv: Add few cache related PMU events Atish Patra
@ 2022-07-27  6:49 ` Atish Patra
  2022-07-27  7:27   ` Sunil V L
  2022-07-27  6:49 ` [PATCH v11 5/6] target/riscv: Update the privilege field for sscofpmf CSRs Atish Patra
  2022-07-27  6:49 ` [PATCH v11 6/6] target/riscv: Remove additional priv version check for mcountinhibit Atish Patra
  5 siblings, 1 reply; 20+ messages in thread
From: Atish Patra @ 2022-07-27  6:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Atish Patra, Alistair Francis, Atish Patra, Bin Meng,
	Palmer Dabbelt, qemu-riscv

The QEMU virt machine can support a few cache events and the cycle/instret
counters. It also supports counter overflow for these events.

Add a DT node so that OpenSBI and the Linux kernel are aware of the virt
machine's capabilities. There are some dummy nodes added for testing as well.
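
With the default "pmu-num=16", the generated node should decompile to
roughly the following (shown for illustration only; the property is
emitted as one flat cell array with a few trailing zero entries, and the
counter masks depend on the configured number of counters):

  pmu {
          compatible = "riscv,pmu";
          riscv,event-to-mhpmcounters =
                  <0x00000001 0x00000001 0x0007fff9>, /* HW_CPU_CYCLES */
                  <0x00000002 0x00000002 0x0007fffc>, /* HW_INSTRUCTIONS */
                  <0x00010019 0x00010019 0x0007fff8>, /* DTLB read miss */
                  <0x0001001b 0x0001001b 0x0007fff8>, /* DTLB write miss */
                  <0x00010021 0x00010021 0x0007fff8>; /* ITLB prefetch miss */
  };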

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 hw/riscv/virt.c    | 28 +++++++++++++++++++++++
 target/riscv/cpu.c |  1 +
 target/riscv/pmu.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++
 target/riscv/pmu.h |  1 +
 4 files changed, 87 insertions(+)

diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index bc424dd2f523..0f3fdb4908b8 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -29,6 +29,7 @@
 #include "hw/char/serial.h"
 #include "target/riscv/cpu.h"
 #include "hw/core/sysbus-fdt.h"
+#include "target/riscv/pmu.h"
 #include "hw/riscv/riscv_hart.h"
 #include "hw/riscv/virt.h"
 #include "hw/riscv/boot.h"
@@ -714,6 +715,32 @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
     aplic_phandles[socket] = aplic_s_phandle;
 }
 
+static void create_fdt_socket_pmu(RISCVVirtState *s,
+                                  int socket, uint32_t *phandle,
+                                  uint32_t *intc_phandles)
+{
+    int cpu;
+    char *pmu_name;
+    uint32_t *pmu_cells;
+    MachineState *mc = MACHINE(s);
+    RISCVCPU hart = s->soc[socket].harts[0];
+
+    pmu_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
+
+    for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
+        pmu_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
+        pmu_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_PMU_OVF);
+    }
+
+    pmu_name = g_strdup_printf("/soc/pmu");
+    qemu_fdt_add_subnode(mc->fdt, pmu_name);
+    qemu_fdt_setprop_string(mc->fdt, pmu_name, "compatible", "riscv,pmu");
+    riscv_pmu_generate_fdt_node(mc->fdt, hart.cfg.pmu_num, pmu_name);
+
+    g_free(pmu_name);
+    g_free(pmu_cells);
+}
+
 static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
                                bool is_32_bit, uint32_t *phandle,
                                uint32_t *irq_mmio_phandle,
@@ -759,6 +786,7 @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
                     &intc_phandles[phandle_pos]);
             }
         }
+        create_fdt_socket_pmu(s, socket, phandle, intc_phandles);
     }
 
     if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index c1d62b81a725..5c8417a56e5b 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -1114,6 +1114,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
         ISA_EDATA_ENTRY(zve64f, ext_zve64f),
         ISA_EDATA_ENTRY(zhinx, ext_zhinx),
         ISA_EDATA_ENTRY(zhinxmin, ext_zhinxmin),
+        ISA_EDATA_ENTRY(sscofpmf, ext_sscofpmf),
         ISA_EDATA_ENTRY(svinval, ext_svinval),
         ISA_EDATA_ENTRY(svnapot, ext_svnapot),
         ISA_EDATA_ENTRY(svpbmt, ext_svpbmt),
diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
index 34096941c0ce..59feb3c243dd 100644
--- a/target/riscv/pmu.c
+++ b/target/riscv/pmu.c
@@ -20,11 +20,68 @@
 #include "cpu.h"
 #include "pmu.h"
 #include "sysemu/cpu-timers.h"
+#include "sysemu/device_tree.h"
 
 #define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
 #define MAKE_32BIT_MASK(shift, length) \
         (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
 
+/**
+ * To keep it simple, any event can be mapped to any programmable counters in
+ * QEMU. The generic cycle & instruction count events can also be monitored
+ * using programmable counters. In that case, mcycle & minstret must continue
+ * to provide the correct value as well. A heterogeneous PMU per hart is not
+ * supported yet. Thus, the number of counters is the same across all harts.
+ */
+void riscv_pmu_generate_fdt_node(void *fdt, int num_ctrs, char *pmu_name)
+{
+    uint32_t fdt_event_ctr_map[20] = {};
+    uint32_t cmask;
+
+    /* All the programmable counters can map to any event */
+    cmask = MAKE_32BIT_MASK(3, num_ctrs);
+
+   /**
+    * The event encoding is specified in the SBI specification
+    * Event idx is a 20bits wide number encoded as follows:
+    * event_idx[19:16] = type
+    * event_idx[15:0] = code
+    * The code field in cache events are encoded as follows:
+    * event_idx.code[15:3] = cache_id
+    * event_idx.code[2:1] = op_id
+    * event_idx.code[0:0] = result_id
+    */
+
+   /* SBI_PMU_HW_CPU_CYCLES: 0x01 : type(0x00) */
+   fdt_event_ctr_map[0] = cpu_to_be32(0x00000001);
+   fdt_event_ctr_map[1] = cpu_to_be32(0x00000001);
+   fdt_event_ctr_map[2] = cpu_to_be32(cmask | 1 << 0);
+
+   /* SBI_PMU_HW_INSTRUCTIONS: 0x02 : type(0x00) */
+   fdt_event_ctr_map[3] = cpu_to_be32(0x00000002);
+   fdt_event_ctr_map[4] = cpu_to_be32(0x00000002);
+   fdt_event_ctr_map[5] = cpu_to_be32(cmask | 1 << 2);
+
+   /* SBI_PMU_HW_CACHE_DTLB : 0x03 READ : 0x00 MISS : 0x00 type(0x01) */
+   fdt_event_ctr_map[6] = cpu_to_be32(0x00010019);
+   fdt_event_ctr_map[7] = cpu_to_be32(0x00010019);
+   fdt_event_ctr_map[8] = cpu_to_be32(cmask);
+
+   /* SBI_PMU_HW_CACHE_DTLB : 0x03 WRITE : 0x01 MISS : 0x00 type(0x01) */
+   fdt_event_ctr_map[9] = cpu_to_be32(0x0001001B);
+   fdt_event_ctr_map[10] = cpu_to_be32(0x0001001B);
+   fdt_event_ctr_map[11] = cpu_to_be32(cmask);
+
+   /* SBI_PMU_HW_CACHE_ITLB : 0x04 READ : 0x00 MISS : 0x00 type(0x01) */
+   fdt_event_ctr_map[12] = cpu_to_be32(0x00010021);
+   fdt_event_ctr_map[13] = cpu_to_be32(0x00010021);
+   fdt_event_ctr_map[14] = cpu_to_be32(cmask);
+
+   /* This a OpenSBI specific DT property documented in OpenSBI docs */
+   qemu_fdt_setprop(fdt, pmu_name, "riscv,event-to-mhpmcounters",
+                    fdt_event_ctr_map, sizeof(fdt_event_ctr_map));
+}
+
 static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
 {
     if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
index 036653627f78..3004ce37b636 100644
--- a/target/riscv/pmu.h
+++ b/target/riscv/pmu.h
@@ -31,5 +31,6 @@ int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
 int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
                                uint32_t ctr_idx);
 int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
+void riscv_pmu_generate_fdt_node(void *fdt, int num_counters, char *pmu_name);
 int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
                           uint32_t ctr_idx);
-- 
2.25.1
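
[Editorial aside, not part of the patch: the "riscv,event-to-mhpmcounters" array above appears to be read by OpenSBI as triplets of <event_idx_start, event_idx_end, counter_mask>, and cmask = MAKE_32BIT_MASK(3, num_ctrs) sets bits 3..(3 + num_ctrs - 1). The following minimal, self-contained sketch only illustrates the event_idx encoding described in the riscv_pmu_generate_fdt_node() comment; the helper names are invented for the example.]

    /* Illustrative sketch only -- decodes a 20-bit SBI PMU event_idx into
     * the fields described in the riscv_pmu_generate_fdt_node() comment,
     * and prints the counter mask produced by MAKE_32BIT_MASK(3, 16). */
    #include <stdint.h>
    #include <stdio.h>

    #define MAKE_32BIT_MASK(shift, length) \
            (((uint32_t)(~0UL) >> (32 - (length))) << (shift))

    static void decode_sbi_event_idx(uint32_t event_idx)
    {
        uint32_t type = (event_idx >> 16) & 0xf;       /* event_idx[19:16] */
        uint32_t code = event_idx & 0xffff;            /* event_idx[15:0]  */

        if (type == 0x1) {                             /* HW cache event */
            uint32_t cache_id  = (code >> 3) & 0x1fff; /* code[15:3] */
            uint32_t op_id     = (code >> 1) & 0x3;    /* code[2:1]  */
            uint32_t result_id = code & 0x1;           /* code[0]    */
            printf("cache event: cache_id=%u op_id=%u result_id=%u\n",
                   cache_id, op_id, result_id);
        } else {
            printf("type=%u code=0x%x\n", type, code);
        }
    }

    int main(void)
    {
        /* 0x10019 from the table above (the DTLB read miss entry):
         * prints cache_id=3 op_id=0 result_id=1. */
        decode_sbi_event_idx(0x10019);

        /* Counter mask for the default pmu-num of 16: bits 3..18, 0x7fff8. */
        printf("cmask=0x%x\n", MAKE_32BIT_MASK(3, 16));
        return 0;
    }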



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v11 5/6] target/riscv: Update the privilege field for sscofpmf CSRs
  2022-07-27  6:49 [PATCH v11 0/6] Improve PMU support Atish Patra
                   ` (3 preceding siblings ...)
  2022-07-27  6:49 ` [PATCH v11 4/6] hw/riscv: virt: Add PMU DT node to the device tree Atish Patra
@ 2022-07-27  6:49 ` Atish Patra
  2022-07-27  8:26   ` Weiwei Li
  2022-07-27  6:49 ` [PATCH v11 6/6] target/riscv: Remove additional priv version check for mcountinhibit Atish Patra
  5 siblings, 1 reply; 20+ messages in thread
From: Atish Patra @ 2022-07-27  6:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Atish Patra, Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv

The sscofpmf extension was ratified as part of the priv spec v1.12.
Mark the csr_ops entries accordingly.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 target/riscv/csr.c | 90 ++++++++++++++++++++++++++++++----------------
 1 file changed, 60 insertions(+), 30 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 57dbbf9b09a0..ec6d7f022ad5 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -3859,63 +3859,92 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
                                                        write_mhpmevent },
 
     [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                          write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                          write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                          write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                          write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                          write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                          write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                          write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT10H]   = { "mhpmevent10h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT11H]   = { "mhpmevent11h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT12H]   = { "mhpmevent12h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT13H]   = { "mhpmevent13h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT14H]   = { "mhpmevent14h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT15H]   = { "mhpmevent15h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT16H]   = { "mhpmevent16h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT17H]   = { "mhpmevent17h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT18H]   = { "mhpmevent18h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT19H]   = { "mhpmevent19h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT20H]   = { "mhpmevent20h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT21H]   = { "mhpmevent21h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT22H]   = { "mhpmevent22h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT23H]   = { "mhpmevent23h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT24H]   = { "mhpmevent24h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT25H]   = { "mhpmevent25h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT26H]   = { "mhpmevent26h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT27H]   = { "mhpmevent27h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT28H]   = { "mhpmevent28h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT29H]   = { "mhpmevent29h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT30H]   = { "mhpmevent30h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
     [CSR_MHPMEVENT31H]   = { "mhpmevent31h",    sscofpmf,  read_mhpmeventh,
-                                                       write_mhpmeventh},
+                                                           write_mhpmeventh,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
 
     [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,  read_hpmcounterh },
     [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,  read_hpmcounterh },
@@ -4005,7 +4034,8 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
                                                        write_mhpmcounterh },
     [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32,  read_hpmcounterh,
                                                        write_mhpmcounterh },
-    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
+    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf,
+                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
 
 #endif /* !CONFIG_USER_ONLY */
 };
-- 
2.25.1
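
[Editorial aside: the practical effect of tagging these entries with .min_priv_ver is that the common CSR dispatch path can reject the access based on the CPU's implemented privileged-spec version before the per-CSR handlers run, which is what lets the next patch drop the open-coded checks. The snippet below is a rough sketch of that kind of gate, not the actual QEMU dispatch code; the type and function names are illustrative only.]

    /* Illustrative sketch, not the real QEMU dispatcher: a csr_ops-style
     * entry carries the minimum privileged-spec version, and one common
     * check rejects accesses on CPUs implementing an older spec. */
    typedef struct {
        const char *name;
        int min_priv_ver;                 /* e.g. PRIV_VERSION_1_12_0 */
        int (*read)(void *env, int csrno, unsigned long *val);
        int (*write)(void *env, int csrno, unsigned long val);
    } csr_entry_sketch;

    static int csr_version_ok(int cpu_priv_ver, const csr_entry_sketch *e)
    {
        /* With this gate in the common path, handlers such as
         * read_mcountinhibit no longer need their own version checks. */
        return cpu_priv_ver >= e->min_priv_ver;
    }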



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v11 6/6] target/riscv: Remove additional priv version check for mcountinhibit
  2022-07-27  6:49 [PATCH v11 0/6] Improve PMU support Atish Patra
                   ` (4 preceding siblings ...)
  2022-07-27  6:49 ` [PATCH v11 5/6] target/riscv: Update the privilege field for sscofpmf CSRs Atish Patra
@ 2022-07-27  6:49 ` Atish Patra
  2022-07-27  8:22   ` Weiwei Li
  5 siblings, 1 reply; 20+ messages in thread
From: Atish Patra @ 2022-07-27  6:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: Atish Patra, Heiko Stuebner, Alistair Francis, Bin Meng,
	Palmer Dabbelt, qemu-riscv

With .min_priv_ver in place, the additional priv version check is unnecessary
for the mcountinhibit read/write functions.

Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 target/riscv/csr.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index ec6d7f022ad5..eac003d6b950 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -1484,10 +1484,6 @@ static RISCVException write_mtvec(CPURISCVState *env, int csrno,
 static RISCVException read_mcountinhibit(CPURISCVState *env, int csrno,
                                          target_ulong *val)
 {
-    if (env->priv_ver < PRIV_VERSION_1_11_0) {
-        return RISCV_EXCP_ILLEGAL_INST;
-    }
-
     *val = env->mcountinhibit;
     return RISCV_EXCP_NONE;
 }
@@ -1498,10 +1494,6 @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
     int cidx;
     PMUCTRState *counter;
 
-    if (env->priv_ver < PRIV_VERSION_1_11_0) {
-        return RISCV_EXCP_ILLEGAL_INST;
-    }
-
     env->mcountinhibit = val;
 
     /* Check if any other counter is also monitoring cycles/instructions */
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v11 4/6] hw/riscv: virt: Add PMU DT node to the device tree
  2022-07-27  6:49 ` [PATCH v11 4/6] hw/riscv: virt: Add PMU DT node to the device tree Atish Patra
@ 2022-07-27  7:27   ` Sunil V L
  2022-07-27  7:50     ` Atish Kumar Patra
  0 siblings, 1 reply; 20+ messages in thread
From: Sunil V L @ 2022-07-27  7:27 UTC (permalink / raw)
  To: Atish Patra
  Cc: qemu-devel, Alistair Francis, Atish Patra, Bin Meng,
	Palmer Dabbelt, qemu-riscv

Hi Atish,

On Tue, Jul 26, 2022 at 11:49:11PM -0700, Atish Patra wrote:
> Qemu virt machine can support few cache events and cycle/instret counters.
> It also supports counter overflow for these events.
> 
> Add a DT node so that OpenSBI/Linux kernel is aware of the virt machine
> capabilities. There are some dummy nodes added for testing as well.
> 
> Acked-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
>  hw/riscv/virt.c    | 28 +++++++++++++++++++++++
>  target/riscv/cpu.c |  1 +
>  target/riscv/pmu.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++
>  target/riscv/pmu.h |  1 +
>  4 files changed, 87 insertions(+)
> 
> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> index bc424dd2f523..0f3fdb4908b8 100644
> --- a/hw/riscv/virt.c
> +++ b/hw/riscv/virt.c
> @@ -29,6 +29,7 @@
>  #include "hw/char/serial.h"
>  #include "target/riscv/cpu.h"
>  #include "hw/core/sysbus-fdt.h"
> +#include "target/riscv/pmu.h"
>  #include "hw/riscv/riscv_hart.h"
>  #include "hw/riscv/virt.h"
>  #include "hw/riscv/boot.h"
> @@ -714,6 +715,32 @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
>      aplic_phandles[socket] = aplic_s_phandle;
>  }
>  
> +static void create_fdt_socket_pmu(RISCVVirtState *s,
> +                                  int socket, uint32_t *phandle,
> +                                  uint32_t *intc_phandles)
> +{
> +    int cpu;
> +    char *pmu_name;
> +    uint32_t *pmu_cells;
> +    MachineState *mc = MACHINE(s);
> +    RISCVCPU hart = s->soc[socket].harts[0];
> +
> +    pmu_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
> +
> +    for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
> +        pmu_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
> +        pmu_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_PMU_OVF);
> +    }
> +
> +    pmu_name = g_strdup_printf("/soc/pmu");
> +    qemu_fdt_add_subnode(mc->fdt, pmu_name);

Does this work for you when there is more than one socket? Shouldn't
this be a unique name for each socket?

Thanks
Sunil

> +    qemu_fdt_setprop_string(mc->fdt, pmu_name, "compatible", "riscv,pmu");
> +    riscv_pmu_generate_fdt_node(mc->fdt, hart.cfg.pmu_num, pmu_name);
> +
> +    g_free(pmu_name);
> +    g_free(pmu_cells);
> +}
> +
>  static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
>                                 bool is_32_bit, uint32_t *phandle,
>                                 uint32_t *irq_mmio_phandle,
> @@ -759,6 +786,7 @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
>                      &intc_phandles[phandle_pos]);
>              }
>          }
> +        create_fdt_socket_pmu(s, socket, phandle, intc_phandles);
>      }
>  
>      if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index c1d62b81a725..5c8417a56e5b 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -1114,6 +1114,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
>          ISA_EDATA_ENTRY(zve64f, ext_zve64f),
>          ISA_EDATA_ENTRY(zhinx, ext_zhinx),
>          ISA_EDATA_ENTRY(zhinxmin, ext_zhinxmin),
> +        ISA_EDATA_ENTRY(sscofpmf, ext_sscofpmf),
>          ISA_EDATA_ENTRY(svinval, ext_svinval),
>          ISA_EDATA_ENTRY(svnapot, ext_svnapot),
>          ISA_EDATA_ENTRY(svpbmt, ext_svpbmt),
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 34096941c0ce..59feb3c243dd 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -20,11 +20,68 @@
>  #include "cpu.h"
>  #include "pmu.h"
>  #include "sysemu/cpu-timers.h"
> +#include "sysemu/device_tree.h"
>  
>  #define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
>  #define MAKE_32BIT_MASK(shift, length) \
>          (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
>  
> +/**
> + * To keep it simple, any event can be mapped to any programmable counters in
> + * QEMU. The generic cycle & instruction count events can also be monitored
> + * using programmable counters. In that case, mcycle & minstret must continue
> + * to provide the correct value as well. Heterogeneous PMU per hart is not
> + * supported yet. Thus, number of counters are same across all harts.
> + */
> +void riscv_pmu_generate_fdt_node(void *fdt, int num_ctrs, char *pmu_name)
> +{
> +    uint32_t fdt_event_ctr_map[20] = {};
> +    uint32_t cmask;
> +
> +    /* All the programmable counters can map to any event */
> +    cmask = MAKE_32BIT_MASK(3, num_ctrs);
> +
> +   /**
> +    * The event encoding is specified in the SBI specification
> +    * Event idx is a 20bits wide number encoded as follows:
> +    * event_idx[19:16] = type
> +    * event_idx[15:0] = code
> +    * The code field in cache events are encoded as follows:
> +    * event_idx.code[15:3] = cache_id
> +    * event_idx.code[2:1] = op_id
> +    * event_idx.code[0:0] = result_id
> +    */
> +
> +   /* SBI_PMU_HW_CPU_CYCLES: 0x01 : type(0x00) */
> +   fdt_event_ctr_map[0] = cpu_to_be32(0x00000001);
> +   fdt_event_ctr_map[1] = cpu_to_be32(0x00000001);
> +   fdt_event_ctr_map[2] = cpu_to_be32(cmask | 1 << 0);
> +
> +   /* SBI_PMU_HW_INSTRUCTIONS: 0x02 : type(0x00) */
> +   fdt_event_ctr_map[3] = cpu_to_be32(0x00000002);
> +   fdt_event_ctr_map[4] = cpu_to_be32(0x00000002);
> +   fdt_event_ctr_map[5] = cpu_to_be32(cmask | 1 << 2);
> +
> +   /* SBI_PMU_HW_CACHE_DTLB : 0x03 READ : 0x00 MISS : 0x00 type(0x01) */
> +   fdt_event_ctr_map[6] = cpu_to_be32(0x00010019);
> +   fdt_event_ctr_map[7] = cpu_to_be32(0x00010019);
> +   fdt_event_ctr_map[8] = cpu_to_be32(cmask);
> +
> +   /* SBI_PMU_HW_CACHE_DTLB : 0x03 WRITE : 0x01 MISS : 0x00 type(0x01) */
> +   fdt_event_ctr_map[9] = cpu_to_be32(0x0001001B);
> +   fdt_event_ctr_map[10] = cpu_to_be32(0x0001001B);
> +   fdt_event_ctr_map[11] = cpu_to_be32(cmask);
> +
> +   /* SBI_PMU_HW_CACHE_ITLB : 0x04 READ : 0x00 MISS : 0x00 type(0x01) */
> +   fdt_event_ctr_map[12] = cpu_to_be32(0x00010021);
> +   fdt_event_ctr_map[13] = cpu_to_be32(0x00010021);
> +   fdt_event_ctr_map[14] = cpu_to_be32(cmask);
> +
> +   /* This a OpenSBI specific DT property documented in OpenSBI docs */
> +   qemu_fdt_setprop(fdt, pmu_name, "riscv,event-to-mhpmcounters",
> +                    fdt_event_ctr_map, sizeof(fdt_event_ctr_map));
> +}
> +
>  static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
>  {
>      if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 036653627f78..3004ce37b636 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -31,5 +31,6 @@ int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
>  int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
>                                 uint32_t ctr_idx);
>  int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> +void riscv_pmu_generate_fdt_node(void *fdt, int num_counters, char *pmu_name);
>  int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
>                            uint32_t ctr_idx);
> -- 
> 2.25.1
> 
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v11 4/6] hw/riscv: virt: Add PMU DT node to the device tree
  2022-07-27  7:27   ` Sunil V L
@ 2022-07-27  7:50     ` Atish Kumar Patra
  0 siblings, 0 replies; 20+ messages in thread
From: Atish Kumar Patra @ 2022-07-27  7:50 UTC (permalink / raw)
  To: Sunil V L
  Cc: qemu-devel@nongnu.org Developers, Alistair Francis, Atish Patra,
	Bin Meng, Palmer Dabbelt, open list:RISC-V


On Wed, Jul 27, 2022 at 12:27 AM Sunil V L <sunilvl@ventanamicro.com> wrote:

> Hi Atish,
>
> On Tue, Jul 26, 2022 at 11:49:11PM -0700, Atish Patra wrote:
> > Qemu virt machine can support few cache events and cycle/instret
> counters.
> > It also supports counter overflow for these events.
> >
> > Add a DT node so that OpenSBI/Linux kernel is aware of the virt machine
> > capabilities. There are some dummy nodes added for testing as well.
> >
> > Acked-by: Alistair Francis <alistair.francis@wdc.com>
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> >  hw/riscv/virt.c    | 28 +++++++++++++++++++++++
> >  target/riscv/cpu.c |  1 +
> >  target/riscv/pmu.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++
> >  target/riscv/pmu.h |  1 +
> >  4 files changed, 87 insertions(+)
> >
> > diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> > index bc424dd2f523..0f3fdb4908b8 100644
> > --- a/hw/riscv/virt.c
> > +++ b/hw/riscv/virt.c
> > @@ -29,6 +29,7 @@
> >  #include "hw/char/serial.h"
> >  #include "target/riscv/cpu.h"
> >  #include "hw/core/sysbus-fdt.h"
> > +#include "target/riscv/pmu.h"
> >  #include "hw/riscv/riscv_hart.h"
> >  #include "hw/riscv/virt.h"
> >  #include "hw/riscv/boot.h"
> > @@ -714,6 +715,32 @@ static void create_fdt_socket_aplic(RISCVVirtState
> *s,
> >      aplic_phandles[socket] = aplic_s_phandle;
> >  }
> >
> > +static void create_fdt_socket_pmu(RISCVVirtState *s,
> > +                                  int socket, uint32_t *phandle,
> > +                                  uint32_t *intc_phandles)
> > +{
> > +    int cpu;
> > +    char *pmu_name;
> > +    uint32_t *pmu_cells;
> > +    MachineState *mc = MACHINE(s);
> > +    RISCVCPU hart = s->soc[socket].harts[0];
> > +
> > +    pmu_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
> > +
> > +    for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
> > +        pmu_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
> > +        pmu_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_PMU_OVF);
> > +    }
> > +
> > +    pmu_name = g_strdup_printf("/soc/pmu");
> > +    qemu_fdt_add_subnode(mc->fdt, pmu_name);
>
> Does this work for you when there are more than 1 sockets? Shouldn't
> this be unique name for each socket?
>
>
Ahh. Sorry. I had fixed that when you last commented. But forgot to include
that fix. Will revise it.
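
For reference, one possible shape of that fix (a sketch only, reusing the
variables of create_fdt_socket_pmu() above; a future revision may do it
differently) is to derive the node path from the socket index so each
socket gets its own node:

    /* Sketch only -- the actual revised patch may differ. */
    pmu_name = g_strdup_printf("/soc/pmu%d", socket);
    qemu_fdt_add_subnode(mc->fdt, pmu_name);
    qemu_fdt_setprop_string(mc->fdt, pmu_name, "compatible", "riscv,pmu");
    riscv_pmu_generate_fdt_node(mc->fdt, hart.cfg.pmu_num, pmu_name);
    g_free(pmu_name);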


> Thanks
> Sunil
>
> > +    qemu_fdt_setprop_string(mc->fdt, pmu_name, "compatible",
> "riscv,pmu");
> > +    riscv_pmu_generate_fdt_node(mc->fdt, hart.cfg.pmu_num, pmu_name);
> > +
> > +    g_free(pmu_name);
> > +    g_free(pmu_cells);
> > +}
> > +
> >  static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry
> *memmap,
> >                                 bool is_32_bit, uint32_t *phandle,
> >                                 uint32_t *irq_mmio_phandle,
> > @@ -759,6 +786,7 @@ static void create_fdt_sockets(RISCVVirtState *s,
> const MemMapEntry *memmap,
> >                      &intc_phandles[phandle_pos]);
> >              }
> >          }
> > +        create_fdt_socket_pmu(s, socket, phandle, intc_phandles);
> >      }
> >
> >      if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index c1d62b81a725..5c8417a56e5b 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -1114,6 +1114,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu,
> char **isa_str, int max_str_len)
> >          ISA_EDATA_ENTRY(zve64f, ext_zve64f),
> >          ISA_EDATA_ENTRY(zhinx, ext_zhinx),
> >          ISA_EDATA_ENTRY(zhinxmin, ext_zhinxmin),
> > +        ISA_EDATA_ENTRY(sscofpmf, ext_sscofpmf),
> >          ISA_EDATA_ENTRY(svinval, ext_svinval),
> >          ISA_EDATA_ENTRY(svnapot, ext_svnapot),
> >          ISA_EDATA_ENTRY(svpbmt, ext_svpbmt),
> > diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> > index 34096941c0ce..59feb3c243dd 100644
> > --- a/target/riscv/pmu.c
> > +++ b/target/riscv/pmu.c
> > @@ -20,11 +20,68 @@
> >  #include "cpu.h"
> >  #include "pmu.h"
> >  #include "sysemu/cpu-timers.h"
> > +#include "sysemu/device_tree.h"
> >
> >  #define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> >  #define MAKE_32BIT_MASK(shift, length) \
> >          (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> >
> > +/**
> > + * To keep it simple, any event can be mapped to any programmable
> counters in
> > + * QEMU. The generic cycle & instruction count events can also be
> monitored
> > + * using programmable counters. In that case, mcycle & minstret must
> continue
> > + * to provide the correct value as well. Heterogeneous PMU per hart is
> not
> > + * supported yet. Thus, number of counters are same across all harts.
> > + */
> > +void riscv_pmu_generate_fdt_node(void *fdt, int num_ctrs, char
> *pmu_name)
> > +{
> > +    uint32_t fdt_event_ctr_map[20] = {};
> > +    uint32_t cmask;
> > +
> > +    /* All the programmable counters can map to any event */
> > +    cmask = MAKE_32BIT_MASK(3, num_ctrs);
> > +
> > +   /**
> > +    * The event encoding is specified in the SBI specification
> > +    * Event idx is a 20bits wide number encoded as follows:
> > +    * event_idx[19:16] = type
> > +    * event_idx[15:0] = code
> > +    * The code field in cache events are encoded as follows:
> > +    * event_idx.code[15:3] = cache_id
> > +    * event_idx.code[2:1] = op_id
> > +    * event_idx.code[0:0] = result_id
> > +    */
> > +
> > +   /* SBI_PMU_HW_CPU_CYCLES: 0x01 : type(0x00) */
> > +   fdt_event_ctr_map[0] = cpu_to_be32(0x00000001);
> > +   fdt_event_ctr_map[1] = cpu_to_be32(0x00000001);
> > +   fdt_event_ctr_map[2] = cpu_to_be32(cmask | 1 << 0);
> > +
> > +   /* SBI_PMU_HW_INSTRUCTIONS: 0x02 : type(0x00) */
> > +   fdt_event_ctr_map[3] = cpu_to_be32(0x00000002);
> > +   fdt_event_ctr_map[4] = cpu_to_be32(0x00000002);
> > +   fdt_event_ctr_map[5] = cpu_to_be32(cmask | 1 << 2);
> > +
> > +   /* SBI_PMU_HW_CACHE_DTLB : 0x03 READ : 0x00 MISS : 0x00 type(0x01) */
> > +   fdt_event_ctr_map[6] = cpu_to_be32(0x00010019);
> > +   fdt_event_ctr_map[7] = cpu_to_be32(0x00010019);
> > +   fdt_event_ctr_map[8] = cpu_to_be32(cmask);
> > +
> > +   /* SBI_PMU_HW_CACHE_DTLB : 0x03 WRITE : 0x01 MISS : 0x00 type(0x01)
> */
> > +   fdt_event_ctr_map[9] = cpu_to_be32(0x0001001B);
> > +   fdt_event_ctr_map[10] = cpu_to_be32(0x0001001B);
> > +   fdt_event_ctr_map[11] = cpu_to_be32(cmask);
> > +
> > +   /* SBI_PMU_HW_CACHE_ITLB : 0x04 READ : 0x00 MISS : 0x00 type(0x01) */
> > +   fdt_event_ctr_map[12] = cpu_to_be32(0x00010021);
> > +   fdt_event_ctr_map[13] = cpu_to_be32(0x00010021);
> > +   fdt_event_ctr_map[14] = cpu_to_be32(cmask);
> > +
> > +   /* This a OpenSBI specific DT property documented in OpenSBI docs */
> > +   qemu_fdt_setprop(fdt, pmu_name, "riscv,event-to-mhpmcounters",
> > +                    fdt_event_ctr_map, sizeof(fdt_event_ctr_map));
> > +}
> > +
> >  static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> >  {
> >      if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> > diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> > index 036653627f78..3004ce37b636 100644
> > --- a/target/riscv/pmu.h
> > +++ b/target/riscv/pmu.h
> > @@ -31,5 +31,6 @@ int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> >  int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> >                                 uint32_t ctr_idx);
> >  int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx
> event_idx);
> > +void riscv_pmu_generate_fdt_node(void *fdt, int num_counters, char
> *pmu_name);
> >  int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> >                            uint32_t ctr_idx);
> > --
> > 2.25.1
> >
> >
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v11 1/6] target/riscv: Add sscofpmf extension support
  2022-07-27  6:49 ` [PATCH v11 1/6] target/riscv: Add sscofpmf extension support Atish Patra
@ 2022-07-27  8:11   ` Weiwei Li
  2022-07-27 21:32     ` Atish Kumar Patra
  2022-07-31  4:10   ` Rahul Pathak
  1 sibling, 1 reply; 20+ messages in thread
From: Weiwei Li @ 2022-07-27  8:11 UTC (permalink / raw)
  To: Atish Patra, qemu-devel
  Cc: Heiko Stuebner, Atish Patra, Alistair Francis, Bin Meng,
	Palmer Dabbelt, qemu-riscv


On 2022/7/27 2:49 PM, Atish Patra wrote:
> The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> extension allows the perf to handle overflow interrupts and filtering
> support. This patch provides a framework for programmable
> counters to leverage the extension. As the extension doesn't have any
> provision for the overflow bit for fixed counters, the fixed events
> can also be monitored using programmable counters. The underlying
> counters for the cycle and instruction events are always running. Thus,
> a separate timer device is programmed to handle the overflow.
>
> Tested-by: Heiko Stuebner <heiko@sntech.de>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
>   target/riscv/cpu.c      |  11 ++
>   target/riscv/cpu.h      |  25 +++
>   target/riscv/cpu_bits.h |  55 +++++++
>   target/riscv/csr.c      | 166 ++++++++++++++++++-
>   target/riscv/machine.c  |   1 +
>   target/riscv/pmu.c      | 357 +++++++++++++++++++++++++++++++++++++++-
>   target/riscv/pmu.h      |   7 +
>   7 files changed, 611 insertions(+), 11 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 1bb3973806d2..c1d62b81a725 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -22,6 +22,7 @@
>   #include "qemu/ctype.h"
>   #include "qemu/log.h"
>   #include "cpu.h"
> +#include "pmu.h"
>   #include "internals.h"
>   #include "exec/exec-all.h"
>   #include "qapi/error.h"
> @@ -779,6 +780,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
>           set_misa(env, env->misa_mxl, ext);
>       }
>   
> +#ifndef CONFIG_USER_ONLY
> +    if (cpu->cfg.pmu_num) {
> +        if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
> +            cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> +                                          riscv_pmu_timer_cb, cpu);
> +        }
> +     }
> +#endif
> +
>       riscv_cpu_register_gdb_regs_for_features(cs);
>   
>       qemu_init_vcpu(cs);
> @@ -883,6 +893,7 @@ static Property riscv_cpu_extensions[] = {
>       DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
>       DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
>       DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> +    DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
>       DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
>       DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
>       DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 5c7acc055ac9..2222db193c3d 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
>       /* Snapshort value of a counter in RV32 */
>       target_ulong mhpmcounterh_prev;
>       bool started;
> +    /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
> +    target_ulong irq_overflow_left;
>   } PMUCTRState;
>   
>   struct CPUArchState {
> @@ -297,6 +299,9 @@ struct CPUArchState {
>       /* PMU event selector configured values. First three are unused*/
>       target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
>   
> +    /* PMU event selector configured values for RV32*/
> +    target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> +
>       target_ulong sscratch;
>       target_ulong mscratch;
>   
> @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
>       bool ext_zve32f;
>       bool ext_zve64f;
>       bool ext_zmmul;
> +    bool ext_sscofpmf;
>       bool rvv_ta_all_1s;
>   
>       uint32_t mvendorid;
> @@ -479,6 +485,12 @@ struct ArchCPU {
>   
>       /* Configuration Settings */
>       RISCVCPUConfig cfg;
> +
> +    QEMUTimer *pmu_timer;
> +    /* A bitmask of Available programmable counters */
> +    uint32_t pmu_avail_ctrs;
> +    /* Mapping of events to counters */
> +    GHashTable *pmu_event_ctr_map;
>   };
>   
>   static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> @@ -738,6 +750,19 @@ enum {
>       CSR_TABLE_SIZE = 0x1000
>   };
>   
> +/**
> + * The event id are encoded based on the encoding specified in the
> + * SBI specification v0.3
> + */
> +
> +enum riscv_pmu_event_idx {
> +    RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> +    RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> +    RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> +    RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> +    RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> +};
> +
>   /* CSR function table */
>   extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
>   
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index 6be5a9e9f046..b63c586be563 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -382,6 +382,37 @@
>   #define CSR_MHPMEVENT29     0x33d
>   #define CSR_MHPMEVENT30     0x33e
>   #define CSR_MHPMEVENT31     0x33f
> +
> +#define CSR_MHPMEVENT3H     0x723
> +#define CSR_MHPMEVENT4H     0x724
> +#define CSR_MHPMEVENT5H     0x725
> +#define CSR_MHPMEVENT6H     0x726
> +#define CSR_MHPMEVENT7H     0x727
> +#define CSR_MHPMEVENT8H     0x728
> +#define CSR_MHPMEVENT9H     0x729
> +#define CSR_MHPMEVENT10H    0x72a
> +#define CSR_MHPMEVENT11H    0x72b
> +#define CSR_MHPMEVENT12H    0x72c
> +#define CSR_MHPMEVENT13H    0x72d
> +#define CSR_MHPMEVENT14H    0x72e
> +#define CSR_MHPMEVENT15H    0x72f
> +#define CSR_MHPMEVENT16H    0x730
> +#define CSR_MHPMEVENT17H    0x731
> +#define CSR_MHPMEVENT18H    0x732
> +#define CSR_MHPMEVENT19H    0x733
> +#define CSR_MHPMEVENT20H    0x734
> +#define CSR_MHPMEVENT21H    0x735
> +#define CSR_MHPMEVENT22H    0x736
> +#define CSR_MHPMEVENT23H    0x737
> +#define CSR_MHPMEVENT24H    0x738
> +#define CSR_MHPMEVENT25H    0x739
> +#define CSR_MHPMEVENT26H    0x73a
> +#define CSR_MHPMEVENT27H    0x73b
> +#define CSR_MHPMEVENT28H    0x73c
> +#define CSR_MHPMEVENT29H    0x73d
> +#define CSR_MHPMEVENT30H    0x73e
> +#define CSR_MHPMEVENT31H    0x73f
> +
>   #define CSR_MHPMCOUNTER3H   0xb83
>   #define CSR_MHPMCOUNTER4H   0xb84
>   #define CSR_MHPMCOUNTER5H   0xb85
> @@ -443,6 +474,7 @@
>   #define CSR_VSMTE           0x2c0
>   #define CSR_VSPMMASK        0x2c1
>   #define CSR_VSPMBASE        0x2c2
> +#define CSR_SCOUNTOVF       0xda0
>   
>   /* Crypto Extension */
>   #define CSR_SEED            0x015
> @@ -620,6 +652,7 @@ typedef enum RISCVException {
>   #define IRQ_VS_EXT                         10
>   #define IRQ_M_EXT                          11
>   #define IRQ_S_GEXT                         12
> +#define IRQ_PMU_OVF                        13
>   #define IRQ_LOCAL_MAX                      16
>   #define IRQ_LOCAL_GUEST_MAX                (TARGET_LONG_BITS - 1)
>   
> @@ -637,11 +670,13 @@ typedef enum RISCVException {
>   #define MIP_VSEIP                          (1 << IRQ_VS_EXT)
>   #define MIP_MEIP                           (1 << IRQ_M_EXT)
>   #define MIP_SGEIP                          (1 << IRQ_S_GEXT)
> +#define MIP_LCOFIP                         (1 << IRQ_PMU_OVF)
>   
>   /* sip masks */
>   #define SIP_SSIP                           MIP_SSIP
>   #define SIP_STIP                           MIP_STIP
>   #define SIP_SEIP                           MIP_SEIP
> +#define SIP_LCOFIP                         MIP_LCOFIP
>   
>   /* MIE masks */
>   #define MIE_SEIE                           (1 << IRQ_S_EXT)
> @@ -795,4 +830,24 @@ typedef enum RISCVException {
>   #define SEED_OPST_WAIT                   (0b01 << 30)
>   #define SEED_OPST_ES16                   (0b10 << 30)
>   #define SEED_OPST_DEAD                   (0b11 << 30)
> +/* PMU related bits */
> +#define MIE_LCOFIE                         (1 << IRQ_PMU_OVF)
> +
> +#define MHPMEVENT_BIT_OF                   BIT_ULL(63)
> +#define MHPMEVENTH_BIT_OF                  BIT(31)
> +#define MHPMEVENT_BIT_MINH                 BIT_ULL(62)
> +#define MHPMEVENTH_BIT_MINH                BIT(30)
> +#define MHPMEVENT_BIT_SINH                 BIT_ULL(61)
> +#define MHPMEVENTH_BIT_SINH                BIT(29)
> +#define MHPMEVENT_BIT_UINH                 BIT_ULL(60)
> +#define MHPMEVENTH_BIT_UINH                BIT(28)
> +#define MHPMEVENT_BIT_VSINH                BIT_ULL(59)
> +#define MHPMEVENTH_BIT_VSINH               BIT(27)
> +#define MHPMEVENT_BIT_VUINH                BIT_ULL(58)
> +#define MHPMEVENTH_BIT_VUINH               BIT(26)
> +
> +#define MHPMEVENT_SSCOF_MASK               _ULL(0xFFFF000000000000)
> +#define MHPMEVENT_IDX_MASK                 0xFFFFF
> +#define MHPMEVENT_SSCOF_RESVD              16
> +
>   #endif
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 235f2a011e70..1233bfa0a726 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>       CPUState *cs = env_cpu(env);
>       RISCVCPU *cpu = RISCV_CPU(cs);
>       int ctr_index;
> -    int base_csrno = CSR_HPMCOUNTER3;
> +    int base_csrno = CSR_CYCLE;
>       bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>   
>       if (rv32 && csrno >= CSR_CYCLEH) {
> @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>       }
>       ctr_index = csrno - base_csrno;
>   
> -    if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> +    if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> +        (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> +        goto skip_ext_pmu_check;
> +    }
> +
> +    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
>           /* No counter is enabled in PMU or the counter is out of range */
>           return RISCV_EXCP_ILLEGAL_INST;
>       }
>   
Maybe it's better to remove !cpu->cfg.pmu_num here rather than in a later commit.
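i.e. something like the following (a sketch of the suggested simplification,
assuming pmu_avail_ctrs is zero whenever pmu_num is zero):

    if (!(cpu->pmu_avail_ctrs & BIT(ctr_index))) {
        /* No counter is enabled in PMU or the counter is out of range */
        return RISCV_EXCP_ILLEGAL_INST;
    }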
> +skip_ext_pmu_check:
> +
>       if (env->priv == PRV_S) {
>           switch (csrno) {
>           case CSR_CYCLE:
> @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>               }
>               break;
>           case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> -            ctr_index = csrno - CSR_CYCLE;
>               if (!get_field(env->mcounteren, 1 << ctr_index)) {
>                   return RISCV_EXCP_ILLEGAL_INST;
>               }
> @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>                   }
>                   break;
>               case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> -                ctr_index = csrno - CSR_CYCLEH;
>                   if (!get_field(env->mcounteren, 1 << ctr_index)) {
>                       return RISCV_EXCP_ILLEGAL_INST;
>                   }
> @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>               }
>               break;
>           case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> -            ctr_index = csrno - CSR_CYCLE;
>               if (!get_field(env->hcounteren, 1 << ctr_index) &&
>                    get_field(env->mcounteren, 1 << ctr_index)) {
>                   return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>                   }
>                   break;
>               case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> -                ctr_index = csrno - CSR_CYCLEH;
>                   if (!get_field(env->hcounteren, 1 << ctr_index) &&
>                        get_field(env->mcounteren, 1 << ctr_index)) {
>                       return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
>       return mctr(env, csrno);
>   }
>   
> +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> +{
> +    CPUState *cs = env_cpu(env);
> +    RISCVCPU *cpu = RISCV_CPU(cs);
> +
> +    if (!cpu->cfg.ext_sscofpmf) {
> +        return RISCV_EXCP_ILLEGAL_INST;
> +    }
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
>   static RISCVException any(CPURISCVState *env, int csrno)
>   {
>       return RISCV_EXCP_NONE;
> @@ -663,9 +678,39 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
>   static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
>   {
>       int evt_index = csrno - CSR_MCOUNTINHIBIT;
> +    uint64_t mhpmevt_val = val;
>   
>       env->mhpmevent_val[evt_index] = val;
>   
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpmevt_val = mhpmevt_val |
> +                      ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> +    }
> +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
> +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +
> +    *val = env->mhpmeventh_val[evt_index];
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
> +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
> +{
> +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +    uint64_t mhpmevth_val = val;
> +    uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> +
> +    mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> +    env->mhpmeventh_val[evt_index] = val;
> +
> +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
>       return RISCV_EXCP_NONE;
>   }
>   
> @@ -673,12 +718,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
>   {
>       int ctr_idx = csrno - CSR_MCYCLE;
>       PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t mhpmctr_val = val;
>   
>       counter->mhpmcounter_val = val;
>       if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>           riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>           counter->mhpmcounter_prev = get_ticks(false);
> -    } else {
> +        if (ctr_idx > 2) {
> +            if (riscv_cpu_mxl(env) == MXL_RV32) {
> +                mhpmctr_val = mhpmctr_val |
> +                              ((uint64_t)counter->mhpmcounterh_val << 32);
> +            }
> +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> +        }
> +     } else {
>           /* Other counters can keep incrementing from the given value */
>           counter->mhpmcounter_prev = val;
>       }
> @@ -690,11 +743,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
>   {
>       int ctr_idx = csrno - CSR_MCYCLEH;
>       PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t mhpmctr_val = counter->mhpmcounter_val;
> +    uint64_t mhpmctrh_val = val;
>   
>       counter->mhpmcounterh_val = val;
> +    mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
>       if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>           riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>           counter->mhpmcounterh_prev = get_ticks(true);
> +        if (ctr_idx > 2) {
> +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> +        }
>       } else {
>           counter->mhpmcounterh_prev = val;
>       }
> @@ -770,6 +829,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
>       return riscv_pmu_read_ctr(env, val, true, ctr_index);
>   }
>   
> +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> +    int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
> +    int i;
> +    *val = 0;
> +    target_ulong *mhpm_evt_val;
> +    uint64_t of_bit_mask;
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpm_evt_val = env->mhpmeventh_val;
> +        of_bit_mask = MHPMEVENTH_BIT_OF;
> +    } else {
> +        mhpm_evt_val = env->mhpmevent_val;
> +        of_bit_mask = MHPMEVENT_BIT_OF;
> +    }
> +
> +    for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
> +        if ((get_field(env->mcounteren, BIT(i))) &&
> +            (mhpm_evt_val[i] & of_bit_mask)) {
> +                    *val |= BIT(i);
> +            }
> +    }
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
>   static RISCVException read_time(CPURISCVState *env, int csrno,
>                                   target_ulong *val)
>   {
> @@ -799,7 +884,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
>   /* Machine constants */
>   
>   #define M_MODE_INTERRUPTS  ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> -#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> +#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
> +                                      MIP_LCOFIP))

It's better to align with MIP_SSIP here.

>   #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
>   #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
>   
> @@ -840,7 +926,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
>   static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
>       SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
>       SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
> +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
> +                                              SIP_LCOFIP;
>   static const target_ulong hip_writable_mask = MIP_VSSIP;
>   static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
>   static const target_ulong vsip_writable_mask = MIP_VSSIP;
> @@ -3861,6 +3948,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>       [CSR_MHPMEVENT31]    = { "mhpmevent31",    any,    read_mhpmevent,
>                                                          write_mhpmevent },
>   
> +    [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},

The new lines have been updated to align with the last line in my
previous patchset (already accepted).

You can see it if you rebase onto riscv-to-apply.next. So it's better to
align write_mhpmeventh with the ' " ' (the opening quote of the CSR name).
The same applies to the following new lines.

Regards,

Weiwei Li

> +    [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT10H]   = { "mhpmevent10h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT11H]   = { "mhpmevent11h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT12H]   = { "mhpmevent12h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT13H]   = { "mhpmevent13h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT14H]   = { "mhpmevent14h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT15H]   = { "mhpmevent15h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT16H]   = { "mhpmevent16h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT17H]   = { "mhpmevent17h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT18H]   = { "mhpmevent18h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT19H]   = { "mhpmevent19h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT20H]   = { "mhpmevent20h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT21H]   = { "mhpmevent21h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT22H]   = { "mhpmevent22h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT23H]   = { "mhpmevent23h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT24H]   = { "mhpmevent24h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT25H]   = { "mhpmevent25h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT26H]   = { "mhpmevent26h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT27H]   = { "mhpmevent27h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT28H]   = { "mhpmevent28h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT29H]   = { "mhpmevent29h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT30H]   = { "mhpmevent30h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT31H]   = { "mhpmevent31h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +
>       [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,  read_hpmcounterh },
>       [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,  read_hpmcounterh },
>       [CSR_HPMCOUNTER5H]   = { "hpmcounter5h",   ctr32,  read_hpmcounterh },
> @@ -3949,5 +4095,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>                                                          write_mhpmcounterh },
>       [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32,  read_hpmcounterh,
>                                                          write_mhpmcounterh },
> +    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
> +
>   #endif /* !CONFIG_USER_ONLY */
>   };
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index dc182ca81119..33ef9b8e9908 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
>           VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
>                                vmstate_pmu_ctr_state, PMUCTRState),
>           VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> +        VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
>           VMSTATE_UINTTL(env.sscratch, RISCVCPU),
>           VMSTATE_UINTTL(env.mscratch, RISCVCPU),
>           VMSTATE_UINT64(env.mfromhost, RISCVCPU),
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 000fe8da45ef..34096941c0ce 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -19,14 +19,367 @@
>   #include "qemu/osdep.h"
>   #include "cpu.h"
>   #include "pmu.h"
> +#include "sysemu/cpu-timers.h"
> +
> +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> +#define MAKE_32BIT_MASK(shift, length) \
> +        (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> +
> +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> +        !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
> +        return false;
> +    } else {
> +        return true;
> +    }
> +}
> +
> +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +
> +    if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
> +        !get_field(env->mcountinhibit, BIT(ctr_idx))) {
> +        return true;
> +    } else {
> +        return false;
> +    }
> +}
> +
> +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +    target_ulong max_val = UINT32_MAX;
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    bool virt_on = riscv_cpu_virt_enabled(env);
> +
> +    /* Privilege mode filtering */
> +    if ((env->priv == PRV_M &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
> +        (env->priv == PRV_S && virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
> +        (env->priv == PRV_U && virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
> +        (env->priv == PRV_S && !virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
> +        (env->priv == PRV_U && !virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
> +        return 0;
> +    }
> +
> +    /* Handle the overflow scenario */
> +    if (counter->mhpmcounter_val == max_val) {
> +        if (counter->mhpmcounterh_val == max_val) {
> +            counter->mhpmcounter_val = 0;
> +            counter->mhpmcounterh_val = 0;
> +            /* Generate interrupt only if OF bit is clear */
> +            if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
> +                env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
> +                riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +            }
> +        } else {
> +            counter->mhpmcounterh_val++;
> +        }
> +    } else {
> +        counter->mhpmcounter_val++;
> +    }
> +
> +    return 0;
> +}
> +
> +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t max_val = UINT64_MAX;
> +    bool virt_on = riscv_cpu_virt_enabled(env);
> +
> +    /* Privilege mode filtering */
> +    if ((env->priv == PRV_M &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
> +        (env->priv == PRV_S && virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
> +        (env->priv == PRV_U && virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
> +        (env->priv == PRV_S && !virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
> +        (env->priv == PRV_U && !virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
> +        return 0;
> +    }
> +
> +    /* Handle the overflow scenario */
> +    if (counter->mhpmcounter_val == max_val) {
> +        counter->mhpmcounter_val = 0;
> +        /* Generate interrupt only if OF bit is clear */
> +        if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
> +            env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
> +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +        }
> +    } else {
> +        counter->mhpmcounter_val++;
> +    }
> +    return 0;
> +}
> +
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
> +{
> +    uint32_t ctr_idx;
> +    int ret;
> +    CPURISCVState *env = &cpu->env;
> +    gpointer value;
> +
> +    value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                                GUINT_TO_POINTER(event_idx));
> +    if (!value) {
> +        return -1;
> +    }
> +
> +    ctr_idx = GPOINTER_TO_UINT(value);
> +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
> +        get_field(env->mcountinhibit, BIT(ctr_idx))) {
> +        return -1;
> +    }
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
> +    } else {
> +        ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
> +    }
> +
> +    return ret;
> +}
>   
>   bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
>                                           uint32_t target_ctr)
>   {
> -    return (target_ctr == 0) ? true : false;
> +    RISCVCPU *cpu;
> +    uint32_t event_idx;
> +    uint32_t ctr_idx;
> +
> +    /* Fixed instret counter */
> +    if (target_ctr == 2) {
> +        return true;
> +    }
> +
> +    cpu = RISCV_CPU(env_cpu(env));
> +    event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                               GUINT_TO_POINTER(event_idx)));
> +    if (!ctr_idx) {
> +        return false;
> +    }
> +
> +    return target_ctr == ctr_idx ? true : false;
>   }
>   
>   bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
>   {
> -    return (target_ctr == 2) ? true : false;
> +    RISCVCPU *cpu;
> +    uint32_t event_idx;
> +    uint32_t ctr_idx;
> +
> +    /* Fixed mcycle counter */
> +    if (target_ctr == 0) {
> +        return true;
> +    }
> +
> +    cpu = RISCV_CPU(env_cpu(env));
> +    event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                               GUINT_TO_POINTER(event_idx)));
> +
> +    /* Counter zero is not used for event_ctr_map */
> +    if (!ctr_idx) {
> +        return false;
> +    }
> +
> +    return (target_ctr == ctr_idx) ? true : false;
> +}
> +
> +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> +                                     gpointer udata)
> +{
> +    return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
> +}
> +
> +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> +{
> +    int64_t ret = 0;
> +
> +    if (icount_enabled()) {
> +        ret = icount_to_ns(value);
> +    } else {
> +        ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> +    }
> +
> +    return ret;
> +}
> +
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> +                               uint32_t ctr_idx)
> +{
> +    uint32_t event_idx;
> +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +
> +    if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> +        return -1;
> +    }
> +
> +    /**
> +     * Expected mhpmevent value is zero for reset case. Remove the current
> +     * mapping.
> +     */
> +    if (!value) {
> +        g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> +                                    pmu_remove_event_map,
> +                                    GUINT_TO_POINTER(ctr_idx));
> +        return 0;
> +    }
> +
> +    event_idx = value & MHPMEVENT_IDX_MASK;
> +    if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                            GUINT_TO_POINTER(event_idx))) {
> +        return 0;
> +    }
> +
> +    switch (event_idx) {
> +    case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> +    case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> +    case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> +    case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> +    case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> +        break;
> +    default:
> +        /* We don't support any raw events right now */
> +        return -1;
> +    }
> +    g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
> +                        GUINT_TO_POINTER(ctr_idx));
> +
> +    return 0;
> +}
> +
> +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> +                                  enum riscv_pmu_event_idx evt_idx)
> +{
> +    uint32_t ctr_idx;
> +    CPURISCVState *env = &cpu->env;
> +    PMUCTRState *counter;
> +    target_ulong *mhpmevent_val;
> +    uint64_t of_bit_mask;
> +    int64_t irq_trigger_at;
> +
> +    if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> +        evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> +        return;
> +    }
> +
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                               GUINT_TO_POINTER(evt_idx)));
> +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> +        return;
> +    }
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> +        of_bit_mask = MHPMEVENTH_BIT_OF;
> +    } else {
> +        mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> +        of_bit_mask = MHPMEVENT_BIT_OF;
> +    }
> +
> +    counter = &env->pmu_ctrs[ctr_idx];
> +    if (counter->irq_overflow_left > 0) {
> +        irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> +                        counter->irq_overflow_left;
> +        timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> +        counter->irq_overflow_left = 0;
> +        return;
> +    }
> +
> +    if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> +        /* Generate interrupt only if OF bit is clear */
> +        if (!(*mhpmevent_val & of_bit_mask)) {
> +            *mhpmevent_val |= of_bit_mask;
> +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +        }
> +    }
> +}
> +
> +/* Timer callback for instret and cycle counter overflow */
> +void riscv_pmu_timer_cb(void *priv)
> +{
> +    RISCVCPU *cpu = priv;
> +
> +    /* Timer event was triggered only for these events */
> +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> +}
> +
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
> +{
> +    uint64_t overflow_delta, overflow_at;
> +    int64_t overflow_ns, overflow_left = 0;
> +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +
> +    if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
> +        return -1;
> +    }
> +
> +    if (value) {
> +        overflow_delta = UINT64_MAX - value + 1;
> +    } else {
> +        overflow_delta = UINT64_MAX;
> +    }
> +
> +    /**
> +     * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
> +     * Compute the leftover and save it so that it can be reprogrammed again
> +     * when timer expires.
> +     */
> +    if (overflow_delta > INT64_MAX) {
> +        overflow_left = overflow_delta - INT64_MAX;
> +    }
> +
> +    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> +        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> +        overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> +        overflow_left = pmu_icount_ticks_to_ns(overflow_left);
> +    } else {
> +        return -1;
> +    }
> +    overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
> +
> +    if (overflow_at > INT64_MAX) {
> +        overflow_left += overflow_at - INT64_MAX;
> +        counter->irq_overflow_left = overflow_left;
> +        overflow_at = INT64_MAX;
> +    }
> +    timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> +
> +    return 0;
> +}
> +
> +
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> +{
> +    if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> +        return -1;
> +    }
> +
> +    cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
> +    if (!cpu->pmu_event_ctr_map) {
> +        /* PMU support can not be enabled */
> +        qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> +        cpu->cfg.pmu_num = 0;
> +        return -1;
> +    }
> +
> +    /* Create a bitmask of available programmable counters */
> +    cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> +
> +    return 0;
>   }
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 58a5bc3a4089..036653627f78 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
>                                           uint32_t target_ctr);
>   bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
>                                     uint32_t target_ctr);
> +void riscv_pmu_timer_cb(void *priv);
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> +                               uint32_t ctr_idx);
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> +                          uint32_t ctr_idx);



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v11 6/6] target/riscv: Remove additional priv version check for mcountinhibit
  2022-07-27  6:49 ` [PATCH v11 6/6] target/riscv: Remove additional priv version check for mcountinhibit Atish Patra
@ 2022-07-27  8:22   ` Weiwei Li
  0 siblings, 0 replies; 20+ messages in thread
From: Weiwei Li @ 2022-07-27  8:22 UTC (permalink / raw)
  To: Atish Patra, qemu-devel
  Cc: Heiko Stuebner, Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv


On 2022/7/27 at 2:49 PM, Atish Patra wrote:
> With .min_priv_ver, the additional privilege version check is unnecessary
> for the mcountinhibit read/write functions.
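The pattern being relied on here is a per-entry minimum privilege-spec version
in the csr_ops table, checked once in the common CSR dispatch path instead of
inside each read/write helper. A minimal sketch of that idea (hypothetical,
stripped-down types; not the actual QEMU definitions):

#include <stdbool.h>
#include <stdio.h>

enum { PRIV_VERSION_1_10_0 = 0, PRIV_VERSION_1_11_0, PRIV_VERSION_1_12_0 };

typedef struct {
    const char *name;
    int min_priv_ver;          /* lowest privilege-spec version this CSR needs */
} csr_op_t;

static const csr_op_t csr_table[] = {
    { "mcountinhibit", PRIV_VERSION_1_11_0 },
    { "scountovf",     PRIV_VERSION_1_12_0 },
};

/* One common gate instead of a version check inside every read/write helper. */
static bool csr_allowed(const csr_op_t *op, int cpu_priv_ver)
{
    return cpu_priv_ver >= op->min_priv_ver;
}

int main(void)
{
    int cpu_priv_ver = PRIV_VERSION_1_11_0;   /* hypothetical CPU configuration */

    for (unsigned i = 0; i < sizeof(csr_table) / sizeof(csr_table[0]); i++) {
        printf("%-14s -> %s\n", csr_table[i].name,
               csr_allowed(&csr_table[i], cpu_priv_ver) ? "ok"
                                                        : "illegal instruction");
    }
    return 0;
}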
>
> Reviewed-by: Heiko Stuebner <heiko@sntech.de>
> Tested-by: Heiko Stuebner <heiko@sntech.de>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
>   target/riscv/csr.c | 8 --------
>   1 file changed, 8 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index ec6d7f022ad5..eac003d6b950 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -1484,10 +1484,6 @@ static RISCVException write_mtvec(CPURISCVState *env, int csrno,
>   static RISCVException read_mcountinhibit(CPURISCVState *env, int csrno,
>                                            target_ulong *val)
>   {
> -    if (env->priv_ver < PRIV_VERSION_1_11_0) {
> -        return RISCV_EXCP_ILLEGAL_INST;
> -    }
> -
>       *val = env->mcountinhibit;
>       return RISCV_EXCP_NONE;
>   }
> @@ -1498,10 +1494,6 @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
>       int cidx;
>       PMUCTRState *counter;
>   
> -    if (env->priv_ver < PRIV_VERSION_1_11_0) {
> -        return RISCV_EXCP_ILLEGAL_INST;
> -    }
> -
>       env->mcountinhibit = val;
>   
>       /* Check if any other counter is also monitoring cycles/instructions */

Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>

Regards,
Weiwei Li



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v11 5/6] target/riscv: Update the privilege field for sscofpmf CSRs
  2022-07-27  6:49 ` [PATCH v11 5/6] target/riscv: Update the privilege field for sscofpmf CSRs Atish Patra
@ 2022-07-27  8:26   ` Weiwei Li
  2022-07-27 22:50     ` Atish Kumar Patra
  0 siblings, 1 reply; 20+ messages in thread
From: Weiwei Li @ 2022-07-27  8:26 UTC (permalink / raw)
  To: Atish Patra, qemu-devel
  Cc: Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv


On 2022/7/27 at 2:49 PM, Atish Patra wrote:
> The sscofpmf extension was ratified as a part of priv spec v1.12.
> Mark the csr_ops accordingly.
>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
>   target/riscv/csr.c | 90 ++++++++++++++++++++++++++++++----------------
>   1 file changed, 60 insertions(+), 30 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 57dbbf9b09a0..ec6d7f022ad5 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -3859,63 +3859,92 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>                                                          write_mhpmevent },
>   
>       [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                          write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },

Similar to the first commit, it's better to align these with the first
element "mhpmevent3h". Otherwise,

Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>

Regards,

Weiwei Li

>       [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                          write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                          write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                          write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                          write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                          write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                          write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT10H]   = { "mhpmevent10h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT11H]   = { "mhpmevent11h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT12H]   = { "mhpmevent12h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT13H]   = { "mhpmevent13h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT14H]   = { "mhpmevent14h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT15H]   = { "mhpmevent15h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT16H]   = { "mhpmevent16h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT17H]   = { "mhpmevent17h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT18H]   = { "mhpmevent18h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT19H]   = { "mhpmevent19h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT20H]   = { "mhpmevent20h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT21H]   = { "mhpmevent21h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT22H]   = { "mhpmevent22h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT23H]   = { "mhpmevent23h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT24H]   = { "mhpmevent24h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT25H]   = { "mhpmevent25h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT26H]   = { "mhpmevent26h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT27H]   = { "mhpmevent27h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT28H]   = { "mhpmevent28h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT29H]   = { "mhpmevent29h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT30H]   = { "mhpmevent30h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>       [CSR_MHPMEVENT31H]   = { "mhpmevent31h",    sscofpmf,  read_mhpmeventh,
> -                                                       write_mhpmeventh},
> +                                                           write_mhpmeventh,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>   
>       [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,  read_hpmcounterh },
>       [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,  read_hpmcounterh },
> @@ -4005,7 +4034,8 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>                                                          write_mhpmcounterh },
>       [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32,  read_hpmcounterh,
>                                                          write_mhpmcounterh },
> -    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
> +    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf,
> +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>   
>   #endif /* !CONFIG_USER_ONLY */
>   };



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v11 2/6] target/riscv: Simplify counter predicate function
  2022-07-27  6:49 ` [PATCH v11 2/6] target/riscv: Simplify counter predicate function Atish Patra
@ 2022-07-27  8:34   ` Weiwei Li
  2022-07-27 21:40     ` Atish Kumar Patra
  0 siblings, 1 reply; 20+ messages in thread
From: Weiwei Li @ 2022-07-27  8:34 UTC (permalink / raw)
  To: Atish Patra, qemu-devel
  Cc: Bin Meng, Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv


On 2022/7/27 at 2:49 PM, Atish Patra wrote:
> All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
> as a unified counter. Thus, the predicate function doesn't need to handle
> each case separately.
>
> Simplify the predicate function so that we just handle things differently
> between RV32/RV64 and S/HS mode.
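The unification works because mcounteren/scounteren/hcounteren already encode
CY, TM, IR and HPM3..31 as bits 0, 1, 2 and 3..31, which is exactly the
ctr_index computed from the CSR number. A small standalone sketch of that
mapping (assuming the standard bit layout; not the QEMU predicate itself):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* mcounteren bit layout per the privileged spec: CY=0, TM=1, IR=2, HPMn=n. */
#define COUNTEREN_CY (1u << 0)
#define COUNTEREN_TM (1u << 1)
#define COUNTEREN_IR (1u << 2)

/* ctr_index is csrno - CSR_CYCLE, so one mask test covers every counter. */
static bool counter_accessible(uint32_t counteren, int ctr_index)
{
    return counteren & (1u << ctr_index);
}

int main(void)
{
    uint32_t mcounteren = COUNTEREN_CY | COUNTEREN_IR | (1u << 3);

    printf("cycle       (bit 0): %d\n", counter_accessible(mcounteren, 0));
    printf("time        (bit 1): %d\n", counter_accessible(mcounteren, 1));
    printf("instret     (bit 2): %d\n", counter_accessible(mcounteren, 2));
    printf("hpmcounter3 (bit 3): %d\n", counter_accessible(mcounteren, 3));
    return 0;
}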
>
> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> Acked-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
>   target/riscv/csr.c | 112 +++++----------------------------------------
>   1 file changed, 11 insertions(+), 101 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 1233bfa0a726..57dbbf9b09a0 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>       CPUState *cs = env_cpu(env);
>       RISCVCPU *cpu = RISCV_CPU(cs);
>       int ctr_index;
> +    target_ulong ctr_mask;
>       int base_csrno = CSR_CYCLE;
>       bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>   
> @@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>           base_csrno += 0x80;
>       }
>       ctr_index = csrno - base_csrno;
> +    ctr_mask = BIT(ctr_index);
>   
>       if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
>           (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
>           goto skip_ext_pmu_check;
>       }
>   
> -    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
> +    if (!(cpu->pmu_avail_ctrs & ctr_mask)) {
>           /* No counter is enabled in PMU or the counter is out of range */
>           return RISCV_EXCP_ILLEGAL_INST;
>       }
>   
>   skip_ext_pmu_check:
>   
> -    if (env->priv == PRV_S) {
> -        switch (csrno) {
> -        case CSR_CYCLE:
> -            if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> -                return RISCV_EXCP_ILLEGAL_INST;
> -            }
> -            break;
> -        case CSR_TIME:
> -            if (!get_field(env->mcounteren, COUNTEREN_TM)) {
> -                return RISCV_EXCP_ILLEGAL_INST;
> -            }
> -            break;
> -        case CSR_INSTRET:
> -            if (!get_field(env->mcounteren, COUNTEREN_IR)) {
> -                return RISCV_EXCP_ILLEGAL_INST;
> -            }
> -            break;
> -        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> -            if (!get_field(env->mcounteren, 1 << ctr_index)) {
> -                return RISCV_EXCP_ILLEGAL_INST;
> -            }
> -            break;
> -        }
> -        if (rv32) {
> -            switch (csrno) {
> -            case CSR_CYCLEH:
> -                if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> -                    return RISCV_EXCP_ILLEGAL_INST;
> -                }
> -                break;
> -            case CSR_TIMEH:
> -                if (!get_field(env->mcounteren, COUNTEREN_TM)) {
> -                    return RISCV_EXCP_ILLEGAL_INST;
> -                }
> -                break;
> -            case CSR_INSTRETH:
> -                if (!get_field(env->mcounteren, COUNTEREN_IR)) {
> -                    return RISCV_EXCP_ILLEGAL_INST;
> -                }
> -                break;
> -            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> -                if (!get_field(env->mcounteren, 1 << ctr_index)) {
> -                    return RISCV_EXCP_ILLEGAL_INST;
> -                }
> -                break;
> -            }
> -        }
> +    if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
> +       ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
> +        return RISCV_EXCP_ILLEGAL_INST;
>       }
>   
>       if (riscv_cpu_virt_enabled(env)) {
> -        switch (csrno) {
> -        case CSR_CYCLE:
> -            if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> -                get_field(env->mcounteren, COUNTEREN_CY)) {
> -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> -            }
> -            break;
> -        case CSR_TIME:
> -            if (!get_field(env->hcounteren, COUNTEREN_TM) &&
> -                get_field(env->mcounteren, COUNTEREN_TM)) {
> -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> -            }
> -            break;
> -        case CSR_INSTRET:
> -            if (!get_field(env->hcounteren, COUNTEREN_IR) &&
> -                get_field(env->mcounteren, COUNTEREN_IR)) {
> -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> -            }
> -            break;
> -        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> -            if (!get_field(env->hcounteren, 1 << ctr_index) &&
> -                 get_field(env->mcounteren, 1 << ctr_index)) {
> -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> -            }
> -            break;
> -        }
> -        if (rv32) {
> -            switch (csrno) {
> -            case CSR_CYCLEH:
> -                if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> -                    get_field(env->mcounteren, COUNTEREN_CY)) {
> -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> -                }
> -                break;
> -            case CSR_TIMEH:
> -                if (!get_field(env->hcounteren, COUNTEREN_TM) &&
> -                    get_field(env->mcounteren, COUNTEREN_TM)) {
> -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> -                }
> -                break;
> -            case CSR_INSTRETH:
> -                if (!get_field(env->hcounteren, COUNTEREN_IR) &&
> -                    get_field(env->mcounteren, COUNTEREN_IR)) {
> -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> -                }
> -                break;
> -            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> -                if (!get_field(env->hcounteren, 1 << ctr_index) &&
> -                     get_field(env->mcounteren, 1 << ctr_index)) {
> -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> -                }
> -                break;
> -            }
> +        if (!get_field(env->mcounteren, ctr_mask)) {
> +            /* The bit must be set in mcounteren for HS mode access */
> +            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> +        } else if (!get_field(env->hcounteren, ctr_mask)) {
> +            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>           }

The logic is changed here. In the original logic,
RISCV_EXCP_VIRT_INSTRUCTION_FAULT is triggered only when

!get_field(env->hcounteren, 1 << ctr_index) && get_field(env->mcounteren, 1 << ctr_index)

In the new logic, RISCV_EXCP_VIRT_INSTRUCTION_FAULT is triggered when

!get_field(env->mcounteren, ctr_mask)

or when

!get_field(env->hcounteren, 1 << ctr_index) && get_field(env->mcounteren, 1 << ctr_index)
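Put as a truth table over the two per-counter enable bits, the behaviours
compare as below (standalone sketch of the virtualization-enabled path only;
not the QEMU function):

#include <stdbool.h>
#include <stdio.h>

/* Should the access raise RISCV_EXCP_VIRT_INSTRUCTION_FAULT? */
static bool old_virt_fault(bool mcounteren_bit, bool hcounteren_bit)
{
    /* Before the patch: fault only when M allows the counter but HS does not. */
    return !hcounteren_bit && mcounteren_bit;
}

static bool new_virt_fault(bool mcounteren_bit, bool hcounteren_bit)
{
    /* With the patch: fault whenever either enable bit is clear. */
    return !mcounteren_bit || !hcounteren_bit;
}

int main(void)
{
    /* The two versions differ only in the rows where mcounteren is clear. */
    for (int m = 0; m <= 1; m++) {
        for (int h = 0; h <= 1; h++) {
            printf("mcounteren=%d hcounteren=%d  old=%d new=%d\n",
                   m, h, old_virt_fault(m, h), new_virt_fault(m, h));
        }
    }
    return 0;
}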

Regards,

Weiwei Li

>       }
>   #endif



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v11 1/6] target/riscv: Add sscofpmf extension support
  2022-07-27  8:11   ` Weiwei Li
@ 2022-07-27 21:32     ` Atish Kumar Patra
  0 siblings, 0 replies; 20+ messages in thread
From: Atish Kumar Patra @ 2022-07-27 21:32 UTC (permalink / raw)
  To: Weiwei Li
  Cc: qemu-devel@nongnu.org Developers, Heiko Stuebner, Atish Patra,
	Alistair Francis, Bin Meng, Palmer Dabbelt, open list:RISC-V

[-- Attachment #1: Type: text/plain, Size: 38528 bytes --]

On Wed, Jul 27, 2022 at 1:11 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:

>
> On 2022/7/27 at 2:49 PM, Atish Patra wrote:
> > The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> > and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> > extension allows a perf-like tool to handle overflow interrupts and
> > provides filtering support. This patch provides a framework for
> > programmable counters to leverage the extension. As the extension doesn't
> > have any provision for an overflow bit for the fixed counters, the fixed
> > events can also be monitored using programmable counters. The underlying
> > counters for cycle and instruction counters are always running. Thus,
> > a separate timer device is programmed to handle the overflow.
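For the filtering part, the patch gates counting on per-privilege inhibit bits
in the upper half of mhpmevent. A standalone sketch of that check (bit
positions follow the patch; the enum values are illustrative and the VS/VU
hypervisor bits are omitted for brevity):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Inhibit bits in the upper half of mhpmevent, RV64 layout as in the patch. */
#define MHPMEVENT_BIT_MINH (1ULL << 62)
#define MHPMEVENT_BIT_SINH (1ULL << 61)
#define MHPMEVENT_BIT_UINH (1ULL << 60)

enum priv { PRV_U, PRV_S, PRV_M };   /* illustrative values only */

/* Counting is suppressed when the inhibit bit for the current mode is set. */
static bool count_inhibited(uint64_t mhpmevent, enum priv priv)
{
    switch (priv) {
    case PRV_M:
        return mhpmevent & MHPMEVENT_BIT_MINH;
    case PRV_S:
        return mhpmevent & MHPMEVENT_BIT_SINH;
    default:
        return mhpmevent & MHPMEVENT_BIT_UINH;
    }
}

int main(void)
{
    /* Inhibit M-mode and S-mode counting, i.e. count user-mode events only. */
    uint64_t evt = MHPMEVENT_BIT_MINH | MHPMEVENT_BIT_SINH;

    printf("inhibited in M: %d, S: %d, U: %d\n",
           count_inhibited(evt, PRV_M), count_inhibited(evt, PRV_S),
           count_inhibited(evt, PRV_U));
    return 0;
}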
> >
> > Tested-by: Heiko Stuebner <heiko@sntech.de>
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> >   target/riscv/cpu.c      |  11 ++
> >   target/riscv/cpu.h      |  25 +++
> >   target/riscv/cpu_bits.h |  55 +++++++
> >   target/riscv/csr.c      | 166 ++++++++++++++++++-
> >   target/riscv/machine.c  |   1 +
> >   target/riscv/pmu.c      | 357 +++++++++++++++++++++++++++++++++++++++-
> >   target/riscv/pmu.h      |   7 +
> >   7 files changed, 611 insertions(+), 11 deletions(-)
> >
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index 1bb3973806d2..c1d62b81a725 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -22,6 +22,7 @@
> >   #include "qemu/ctype.h"
> >   #include "qemu/log.h"
> >   #include "cpu.h"
> > +#include "pmu.h"
> >   #include "internals.h"
> >   #include "exec/exec-all.h"
> >   #include "qapi/error.h"
> > @@ -779,6 +780,15 @@ static void riscv_cpu_realize(DeviceState *dev,
> Error **errp)
> >           set_misa(env, env->misa_mxl, ext);
> >       }
> >
> > +#ifndef CONFIG_USER_ONLY
> > +    if (cpu->cfg.pmu_num) {
> > +        if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) &&
> cpu->cfg.ext_sscofpmf) {
> > +            cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> > +                                          riscv_pmu_timer_cb, cpu);
> > +        }
> > +     }
> > +#endif
> > +
> >       riscv_cpu_register_gdb_regs_for_features(cs);
> >
> >       qemu_init_vcpu(cs);
> > @@ -883,6 +893,7 @@ static Property riscv_cpu_extensions[] = {
> >       DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> >       DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> >       DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> > +    DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
> >       DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> >       DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> >       DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> > diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> > index 5c7acc055ac9..2222db193c3d 100644
> > --- a/target/riscv/cpu.h
> > +++ b/target/riscv/cpu.h
> > @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
> >       /* Snapshort value of a counter in RV32 */
> >       target_ulong mhpmcounterh_prev;
> >       bool started;
> > +    /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt
> trigger */
> > +    target_ulong irq_overflow_left;
> >   } PMUCTRState;
> >
> >   struct CPUArchState {
> > @@ -297,6 +299,9 @@ struct CPUArchState {
> >       /* PMU event selector configured values. First three are unused*/
> >       target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
> >
> > +    /* PMU event selector configured values for RV32*/
> > +    target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> > +
> >       target_ulong sscratch;
> >       target_ulong mscratch;
> >
> > @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
> >       bool ext_zve32f;
> >       bool ext_zve64f;
> >       bool ext_zmmul;
> > +    bool ext_sscofpmf;
> >       bool rvv_ta_all_1s;
> >
> >       uint32_t mvendorid;
> > @@ -479,6 +485,12 @@ struct ArchCPU {
> >
> >       /* Configuration Settings */
> >       RISCVCPUConfig cfg;
> > +
> > +    QEMUTimer *pmu_timer;
> > +    /* A bitmask of Available programmable counters */
> > +    uint32_t pmu_avail_ctrs;
> > +    /* Mapping of events to counters */
> > +    GHashTable *pmu_event_ctr_map;
> >   };
> >
> >   static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> > @@ -738,6 +750,19 @@ enum {
> >       CSR_TABLE_SIZE = 0x1000
> >   };
> >
> > +/**
> > + * The event id are encoded based on the encoding specified in the
> > + * SBI specification v0.3
> > + */
> > +
> > +enum riscv_pmu_event_idx {
> > +    RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> > +    RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> > +    RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> > +    RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> > +    RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> > +};
> > +
> >   /* CSR function table */
> >   extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
> >
> > diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> > index 6be5a9e9f046..b63c586be563 100644
> > --- a/target/riscv/cpu_bits.h
> > +++ b/target/riscv/cpu_bits.h
> > @@ -382,6 +382,37 @@
> >   #define CSR_MHPMEVENT29     0x33d
> >   #define CSR_MHPMEVENT30     0x33e
> >   #define CSR_MHPMEVENT31     0x33f
> > +
> > +#define CSR_MHPMEVENT3H     0x723
> > +#define CSR_MHPMEVENT4H     0x724
> > +#define CSR_MHPMEVENT5H     0x725
> > +#define CSR_MHPMEVENT6H     0x726
> > +#define CSR_MHPMEVENT7H     0x727
> > +#define CSR_MHPMEVENT8H     0x728
> > +#define CSR_MHPMEVENT9H     0x729
> > +#define CSR_MHPMEVENT10H    0x72a
> > +#define CSR_MHPMEVENT11H    0x72b
> > +#define CSR_MHPMEVENT12H    0x72c
> > +#define CSR_MHPMEVENT13H    0x72d
> > +#define CSR_MHPMEVENT14H    0x72e
> > +#define CSR_MHPMEVENT15H    0x72f
> > +#define CSR_MHPMEVENT16H    0x730
> > +#define CSR_MHPMEVENT17H    0x731
> > +#define CSR_MHPMEVENT18H    0x732
> > +#define CSR_MHPMEVENT19H    0x733
> > +#define CSR_MHPMEVENT20H    0x734
> > +#define CSR_MHPMEVENT21H    0x735
> > +#define CSR_MHPMEVENT22H    0x736
> > +#define CSR_MHPMEVENT23H    0x737
> > +#define CSR_MHPMEVENT24H    0x738
> > +#define CSR_MHPMEVENT25H    0x739
> > +#define CSR_MHPMEVENT26H    0x73a
> > +#define CSR_MHPMEVENT27H    0x73b
> > +#define CSR_MHPMEVENT28H    0x73c
> > +#define CSR_MHPMEVENT29H    0x73d
> > +#define CSR_MHPMEVENT30H    0x73e
> > +#define CSR_MHPMEVENT31H    0x73f
> > +
> >   #define CSR_MHPMCOUNTER3H   0xb83
> >   #define CSR_MHPMCOUNTER4H   0xb84
> >   #define CSR_MHPMCOUNTER5H   0xb85
> > @@ -443,6 +474,7 @@
> >   #define CSR_VSMTE           0x2c0
> >   #define CSR_VSPMMASK        0x2c1
> >   #define CSR_VSPMBASE        0x2c2
> > +#define CSR_SCOUNTOVF       0xda0
> >
> >   /* Crypto Extension */
> >   #define CSR_SEED            0x015
> > @@ -620,6 +652,7 @@ typedef enum RISCVException {
> >   #define IRQ_VS_EXT                         10
> >   #define IRQ_M_EXT                          11
> >   #define IRQ_S_GEXT                         12
> > +#define IRQ_PMU_OVF                        13
> >   #define IRQ_LOCAL_MAX                      16
> >   #define IRQ_LOCAL_GUEST_MAX                (TARGET_LONG_BITS - 1)
> >
> > @@ -637,11 +670,13 @@ typedef enum RISCVException {
> >   #define MIP_VSEIP                          (1 << IRQ_VS_EXT)
> >   #define MIP_MEIP                           (1 << IRQ_M_EXT)
> >   #define MIP_SGEIP                          (1 << IRQ_S_GEXT)
> > +#define MIP_LCOFIP                         (1 << IRQ_PMU_OVF)
> >
> >   /* sip masks */
> >   #define SIP_SSIP                           MIP_SSIP
> >   #define SIP_STIP                           MIP_STIP
> >   #define SIP_SEIP                           MIP_SEIP
> > +#define SIP_LCOFIP                         MIP_LCOFIP
> >
> >   /* MIE masks */
> >   #define MIE_SEIE                           (1 << IRQ_S_EXT)
> > @@ -795,4 +830,24 @@ typedef enum RISCVException {
> >   #define SEED_OPST_WAIT                   (0b01 << 30)
> >   #define SEED_OPST_ES16                   (0b10 << 30)
> >   #define SEED_OPST_DEAD                   (0b11 << 30)
> > +/* PMU related bits */
> > +#define MIE_LCOFIE                         (1 << IRQ_PMU_OVF)
> > +
> > +#define MHPMEVENT_BIT_OF                   BIT_ULL(63)
> > +#define MHPMEVENTH_BIT_OF                  BIT(31)
> > +#define MHPMEVENT_BIT_MINH                 BIT_ULL(62)
> > +#define MHPMEVENTH_BIT_MINH                BIT(30)
> > +#define MHPMEVENT_BIT_SINH                 BIT_ULL(61)
> > +#define MHPMEVENTH_BIT_SINH                BIT(29)
> > +#define MHPMEVENT_BIT_UINH                 BIT_ULL(60)
> > +#define MHPMEVENTH_BIT_UINH                BIT(28)
> > +#define MHPMEVENT_BIT_VSINH                BIT_ULL(59)
> > +#define MHPMEVENTH_BIT_VSINH               BIT(27)
> > +#define MHPMEVENT_BIT_VUINH                BIT_ULL(58)
> > +#define MHPMEVENTH_BIT_VUINH               BIT(26)
> > +
> > +#define MHPMEVENT_SSCOF_MASK               _ULL(0xFFFF000000000000)
> > +#define MHPMEVENT_IDX_MASK                 0xFFFFF
> > +#define MHPMEVENT_SSCOF_RESVD              16
> > +
> >   #endif
> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> > index 235f2a011e70..1233bfa0a726 100644
> > --- a/target/riscv/csr.c
> > +++ b/target/riscv/csr.c
> > @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >       CPUState *cs = env_cpu(env);
> >       RISCVCPU *cpu = RISCV_CPU(cs);
> >       int ctr_index;
> > -    int base_csrno = CSR_HPMCOUNTER3;
> > +    int base_csrno = CSR_CYCLE;
> >       bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
> >
> >       if (rv32 && csrno >= CSR_CYCLEH) {
> > @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >       }
> >       ctr_index = csrno - base_csrno;
> >
> > -    if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> > +    if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> > +        (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> > +        goto skip_ext_pmu_check;
> > +    }
> > +
> > +    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index))))
> {
> >           /* No counter is enabled in PMU or the counter is out of range
> */
> >           return RISCV_EXCP_ILLEGAL_INST;
> >       }
> >
> Maybe it's better to remove !cpu->cfg.pmu_num here, rather than in a later commit.
> > +skip_ext_pmu_check:
> > +
> >       if (env->priv == PRV_S) {
> >           switch (csrno) {
> >           case CSR_CYCLE:
> > @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >               }
> >               break;
> >           case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> > -            ctr_index = csrno - CSR_CYCLE;
> >               if (!get_field(env->mcounteren, 1 << ctr_index)) {
> >                   return RISCV_EXCP_ILLEGAL_INST;
> >               }
> > @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >                   }
> >                   break;
> >               case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> > -                ctr_index = csrno - CSR_CYCLEH;
> >                   if (!get_field(env->mcounteren, 1 << ctr_index)) {
> >                       return RISCV_EXCP_ILLEGAL_INST;
> >                   }
> > @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >               }
> >               break;
> >           case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> > -            ctr_index = csrno - CSR_CYCLE;
> >               if (!get_field(env->hcounteren, 1 << ctr_index) &&
> >                    get_field(env->mcounteren, 1 << ctr_index)) {
> >                   return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >                   }
> >                   break;
> >               case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> > -                ctr_index = csrno - CSR_CYCLEH;
> >                   if (!get_field(env->hcounteren, 1 << ctr_index) &&
> >                        get_field(env->mcounteren, 1 << ctr_index)) {
> >                       return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env,
> int csrno)
> >       return mctr(env, csrno);
> >   }
> >
> > +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> > +{
> > +    CPUState *cs = env_cpu(env);
> > +    RISCVCPU *cpu = RISCV_CPU(cs);
> > +
> > +    if (!cpu->cfg.ext_sscofpmf) {
> > +        return RISCV_EXCP_ILLEGAL_INST;
> > +    }
> > +
> > +    return RISCV_EXCP_NONE;
> > +}
> > +
> >   static RISCVException any(CPURISCVState *env, int csrno)
> >   {
> >       return RISCV_EXCP_NONE;
> > @@ -663,9 +678,39 @@ static int read_mhpmevent(CPURISCVState *env, int
> csrno, target_ulong *val)
> >   static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong
> val)
> >   {
> >       int evt_index = csrno - CSR_MCOUNTINHIBIT;
> > +    uint64_t mhpmevt_val = val;
> >
> >       env->mhpmevent_val[evt_index] = val;
> >
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        mhpmevt_val = mhpmevt_val |
> > +                      ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> > +    }
> > +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> > +
> > +    return RISCV_EXCP_NONE;
> > +}
> > +
> > +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong
> *val)
> > +{
> > +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> > +
> > +    *val = env->mhpmeventh_val[evt_index];
> > +
> > +    return RISCV_EXCP_NONE;
> > +}
> > +
> > +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong
> val)
> > +{
> > +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> > +    uint64_t mhpmevth_val = val;
> > +    uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> > +
> > +    mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> > +    env->mhpmeventh_val[evt_index] = val;
> > +
> > +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> > +
> >       return RISCV_EXCP_NONE;
> >   }
> >
> > @@ -673,12 +718,20 @@ static int write_mhpmcounter(CPURISCVState *env,
> int csrno, target_ulong val)
> >   {
> >       int ctr_idx = csrno - CSR_MCYCLE;
> >       PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +    uint64_t mhpmctr_val = val;
> >
> >       counter->mhpmcounter_val = val;
> >       if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> >           riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> >           counter->mhpmcounter_prev = get_ticks(false);
> > -    } else {
> > +        if (ctr_idx > 2) {
> > +            if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +                mhpmctr_val = mhpmctr_val |
> > +                              ((uint64_t)counter->mhpmcounterh_val <<
> 32);
> > +            }
> > +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> > +        }
> > +     } else {
> >           /* Other counters can keep incrementing from the given value */
> >           counter->mhpmcounter_prev = val;
> >       }
> > @@ -690,11 +743,17 @@ static int write_mhpmcounterh(CPURISCVState *env,
> int csrno, target_ulong val)
> >   {
> >       int ctr_idx = csrno - CSR_MCYCLEH;
> >       PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +    uint64_t mhpmctr_val = counter->mhpmcounter_val;
> > +    uint64_t mhpmctrh_val = val;
> >
> >       counter->mhpmcounterh_val = val;
> > +    mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
> >       if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> >           riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> >           counter->mhpmcounterh_prev = get_ticks(true);
> > +        if (ctr_idx > 2) {
> > +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> > +        }
> >       } else {
> >           counter->mhpmcounterh_prev = val;
> >       }
> > @@ -770,6 +829,32 @@ static int read_hpmcounterh(CPURISCVState *env, int
> csrno, target_ulong *val)
> >       return riscv_pmu_read_ctr(env, val, true, ctr_index);
> >   }
> >
> > +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong
> *val)
> > +{
> > +    int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
> > +    int i;
> > +    *val = 0;
> > +    target_ulong *mhpm_evt_val;
> > +    uint64_t of_bit_mask;
> > +
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        mhpm_evt_val = env->mhpmeventh_val;
> > +        of_bit_mask = MHPMEVENTH_BIT_OF;
> > +    } else {
> > +        mhpm_evt_val = env->mhpmevent_val;
> > +        of_bit_mask = MHPMEVENT_BIT_OF;
> > +    }
> > +
> > +    for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
> > +        if ((get_field(env->mcounteren, BIT(i))) &&
> > +            (mhpm_evt_val[i] & of_bit_mask)) {
> > +                    *val |= BIT(i);
> > +            }
> > +    }
> > +
> > +    return RISCV_EXCP_NONE;
> > +}
> > +
> >   static RISCVException read_time(CPURISCVState *env, int csrno,
> >                                   target_ulong *val)
> >   {
> > @@ -799,7 +884,8 @@ static RISCVException read_timeh(CPURISCVState *env,
> int csrno,
> >   /* Machine constants */
> >
> >   #define M_MODE_INTERRUPTS  ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> > -#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> > +#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP |
> \
> > +                                      MIP_LCOFIP))
>
> It's better to align with MIP_SSIP here.
>
> >   #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP |
> MIP_VSEIP))
> >   #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
> >
> > @@ -840,7 +926,8 @@ static const target_ulong vs_delegable_excps =
> DELEGABLE_EXCPS &
> >   static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE |
> SSTATUS_SPIE |
> >       SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS
> |
> >       SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> > -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP |
> MIP_UEIP;
> > +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP |
> MIP_UEIP |
> > +                                              SIP_LCOFIP;
> >   static const target_ulong hip_writable_mask = MIP_VSSIP;
> >   static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP |
> MIP_VSEIP;
> >   static const target_ulong vsip_writable_mask = MIP_VSSIP;
> > @@ -3861,6 +3948,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> >       [CSR_MHPMEVENT31]    = { "mhpmevent31",    any,    read_mhpmevent,
> >                                                          write_mhpmevent
> },
> >
> > +    [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
>
> The new lines have been updated to align with the last line in my
> previous patchset (already accepted).
>
> You can see it if you rebase onto riscv-to-apply.next. So it's better to
> align write_mhpmeventh with
>
> the ' " '. The same applies to the following new lines.
>
>
Got it. I have rebased it, but I aligned the '{' with the CSR_HPMCOUNTER3H
entries onwards. We should probably align CSR_MHPMEVENT3..CSR_MHPMEVENT31
accordingly as well.


> Regards,
>
> Weiwei Li
>
> > +    [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT10H]   = { "mhpmevent10h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT11H]   = { "mhpmevent11h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT12H]   = { "mhpmevent12h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT13H]   = { "mhpmevent13h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT14H]   = { "mhpmevent14h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT15H]   = { "mhpmevent15h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT16H]   = { "mhpmevent16h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT17H]   = { "mhpmevent17h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT18H]   = { "mhpmevent18h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT19H]   = { "mhpmevent19h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT20H]   = { "mhpmevent20h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT21H]   = { "mhpmevent21h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT22H]   = { "mhpmevent22h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT23H]   = { "mhpmevent23h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT24H]   = { "mhpmevent24h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT25H]   = { "mhpmevent25h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT26H]   = { "mhpmevent26h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT27H]   = { "mhpmevent27h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT28H]   = { "mhpmevent28h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT29H]   = { "mhpmevent29h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT30H]   = { "mhpmevent30h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +    [CSR_MHPMEVENT31H]   = { "mhpmevent31h",    sscofpmf,
> read_mhpmeventh,
> > +
>  write_mhpmeventh},
> > +
> >       [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,
> read_hpmcounterh },
> >       [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,
> read_hpmcounterh },
> >       [CSR_HPMCOUNTER5H]   = { "hpmcounter5h",   ctr32,
> read_hpmcounterh },
> > @@ -3949,5 +4095,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> >
> write_mhpmcounterh },
> >       [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32,
> read_hpmcounterh,
> >
> write_mhpmcounterh },
> > +    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
> > +
> >   #endif /* !CONFIG_USER_ONLY */
> >   };
> > diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> > index dc182ca81119..33ef9b8e9908 100644
> > --- a/target/riscv/machine.c
> > +++ b/target/riscv/machine.c
> > @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
> >           VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU,
> RV_MAX_MHPMCOUNTERS, 0,
> >                                vmstate_pmu_ctr_state, PMUCTRState),
> >           VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU,
> RV_MAX_MHPMEVENTS),
> > +        VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU,
> RV_MAX_MHPMEVENTS),
> >           VMSTATE_UINTTL(env.sscratch, RISCVCPU),
> >           VMSTATE_UINTTL(env.mscratch, RISCVCPU),
> >           VMSTATE_UINT64(env.mfromhost, RISCVCPU),
> > diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> > index 000fe8da45ef..34096941c0ce 100644
> > --- a/target/riscv/pmu.c
> > +++ b/target/riscv/pmu.c
> > @@ -19,14 +19,367 @@
> >   #include "qemu/osdep.h"
> >   #include "cpu.h"
> >   #include "pmu.h"
> > +#include "sysemu/cpu-timers.h"
> > +
> > +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> > +#define MAKE_32BIT_MASK(shift, length) \
> > +        (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> > +
> > +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> > +{
> > +    if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> > +        !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
> > +        return false;
> > +    } else {
> > +        return true;
> > +    }
> > +}
> > +
> > +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
> > +{
> > +    CPURISCVState *env = &cpu->env;
> > +
> > +    if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
> > +        !get_field(env->mcountinhibit, BIT(ctr_idx))) {
> > +        return true;
> > +    } else {
> > +        return false;
> > +    }
> > +}
> > +
> > +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
> > +{
> > +    CPURISCVState *env = &cpu->env;
> > +    target_ulong max_val = UINT32_MAX;
> > +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +    bool virt_on = riscv_cpu_virt_enabled(env);
> > +
> > +    /* Privilege mode filtering */
> > +    if ((env->priv == PRV_M &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
> > +        (env->priv == PRV_S && virt_on &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
> > +        (env->priv == PRV_U && virt_on &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
> > +        (env->priv == PRV_S && !virt_on &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
> > +        (env->priv == PRV_U && !virt_on &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
> > +        return 0;
> > +    }
> > +
> > +    /* Handle the overflow scenario */
> > +    if (counter->mhpmcounter_val == max_val) {
> > +        if (counter->mhpmcounterh_val == max_val) {
> > +            counter->mhpmcounter_val = 0;
> > +            counter->mhpmcounterh_val = 0;
> > +            /* Generate interrupt only if OF bit is clear */
> > +            if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
> > +                env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
> > +                riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> > +            }
> > +        } else {
> > +            counter->mhpmcounterh_val++;
> > +        }
> > +    } else {
> > +        counter->mhpmcounter_val++;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> > +{
> > +    CPURISCVState *env = &cpu->env;
> > +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +    uint64_t max_val = UINT64_MAX;
> > +    bool virt_on = riscv_cpu_virt_enabled(env);
> > +
> > +    /* Privilege mode filtering */
> > +    if ((env->priv == PRV_M &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
> > +        (env->priv == PRV_S && virt_on &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
> > +        (env->priv == PRV_U && virt_on &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
> > +        (env->priv == PRV_S && !virt_on &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
> > +        (env->priv == PRV_U && !virt_on &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
> > +        return 0;
> > +    }
> > +
> > +    /* Handle the overflow scenario */
> > +    if (counter->mhpmcounter_val == max_val) {
> > +        counter->mhpmcounter_val = 0;
> > +        /* Generate interrupt only if OF bit is clear */
> > +        if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
> > +            env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
> > +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> > +        }
> > +    } else {
> > +        counter->mhpmcounter_val++;
> > +    }
> > +    return 0;
> > +}
> > +
> > +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
> > +{
> > +    uint32_t ctr_idx;
> > +    int ret;
> > +    CPURISCVState *env = &cpu->env;
> > +    gpointer value;
> > +
> > +    value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                                GUINT_TO_POINTER(event_idx));
> > +    if (!value) {
> > +        return -1;
> > +    }
> > +
> > +    ctr_idx = GPOINTER_TO_UINT(value);
> > +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
> > +        get_field(env->mcountinhibit, BIT(ctr_idx))) {
> > +        return -1;
> > +    }
> > +
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
> > +    } else {
> > +        ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
> > +    }
> > +
> > +    return ret;
> > +}
> >
> >   bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> >                                           uint32_t target_ctr)
> >   {
> > -    return (target_ctr == 0) ? true : false;
> > +    RISCVCPU *cpu;
> > +    uint32_t event_idx;
> > +    uint32_t ctr_idx;
> > +
> > +    /* Fixed instret counter */
> > +    if (target_ctr == 2) {
> > +        return true;
> > +    }
> > +
> > +    cpu = RISCV_CPU(env_cpu(env));
> > +    event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
> > +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                               GUINT_TO_POINTER(event_idx)));
> > +    if (!ctr_idx) {
> > +        return false;
> > +    }
> > +
> > +    return target_ctr == ctr_idx ? true : false;
> >   }
> >
> >   bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
> >   {
> > -    return (target_ctr == 2) ? true : false;
> > +    RISCVCPU *cpu;
> > +    uint32_t event_idx;
> > +    uint32_t ctr_idx;
> > +
> > +    /* Fixed mcycle counter */
> > +    if (target_ctr == 0) {
> > +        return true;
> > +    }
> > +
> > +    cpu = RISCV_CPU(env_cpu(env));
> > +    event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> > +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                               GUINT_TO_POINTER(event_idx)));
> > +
> > +    /* Counter zero is not used for event_ctr_map */
> > +    if (!ctr_idx) {
> > +        return false;
> > +    }
> > +
> > +    return (target_ctr == ctr_idx) ? true : false;
> > +}
> > +
> > +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> > +                                     gpointer udata)
> > +{
> > +    return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
> > +}
> > +
> > +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> > +{
> > +    int64_t ret = 0;
> > +
> > +    if (icount_enabled()) {
> > +        ret = icount_to_ns(value);
> > +    } else {
> > +        ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> > +    }
> > +
> > +    return ret;
> > +}
> > +
> > +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> > +                               uint32_t ctr_idx)
> > +{
> > +    uint32_t event_idx;
> > +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> > +
> > +    if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> > +        return -1;
> > +    }
> > +
> > +    /**
> > +     * Expected mhpmevent value is zero for reset case. Remove the current
> > +     * mapping.
> > +     */
> > +    if (!value) {
> > +        g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> > +                                    pmu_remove_event_map,
> > +                                    GUINT_TO_POINTER(ctr_idx));
> > +        return 0;
> > +    }
> > +
> > +    event_idx = value & MHPMEVENT_IDX_MASK;
> > +    if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                            GUINT_TO_POINTER(event_idx))) {
> > +        return 0;
> > +    }
> > +
> > +    switch (event_idx) {
> > +    case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> > +    case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> > +    case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> > +    case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> > +    case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> > +        break;
> > +    default:
> > +        /* We don't support any raw events right now */
> > +        return -1;
> > +    }
> > +    g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
> > +                        GUINT_TO_POINTER(ctr_idx));
> > +
> > +    return 0;
> > +}
> > +
> > +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> > +                                  enum riscv_pmu_event_idx evt_idx)
> > +{
> > +    uint32_t ctr_idx;
> > +    CPURISCVState *env = &cpu->env;
> > +    PMUCTRState *counter;
> > +    target_ulong *mhpmevent_val;
> > +    uint64_t of_bit_mask;
> > +    int64_t irq_trigger_at;
> > +
> > +    if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> > +        evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> > +        return;
> > +    }
> > +
> > +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                               GUINT_TO_POINTER(evt_idx)));
> > +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> > +        return;
> > +    }
> > +
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> > +        of_bit_mask = MHPMEVENTH_BIT_OF;
> > +     } else {
> > +        mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> > +        of_bit_mask = MHPMEVENT_BIT_OF;
> > +    }
> > +
> > +    counter = &env->pmu_ctrs[ctr_idx];
> > +    if (counter->irq_overflow_left > 0) {
> > +        irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> > +                        counter->irq_overflow_left;
> > +        timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> > +        counter->irq_overflow_left = 0;
> > +        return;
> > +    }
> > +
> > +    if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> > +        /* Generate interrupt only if OF bit is clear */
> > +        if (!(*mhpmevent_val & of_bit_mask)) {
> > +            *mhpmevent_val |= of_bit_mask;
> > +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> > +        }
> > +    }
> > +}
> > +
> > +/* Timer callback for instret and cycle counter overflow */
> > +void riscv_pmu_timer_cb(void *priv)
> > +{
> > +    RISCVCPU *cpu = priv;
> > +
> > +    /* Timer event was triggered only for these events */
> > +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> > +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> > +}
> > +
> > +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
> > +{
> > +    uint64_t overflow_delta, overflow_at;
> > +    int64_t overflow_ns, overflow_left = 0;
> > +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> > +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +
> > +    if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
> > +        return -1;
> > +    }
> > +
> > +    if (value) {
> > +        overflow_delta = UINT64_MAX - value + 1;
> > +    } else {
> > +        overflow_delta = UINT64_MAX;
> > +    }
> > +
> > +    /**
> > +     * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
> > +     * Compute the leftover and save it so that it can be reprogrammed again
> > +     * when timer expires.
> > +     */
> > +    if (overflow_delta > INT64_MAX) {
> > +        overflow_left = overflow_delta - INT64_MAX;
> > +    }
> > +
> > +    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> > +        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> > +        overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> > +        overflow_left = pmu_icount_ticks_to_ns(overflow_left) ;
> > +    } else {
> > +        return -1;
> > +    }
> > +    overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
> > +
> > +    if (overflow_at > INT64_MAX) {
> > +        overflow_left += overflow_at - INT64_MAX;
> > +        counter->irq_overflow_left = overflow_left;
> > +        overflow_at = INT64_MAX;
> > +    }
> > +    timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> > +
> > +    return 0;
> > +}
> > +
> > +
> > +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> > +{
> > +    if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> > +        return -1;
> > +    }
> > +
> > +    cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
> > +    if (!cpu->pmu_event_ctr_map) {
> > +        /* PMU support can not be enabled */
> > +        qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> > +        cpu->cfg.pmu_num = 0;
> > +        return -1;
> > +    }
> > +
> > +    /* Create a bitmask of available programmable counters */
> > +    cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> > +
> > +    return 0;
> >   }
> > diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> > index 58a5bc3a4089..036653627f78 100644
> > --- a/target/riscv/pmu.h
> > +++ b/target/riscv/pmu.h
> > @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> >                                           uint32_t target_ctr);
> >   bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
> >                                     uint32_t target_ctr);
> > +void riscv_pmu_timer_cb(void *priv);
> > +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> > +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> > +                               uint32_t ctr_idx);
> > +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> > +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> > +                          uint32_t ctr_idx);
>
>



* Re: [PATCH v11 2/6] target/riscv: Simplify counter predicate function
  2022-07-27  8:34   ` Weiwei Li
@ 2022-07-27 21:40     ` Atish Kumar Patra
  2022-07-28  0:56       ` Weiwei Li
  0 siblings, 1 reply; 20+ messages in thread
From: Atish Kumar Patra @ 2022-07-27 21:40 UTC (permalink / raw)
  To: Weiwei Li
  Cc: qemu-devel@nongnu.org Developers, Bin Meng, Alistair Francis,
	Bin Meng, Palmer Dabbelt, open list:RISC-V


On Wed, Jul 27, 2022 at 1:35 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:

>
> On 2022/7/27 2:49 PM, Atish Patra wrote:
> > All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
> > as a unified counter. Thus, the predicate function doesn't need to handle
> > each case separately.
> >
> > Simplify the predicate function so that we just handle things differently
> > between RV32/RV64 and S/HS mode.
> >
> > Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> > Acked-by: Alistair Francis <alistair.francis@wdc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> >   target/riscv/csr.c | 112 +++++----------------------------------------
> >   1 file changed, 11 insertions(+), 101 deletions(-)
> >
> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> > index 1233bfa0a726..57dbbf9b09a0 100644
> > --- a/target/riscv/csr.c
> > +++ b/target/riscv/csr.c
> > @@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> >       CPUState *cs = env_cpu(env);
> >       RISCVCPU *cpu = RISCV_CPU(cs);
> >       int ctr_index;
> > +    target_ulong ctr_mask;
> >       int base_csrno = CSR_CYCLE;
> >       bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
> >
> > @@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> >           base_csrno += 0x80;
> >       }
> >       ctr_index = csrno - base_csrno;
> > +    ctr_mask = BIT(ctr_index);
> >
> >       if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> >           (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> >           goto skip_ext_pmu_check;
> >       }
> >
> > -    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
> > +    if (!(cpu->pmu_avail_ctrs & ctr_mask)) {
> >           /* No counter is enabled in PMU or the counter is out of range */
> >           return RISCV_EXCP_ILLEGAL_INST;
> >       }
> >
> >   skip_ext_pmu_check:
> >
> > -    if (env->priv == PRV_S) {
> > -        switch (csrno) {
> > -        case CSR_CYCLE:
> > -            if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> > -                return RISCV_EXCP_ILLEGAL_INST;
> > -            }
> > -            break;
> > -        case CSR_TIME:
> > -            if (!get_field(env->mcounteren, COUNTEREN_TM)) {
> > -                return RISCV_EXCP_ILLEGAL_INST;
> > -            }
> > -            break;
> > -        case CSR_INSTRET:
> > -            if (!get_field(env->mcounteren, COUNTEREN_IR)) {
> > -                return RISCV_EXCP_ILLEGAL_INST;
> > -            }
> > -            break;
> > -        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> > -            if (!get_field(env->mcounteren, 1 << ctr_index)) {
> > -                return RISCV_EXCP_ILLEGAL_INST;
> > -            }
> > -            break;
> > -        }
> > -        if (rv32) {
> > -            switch (csrno) {
> > -            case CSR_CYCLEH:
> > -                if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> > -                    return RISCV_EXCP_ILLEGAL_INST;
> > -                }
> > -                break;
> > -            case CSR_TIMEH:
> > -                if (!get_field(env->mcounteren, COUNTEREN_TM)) {
> > -                    return RISCV_EXCP_ILLEGAL_INST;
> > -                }
> > -                break;
> > -            case CSR_INSTRETH:
> > -                if (!get_field(env->mcounteren, COUNTEREN_IR)) {
> > -                    return RISCV_EXCP_ILLEGAL_INST;
> > -                }
> > -                break;
> > -            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> > -                if (!get_field(env->mcounteren, 1 << ctr_index)) {
> > -                    return RISCV_EXCP_ILLEGAL_INST;
> > -                }
> > -                break;
> > -            }
> > -        }
> > +    if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
> > +       ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
> > +        return RISCV_EXCP_ILLEGAL_INST;
> >       }
> >
> >       if (riscv_cpu_virt_enabled(env)) {
> > -        switch (csrno) {
> > -        case CSR_CYCLE:
> > -            if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> > -                get_field(env->mcounteren, COUNTEREN_CY)) {
> > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > -            }
> > -            break;
> > -        case CSR_TIME:
> > -            if (!get_field(env->hcounteren, COUNTEREN_TM) &&
> > -                get_field(env->mcounteren, COUNTEREN_TM)) {
> > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > -            }
> > -            break;
> > -        case CSR_INSTRET:
> > -            if (!get_field(env->hcounteren, COUNTEREN_IR) &&
> > -                get_field(env->mcounteren, COUNTEREN_IR)) {
> > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > -            }
> > -            break;
> > -        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> > -            if (!get_field(env->hcounteren, 1 << ctr_index) &&
> > -                 get_field(env->mcounteren, 1 << ctr_index)) {
> > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > -            }
> > -            break;
> > -        }
> > -        if (rv32) {
> > -            switch (csrno) {
> > -            case CSR_CYCLEH:
> > -                if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> > -                    get_field(env->mcounteren, COUNTEREN_CY)) {
> > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > -                }
> > -                break;
> > -            case CSR_TIMEH:
> > -                if (!get_field(env->hcounteren, COUNTEREN_TM) &&
> > -                    get_field(env->mcounteren, COUNTEREN_TM)) {
> > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > -                }
> > -                break;
> > -            case CSR_INSTRETH:
> > -                if (!get_field(env->hcounteren, COUNTEREN_IR) &&
> > -                    get_field(env->mcounteren, COUNTEREN_IR)) {
> > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > -                }
> > -                break;
> > -            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> > -                if (!get_field(env->hcounteren, 1 << ctr_index) &&
> > -                     get_field(env->mcounteren, 1 << ctr_index)) {
> > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > -                }
> > -                break;
> > -            }
> > +        if (!get_field(env->mcounteren, ctr_mask)) {
> > +            /* The bit must be set in mcountern for HS mode access */
> > +            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > +        } else if (!get_field(env->hcounteren, ctr_mask)) {
> > +            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> >           }
>
> The logic is changed here. In the original logic,
> RISCV_EXCP_VIRT_INSTRUCTION_FAULT is triggered when
>
> !get_field(env->hcounteren, 1 << ctr_index) && get_field(env->mcounteren,
> 1 << ctr_index)
>
> In the new logic, RISCV_EXCP_VIRT_INSTRUCTION_FAULT is triggered when
> !get_field(env->mcounteren, ctr_mask)
>
> or !get_field(env->hcounteren, 1 << ctr_index) &&
> get_field(env->mcounteren, 1 << ctr_index)
>
>
Yes. It's just an optimization where we can break early just by checking
mcounteren. Do you see any issue with it?
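
For reference, the two checks being compared can be written out side by side.
This is only a sketch for the discussion (plain C, with uint64_t standing in
for target_ulong and made-up helper names old_virt_fault/new_virt_fault), not
code from the patch:

-------------
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t target_ulong;   /* stand-in for QEMU's target_ulong */

/* v10 behaviour: virtual instruction fault only when the bit is clear in
 * hcounteren but set in mcounteren. */
static bool old_virt_fault(target_ulong mcounteren, target_ulong hcounteren,
                           target_ulong ctr_mask)
{
    return !(hcounteren & ctr_mask) && (mcounteren & ctr_mask);
}

/* v11 behaviour: fault as soon as the mcounteren bit is clear, otherwise
 * when the hcounteren bit is clear. */
static bool new_virt_fault(target_ulong mcounteren, target_ulong hcounteren,
                           target_ulong ctr_mask)
{
    return !(mcounteren & ctr_mask) || !(hcounteren & ctr_mask);
}

int main(void)
{
    target_ulong ctr_mask = 1u << 3;   /* hpmcounter3 */

    /* The case the two disagree on: mcounteren bit clear. */
    printf("old=%d new=%d\n",
           (int)old_virt_fault(0, 0, ctr_mask),
           (int)new_virt_fault(0, 0, ctr_mask));
    return 0;
}
-------------

The divergence is exactly the mcounteren-bit-clear case: the old check falls
through while the new one raises the virtual instruction fault, which is the
point raised in the follow-up below.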


> Regards,
>
> Weiwei Li
>
> >       }
> >   #endif
>
>



* Re: [PATCH v11 5/6] target/riscv: Update the privilege field for sscofpmf CSRs
  2022-07-27  8:26   ` Weiwei Li
@ 2022-07-27 22:50     ` Atish Kumar Patra
  0 siblings, 0 replies; 20+ messages in thread
From: Atish Kumar Patra @ 2022-07-27 22:50 UTC (permalink / raw)
  To: Weiwei Li
  Cc: qemu-devel@nongnu.org Developers, Alistair Francis, Bin Meng,
	Palmer Dabbelt, open list:RISC-V


On Wed, Jul 27, 2022 at 1:27 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:

>
> On 2022/7/27 2:49 PM, Atish Patra wrote:
> > The sscofpmf extension was ratified as a part of priv spec v1.12.
> > Mark the csr_ops accordingly.
> >
> > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> >   target/riscv/csr.c | 90 ++++++++++++++++++++++++++++++----------------
> >   1 file changed, 60 insertions(+), 30 deletions(-)
> >
> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> > index 57dbbf9b09a0..ec6d7f022ad5 100644
> > --- a/target/riscv/csr.c
> > +++ b/target/riscv/csr.c
> > @@ -3859,63 +3859,92 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> >                                                          write_mhpmevent },
> >
> >       [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
>
> Similar to the first commit, it's better to align with the first element
> "mhpmevent3h". Otherwise,
>
>
Fixed it. Thanks for the review.


> Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
>
> Regards,
>
> Weiwei Li
>
> >       [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT10H]   = { "mhpmevent10h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT11H]   = { "mhpmevent11h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT12H]   = { "mhpmevent12h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT13H]   = { "mhpmevent13h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT14H]   = { "mhpmevent14h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT15H]   = { "mhpmevent15h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT16H]   = { "mhpmevent16h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT17H]   = { "mhpmevent17h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT18H]   = { "mhpmevent18h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT19H]   = { "mhpmevent19h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT20H]   = { "mhpmevent20h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT21H]   = { "mhpmevent21h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT22H]   = { "mhpmevent22h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT23H]   = { "mhpmevent23h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT24H]   = { "mhpmevent24h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT25H]   = { "mhpmevent25h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT26H]   = { "mhpmevent26h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT27H]   = { "mhpmevent27h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT28H]   = { "mhpmevent28h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT29H]   = { "mhpmevent29h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT30H]   = { "mhpmevent30h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >       [CSR_MHPMEVENT31H]   = { "mhpmevent31h",    sscofpmf,  read_mhpmeventh,
> > -                                                       write_mhpmeventh},
> > +                                                       write_mhpmeventh,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >
> >       [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,  read_hpmcounterh },
> >       [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,  read_hpmcounterh },
> > @@ -4005,7 +4034,8 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> >                                                          write_mhpmcounterh },
> >       [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
> >                                                          write_mhpmcounterh },
> > -    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
> > +    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf,
> > +                                         .min_priv_ver = PRIV_VERSION_1_12_0 },
> >
> >   #endif /* !CONFIG_USER_ONLY */
> >   };
>
>



* Re: [PATCH v11 2/6] target/riscv: Simplify counter predicate function
  2022-07-27 21:40     ` Atish Kumar Patra
@ 2022-07-28  0:56       ` Weiwei Li
  2022-08-02 21:02         ` Atish Kumar Patra
  0 siblings, 1 reply; 20+ messages in thread
From: Weiwei Li @ 2022-07-28  0:56 UTC (permalink / raw)
  To: Atish Kumar Patra
  Cc: qemu-devel@nongnu.org Developers, Bin Meng, Alistair Francis,
	Bin Meng, Palmer Dabbelt, open list:RISC-V



On 2022/7/28 5:40 AM, Atish Kumar Patra wrote:
>
>
> On Wed, Jul 27, 2022 at 1:35 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>
>
>     On 2022/7/27 2:49 PM, Atish Patra wrote:
>     > All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
>     > as a unified counter. Thus, the predicate function doesn't need to handle
>     > each case separately.
>     >
>     > Simplify the predicate function so that we just handle things differently
>     > between RV32/RV64 and S/HS mode.
>     >
>     > Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
>     > Acked-by: Alistair Francis <alistair.francis@wdc.com>
>     > Signed-off-by: Atish Patra <atishp@rivosinc.com>
>     > ---
>     >   target/riscv/csr.c | 112 +++++----------------------------------------
>     >   1 file changed, 11 insertions(+), 101 deletions(-)
>     >
>     > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
>     > index 1233bfa0a726..57dbbf9b09a0 100644
>     > --- a/target/riscv/csr.c
>     > +++ b/target/riscv/csr.c
>     > @@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>     >       CPUState *cs = env_cpu(env);
>     >       RISCVCPU *cpu = RISCV_CPU(cs);
>     >       int ctr_index;
>     > +    target_ulong ctr_mask;
>     >       int base_csrno = CSR_CYCLE;
>     >       bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>     >
>     > @@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>     >           base_csrno += 0x80;
>     >       }
>     >       ctr_index = csrno - base_csrno;
>     > +    ctr_mask = BIT(ctr_index);
>     >
>     >       if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
>     >           (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
>     >           goto skip_ext_pmu_check;
>     >       }
>     >
>     > -    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
>     > +    if (!(cpu->pmu_avail_ctrs & ctr_mask)) {
>     >           /* No counter is enabled in PMU or the counter is out of range */
>     >           return RISCV_EXCP_ILLEGAL_INST;
>     >       }
>     >
>     >   skip_ext_pmu_check:
>     >
>     > -    if (env->priv == PRV_S) {
>     > -        switch (csrno) {
>     > -        case CSR_CYCLE:
>     > -            if (!get_field(env->mcounteren, COUNTEREN_CY)) {
>     > -                return RISCV_EXCP_ILLEGAL_INST;
>     > -            }
>     > -            break;
>     > -        case CSR_TIME:
>     > -            if (!get_field(env->mcounteren, COUNTEREN_TM)) {
>     > -                return RISCV_EXCP_ILLEGAL_INST;
>     > -            }
>     > -            break;
>     > -        case CSR_INSTRET:
>     > -            if (!get_field(env->mcounteren, COUNTEREN_IR)) {
>     > -                return RISCV_EXCP_ILLEGAL_INST;
>     > -            }
>     > -            break;
>     > -        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
>     > -            if (!get_field(env->mcounteren, 1 << ctr_index)) {
>     > -                return RISCV_EXCP_ILLEGAL_INST;
>     > -            }
>     > -            break;
>     > -        }
>     > -        if (rv32) {
>     > -            switch (csrno) {
>     > -            case CSR_CYCLEH:
>     > -                if (!get_field(env->mcounteren, COUNTEREN_CY)) {
>     > -                    return RISCV_EXCP_ILLEGAL_INST;
>     > -                }
>     > -                break;
>     > -            case CSR_TIMEH:
>     > -                if (!get_field(env->mcounteren, COUNTEREN_TM)) {
>     > -                    return RISCV_EXCP_ILLEGAL_INST;
>     > -                }
>     > -                break;
>     > -            case CSR_INSTRETH:
>     > -                if (!get_field(env->mcounteren, COUNTEREN_IR)) {
>     > -                    return RISCV_EXCP_ILLEGAL_INST;
>     > -                }
>     > -                break;
>     > -            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
>     > -                if (!get_field(env->mcounteren, 1 << ctr_index)) {
>     > -                    return RISCV_EXCP_ILLEGAL_INST;
>     > -                }
>     > -                break;
>     > -            }
>     > -        }
>     > +    if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
>     > +       ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
>     > +        return RISCV_EXCP_ILLEGAL_INST;
>     >       }
>     >
>     >       if (riscv_cpu_virt_enabled(env)) {
>     > -        switch (csrno) {
>     > -        case CSR_CYCLE:
>     > -            if (!get_field(env->hcounteren, COUNTEREN_CY) &&
>     > -                get_field(env->mcounteren, COUNTEREN_CY)) {
>     > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > -            }
>     > -            break;
>     > -        case CSR_TIME:
>     > -            if (!get_field(env->hcounteren, COUNTEREN_TM) &&
>     > -                get_field(env->mcounteren, COUNTEREN_TM)) {
>     > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > -            }
>     > -            break;
>     > -        case CSR_INSTRET:
>     > -            if (!get_field(env->hcounteren, COUNTEREN_IR) &&
>     > -                get_field(env->mcounteren, COUNTEREN_IR)) {
>     > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > -            }
>     > -            break;
>     > -        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
>     > -            if (!get_field(env->hcounteren, 1 << ctr_index) &&
>     > -                 get_field(env->mcounteren, 1 << ctr_index)) {
>     > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > -            }
>     > -            break;
>     > -        }
>     > -        if (rv32) {
>     > -            switch (csrno) {
>     > -            case CSR_CYCLEH:
>     > -                if (!get_field(env->hcounteren, COUNTEREN_CY) &&
>     > -                    get_field(env->mcounteren, COUNTEREN_CY)) {
>     > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > -                }
>     > -                break;
>     > -            case CSR_TIMEH:
>     > -                if (!get_field(env->hcounteren, COUNTEREN_TM) &&
>     > -                    get_field(env->mcounteren, COUNTEREN_TM)) {
>     > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > -                }
>     > -                break;
>     > -            case CSR_INSTRETH:
>     > -                if (!get_field(env->hcounteren, COUNTEREN_IR) &&
>     > -                    get_field(env->mcounteren, COUNTEREN_IR)) {
>     > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > -                }
>     > -                break;
>     > -            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
>     > -                if (!get_field(env->hcounteren, 1 << ctr_index) &&
>     > -                     get_field(env->mcounteren, 1 << ctr_index)) {
>     > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > -                }
>     > -                break;
>     > -            }
>     > +        if (!get_field(env->mcounteren, ctr_mask)) {
>     > +            /* The bit must be set in mcountern for HS mode access */
>     > +            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     > +        } else if (!get_field(env->hcounteren, ctr_mask)) {
>     > +            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>     >           }
>
>     The logic is changed here. In the original logic,
>     RISCV_EXCP_VIRT_INSTRUCTION_FAULT is triggered when
>
>     !get_field(env->hcounteren, 1 << ctr_index) &&
>     get_field(env->mcounteren, 1 << ctr_index)
>
>     In the new logic, RISCV_EXCP_VIRT_INSTRUCTION_FAULT is triggered
>     when !get_field(env->mcounteren, ctr_mask)
>
>     or !get_field(env->hcounteren, 1 << ctr_index) &&
>     get_field(env->mcounteren, 1 << ctr_index)
>
>
> Yes. It's just an optimization where we can break early just by
> checking mcounteren. Do you see any issue with it?

Section 8.6.1 of the RISC-V privileged spec lists the cases (including
the xcounteren-related ones) that raise a virtual instruction exception.
However, all of the xcounteren-related cases have a common condition:

         "the same bit in mcounteren is 1".

So this optimization does not seem correct.
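
Read that way, the virtual instruction fault only comes into play once the
mcounteren bit is known to be 1; when it is 0 the access should take the
ordinary illegal-instruction path instead. One possible ordering, sketched as
standalone C with made-up names (ctr_check_sketch, EXCP_*) instead of QEMU's
types; a discussion aid under that reading of the spec, not the fix that was
merged:

-------------
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the exception codes used in csr.c. */
enum ctr_excp {
    EXCP_NONE,
    EXCP_ILLEGAL_INST,            /* RISCV_EXCP_ILLEGAL_INST */
    EXCP_VIRT_INSTRUCTION_FAULT,  /* RISCV_EXCP_VIRT_INSTRUCTION_FAULT */
};

/*
 * Keeps the ctr_mask simplification but raises the virtual instruction
 * fault only when the corresponding mcounteren bit is already 1.
 * Assumes the access comes from a mode below M; M-mode skips these checks.
 */
static enum ctr_excp ctr_check_sketch(bool virt_enabled, bool u_mode,
                                      uint64_t mcounteren, uint64_t hcounteren,
                                      uint64_t scounteren, uint64_t ctr_mask)
{
    if (!(mcounteren & ctr_mask)) {
        /* Not delegated by M-mode at all: plain illegal instruction. */
        return EXCP_ILLEGAL_INST;
    }
    if (virt_enabled && !(hcounteren & ctr_mask)) {
        /* mcounteren bit is 1 but hcounteren bit is 0: virtual fault. */
        return EXCP_VIRT_INSTRUCTION_FAULT;
    }
    if (u_mode && !(scounteren & ctr_mask)) {
        /* VU-mode faults virtually, plain U-mode takes an illegal insn. */
        return virt_enabled ? EXCP_VIRT_INSTRUCTION_FAULT : EXCP_ILLEGAL_INST;
    }
    return EXCP_NONE;
}
-------------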

Regards,

Weiwei Li

>     Regards,
>
>     Weiwei Li
>
>     >       }
>     >   #endif
>



* Re: [PATCH v11 1/6] target/riscv: Add sscofpmf extension support
  2022-07-27  6:49 ` [PATCH v11 1/6] target/riscv: Add sscofpmf extension support Atish Patra
  2022-07-27  8:11   ` Weiwei Li
@ 2022-07-31  4:10   ` Rahul Pathak
  2022-07-31  4:34     ` Atish Kumar Patra
  1 sibling, 1 reply; 20+ messages in thread
From: Rahul Pathak @ 2022-07-31  4:10 UTC (permalink / raw)
  To: Atish Patra
  Cc: qemu-devel@nongnu.org Developers, Heiko Stuebner, Atish Patra,
	Alistair Francis, Bin Meng, Palmer Dabbelt, open list:RISC-V

Hi Atish,

I tried your riscv_pmu_v12 branch but I couldn't get it working without the
change below - without it, QEMU does not export sscofpmf in the CPU DT nodes.

-------------
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 2e0dd6ddb5..2b36830755 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -108,6 +108,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
     ISA_EXT_DATA_ENTRY(svinval, true, PRIV_VERSION_1_12_0, ext_svinval),
     ISA_EXT_DATA_ENTRY(svnapot, true, PRIV_VERSION_1_12_0, ext_svnapot),
     ISA_EXT_DATA_ENTRY(svpbmt, true, PRIV_VERSION_1_12_0, ext_svpbmt),
+    ISA_EXT_DATA_ENTRY(sscofpmf, true, PRIV_VERSION_1_12_0, ext_sscofpmf),
 };

 static bool isa_ext_is_enabled(RISCVCPU *cpu,
------------------

Thanks
Rahul
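
For anyone else hitting the same symptom: the isa_edata_arr entry added above
is what gets the extension name appended to the riscv,isa string exposed in
the DT, so without it the guest never learns about sscofpmf even though the
CSRs themselves are implemented. A toy, standalone illustration of that
table-to-string mapping (not QEMU's actual string-building code; the names
here are made up):

-------------
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-in for an isa_edata_arr entry: extension name plus whether the
 * current CPU configuration enables it. */
struct ext_entry {
    const char *name;
    bool enabled;
};

int main(void)
{
    const struct ext_entry exts[] = {
        { "svinval",  false },
        { "sscofpmf", true  },   /* the entry missing before the fix above */
    };
    char isa[128] = "rv64imafdc";

    for (size_t i = 0; i < sizeof(exts) / sizeof(exts[0]); i++) {
        if (exts[i].enabled) {
            strcat(isa, "_");
            strcat(isa, exts[i].name);
        }
    }
    /* Prints riscv,isa = "rv64imafdc_sscofpmf"; drop the table entry and the
     * suffix silently disappears, which is the symptom described above. */
    printf("riscv,isa = \"%s\"\n", isa);
    return 0;
}
-------------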

On Wed, Jul 27, 2022 at 12:26 PM Atish Patra <atishp@rivosinc.com> wrote:
>
> The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> extension allows a perf-like tool to handle overflow interrupts and
> filtering support. This patch provides a framework for programmable
> counters to leverage the extension. As the extension doesn't have any
> provision for an overflow bit for the fixed counters, the fixed events
> can also be monitored using programmable counters. The underlying
> cycle and instruction counters are always running. Thus,
> a separate timer device is programmed to handle the overflow.
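
(To make the overflow handling concrete, here is a rough worked example of the
arithmetic riscv_pmu_setup_timer below performs, assuming the 1 GHz
RISCV_TIMEBASE_FREQ from pmu.c and icount disabled; the numbers are
illustrative only.)

-------------
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TIMEBASE_FREQ 1000000000ULL   /* RISCV_TIMEBASE_FREQ in pmu.c */
#define NS_PER_SEC    1000000000ULL

int main(void)
{
    /* Counter programmed close to wrap-around: 65536 ticks to overflow. */
    uint64_t ctr_val = UINT64_MAX - 65535;
    uint64_t overflow_delta = UINT64_MAX - ctr_val + 1;
    uint64_t overflow_ns = overflow_delta * (NS_PER_SEC / TIMEBASE_FREQ);

    /* With a 1 GHz timebase one tick is 1 ns, so the timer is armed about
     * 65.5 us into the future and the LCOF interrupt is raised when it
     * expires. */
    printf("%" PRIu64 " ticks -> %" PRIu64 " ns\n", overflow_delta, overflow_ns);
    return 0;
}
-------------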
>
> Tested-by: Heiko Stuebner <heiko@sntech.de>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
>  target/riscv/cpu.c      |  11 ++
>  target/riscv/cpu.h      |  25 +++
>  target/riscv/cpu_bits.h |  55 +++++++
>  target/riscv/csr.c      | 166 ++++++++++++++++++-
>  target/riscv/machine.c  |   1 +
>  target/riscv/pmu.c      | 357 +++++++++++++++++++++++++++++++++++++++-
>  target/riscv/pmu.h      |   7 +
>  7 files changed, 611 insertions(+), 11 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 1bb3973806d2..c1d62b81a725 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -22,6 +22,7 @@
>  #include "qemu/ctype.h"
>  #include "qemu/log.h"
>  #include "cpu.h"
> +#include "pmu.h"
>  #include "internals.h"
>  #include "exec/exec-all.h"
>  #include "qapi/error.h"
> @@ -779,6 +780,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
>          set_misa(env, env->misa_mxl, ext);
>      }
>
> +#ifndef CONFIG_USER_ONLY
> +    if (cpu->cfg.pmu_num) {
> +        if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
> +            cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> +                                          riscv_pmu_timer_cb, cpu);
> +        }
> +     }
> +#endif
> +
>      riscv_cpu_register_gdb_regs_for_features(cs);
>
>      qemu_init_vcpu(cs);
> @@ -883,6 +893,7 @@ static Property riscv_cpu_extensions[] = {
>      DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
>      DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
>      DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> +    DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
>      DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
>      DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
>      DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 5c7acc055ac9..2222db193c3d 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
>      /* Snapshort value of a counter in RV32 */
>      target_ulong mhpmcounterh_prev;
>      bool started;
> +    /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
> +    target_ulong irq_overflow_left;
>  } PMUCTRState;
>
>  struct CPUArchState {
> @@ -297,6 +299,9 @@ struct CPUArchState {
>      /* PMU event selector configured values. First three are unused*/
>      target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
>
> +    /* PMU event selector configured values for RV32*/
> +    target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> +
>      target_ulong sscratch;
>      target_ulong mscratch;
>
> @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
>      bool ext_zve32f;
>      bool ext_zve64f;
>      bool ext_zmmul;
> +    bool ext_sscofpmf;
>      bool rvv_ta_all_1s;
>
>      uint32_t mvendorid;
> @@ -479,6 +485,12 @@ struct ArchCPU {
>
>      /* Configuration Settings */
>      RISCVCPUConfig cfg;
> +
> +    QEMUTimer *pmu_timer;
> +    /* A bitmask of Available programmable counters */
> +    uint32_t pmu_avail_ctrs;
> +    /* Mapping of events to counters */
> +    GHashTable *pmu_event_ctr_map;
>  };
>
>  static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> @@ -738,6 +750,19 @@ enum {
>      CSR_TABLE_SIZE = 0x1000
>  };
>
> +/**
> + * The event id are encoded based on the encoding specified in the
> + * SBI specification v0.3
> + */
> +
> +enum riscv_pmu_event_idx {
> +    RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> +    RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> +    RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> +    RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> +    RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> +};
> +
>  /* CSR function table */
>  extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
>
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index 6be5a9e9f046..b63c586be563 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -382,6 +382,37 @@
>  #define CSR_MHPMEVENT29     0x33d
>  #define CSR_MHPMEVENT30     0x33e
>  #define CSR_MHPMEVENT31     0x33f
> +
> +#define CSR_MHPMEVENT3H     0x723
> +#define CSR_MHPMEVENT4H     0x724
> +#define CSR_MHPMEVENT5H     0x725
> +#define CSR_MHPMEVENT6H     0x726
> +#define CSR_MHPMEVENT7H     0x727
> +#define CSR_MHPMEVENT8H     0x728
> +#define CSR_MHPMEVENT9H     0x729
> +#define CSR_MHPMEVENT10H    0x72a
> +#define CSR_MHPMEVENT11H    0x72b
> +#define CSR_MHPMEVENT12H    0x72c
> +#define CSR_MHPMEVENT13H    0x72d
> +#define CSR_MHPMEVENT14H    0x72e
> +#define CSR_MHPMEVENT15H    0x72f
> +#define CSR_MHPMEVENT16H    0x730
> +#define CSR_MHPMEVENT17H    0x731
> +#define CSR_MHPMEVENT18H    0x732
> +#define CSR_MHPMEVENT19H    0x733
> +#define CSR_MHPMEVENT20H    0x734
> +#define CSR_MHPMEVENT21H    0x735
> +#define CSR_MHPMEVENT22H    0x736
> +#define CSR_MHPMEVENT23H    0x737
> +#define CSR_MHPMEVENT24H    0x738
> +#define CSR_MHPMEVENT25H    0x739
> +#define CSR_MHPMEVENT26H    0x73a
> +#define CSR_MHPMEVENT27H    0x73b
> +#define CSR_MHPMEVENT28H    0x73c
> +#define CSR_MHPMEVENT29H    0x73d
> +#define CSR_MHPMEVENT30H    0x73e
> +#define CSR_MHPMEVENT31H    0x73f
> +
>  #define CSR_MHPMCOUNTER3H   0xb83
>  #define CSR_MHPMCOUNTER4H   0xb84
>  #define CSR_MHPMCOUNTER5H   0xb85
> @@ -443,6 +474,7 @@
>  #define CSR_VSMTE           0x2c0
>  #define CSR_VSPMMASK        0x2c1
>  #define CSR_VSPMBASE        0x2c2
> +#define CSR_SCOUNTOVF       0xda0
>
>  /* Crypto Extension */
>  #define CSR_SEED            0x015
> @@ -620,6 +652,7 @@ typedef enum RISCVException {
>  #define IRQ_VS_EXT                         10
>  #define IRQ_M_EXT                          11
>  #define IRQ_S_GEXT                         12
> +#define IRQ_PMU_OVF                        13
>  #define IRQ_LOCAL_MAX                      16
>  #define IRQ_LOCAL_GUEST_MAX                (TARGET_LONG_BITS - 1)
>
> @@ -637,11 +670,13 @@ typedef enum RISCVException {
>  #define MIP_VSEIP                          (1 << IRQ_VS_EXT)
>  #define MIP_MEIP                           (1 << IRQ_M_EXT)
>  #define MIP_SGEIP                          (1 << IRQ_S_GEXT)
> +#define MIP_LCOFIP                         (1 << IRQ_PMU_OVF)
>
>  /* sip masks */
>  #define SIP_SSIP                           MIP_SSIP
>  #define SIP_STIP                           MIP_STIP
>  #define SIP_SEIP                           MIP_SEIP
> +#define SIP_LCOFIP                         MIP_LCOFIP
>
>  /* MIE masks */
>  #define MIE_SEIE                           (1 << IRQ_S_EXT)
> @@ -795,4 +830,24 @@ typedef enum RISCVException {
>  #define SEED_OPST_WAIT                   (0b01 << 30)
>  #define SEED_OPST_ES16                   (0b10 << 30)
>  #define SEED_OPST_DEAD                   (0b11 << 30)
> +/* PMU related bits */
> +#define MIE_LCOFIE                         (1 << IRQ_PMU_OVF)
> +
> +#define MHPMEVENT_BIT_OF                   BIT_ULL(63)
> +#define MHPMEVENTH_BIT_OF                  BIT(31)
> +#define MHPMEVENT_BIT_MINH                 BIT_ULL(62)
> +#define MHPMEVENTH_BIT_MINH                BIT(30)
> +#define MHPMEVENT_BIT_SINH                 BIT_ULL(61)
> +#define MHPMEVENTH_BIT_SINH                BIT(29)
> +#define MHPMEVENT_BIT_UINH                 BIT_ULL(60)
> +#define MHPMEVENTH_BIT_UINH                BIT(28)
> +#define MHPMEVENT_BIT_VSINH                BIT_ULL(59)
> +#define MHPMEVENTH_BIT_VSINH               BIT(27)
> +#define MHPMEVENT_BIT_VUINH                BIT_ULL(58)
> +#define MHPMEVENTH_BIT_VUINH               BIT(26)
> +
> +#define MHPMEVENT_SSCOF_MASK               _ULL(0xFFFF000000000000)
> +#define MHPMEVENT_IDX_MASK                 0xFFFFF
> +#define MHPMEVENT_SSCOF_RESVD              16
> +
>  #endif
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 235f2a011e70..1233bfa0a726 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>      CPUState *cs = env_cpu(env);
>      RISCVCPU *cpu = RISCV_CPU(cs);
>      int ctr_index;
> -    int base_csrno = CSR_HPMCOUNTER3;
> +    int base_csrno = CSR_CYCLE;
>      bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>
>      if (rv32 && csrno >= CSR_CYCLEH) {
> @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>      }
>      ctr_index = csrno - base_csrno;
>
> -    if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> +    if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> +        (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> +        goto skip_ext_pmu_check;
> +    }
> +
> +    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
>          /* No counter is enabled in PMU or the counter is out of range */
>          return RISCV_EXCP_ILLEGAL_INST;
>      }
>
> +skip_ext_pmu_check:
> +
>      if (env->priv == PRV_S) {
>          switch (csrno) {
>          case CSR_CYCLE:
> @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>              }
>              break;
>          case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> -            ctr_index = csrno - CSR_CYCLE;
>              if (!get_field(env->mcounteren, 1 << ctr_index)) {
>                  return RISCV_EXCP_ILLEGAL_INST;
>              }
> @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>                  }
>                  break;
>              case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> -                ctr_index = csrno - CSR_CYCLEH;
>                  if (!get_field(env->mcounteren, 1 << ctr_index)) {
>                      return RISCV_EXCP_ILLEGAL_INST;
>                  }
> @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>              }
>              break;
>          case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> -            ctr_index = csrno - CSR_CYCLE;
>              if (!get_field(env->hcounteren, 1 << ctr_index) &&
>                   get_field(env->mcounteren, 1 << ctr_index)) {
>                  return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>                  }
>                  break;
>              case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> -                ctr_index = csrno - CSR_CYCLEH;
>                  if (!get_field(env->hcounteren, 1 << ctr_index) &&
>                       get_field(env->mcounteren, 1 << ctr_index)) {
>                      return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
>      return mctr(env, csrno);
>  }
>
> +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> +{
> +    CPUState *cs = env_cpu(env);
> +    RISCVCPU *cpu = RISCV_CPU(cs);
> +
> +    if (!cpu->cfg.ext_sscofpmf) {
> +        return RISCV_EXCP_ILLEGAL_INST;
> +    }
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
>  static RISCVException any(CPURISCVState *env, int csrno)
>  {
>      return RISCV_EXCP_NONE;
> @@ -663,9 +678,39 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
>  static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
>  {
>      int evt_index = csrno - CSR_MCOUNTINHIBIT;
> +    uint64_t mhpmevt_val = val;
>
>      env->mhpmevent_val[evt_index] = val;
>
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpmevt_val = mhpmevt_val |
> +                      ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> +    }
> +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
> +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +
> +    *val = env->mhpmeventh_val[evt_index];
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
> +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
> +{
> +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +    uint64_t mhpmevth_val = val;
> +    uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> +
> +    mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> +    env->mhpmeventh_val[evt_index] = val;
> +
> +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
>      return RISCV_EXCP_NONE;
>  }
>
> @@ -673,12 +718,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
>  {
>      int ctr_idx = csrno - CSR_MCYCLE;
>      PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t mhpmctr_val = val;
>
>      counter->mhpmcounter_val = val;
>      if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>          riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>          counter->mhpmcounter_prev = get_ticks(false);
> -    } else {
> +        if (ctr_idx > 2) {
> +            if (riscv_cpu_mxl(env) == MXL_RV32) {
> +                mhpmctr_val = mhpmctr_val |
> +                              ((uint64_t)counter->mhpmcounterh_val << 32);
> +            }
> +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> +        }
> +     } else {
>          /* Other counters can keep incrementing from the given value */
>          counter->mhpmcounter_prev = val;
>      }
> @@ -690,11 +743,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
>  {
>      int ctr_idx = csrno - CSR_MCYCLEH;
>      PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t mhpmctr_val = counter->mhpmcounter_val;
> +    uint64_t mhpmctrh_val = val;
>
>      counter->mhpmcounterh_val = val;
> +    mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
>      if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>          riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>          counter->mhpmcounterh_prev = get_ticks(true);
> +        if (ctr_idx > 2) {
> +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> +        }
>      } else {
>          counter->mhpmcounterh_prev = val;
>      }
> @@ -770,6 +829,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
>      return riscv_pmu_read_ctr(env, val, true, ctr_index);
>  }
>
> +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> +    int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
> +    int i;
> +    *val = 0;
> +    target_ulong *mhpm_evt_val;
> +    uint64_t of_bit_mask;
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpm_evt_val = env->mhpmeventh_val;
> +        of_bit_mask = MHPMEVENTH_BIT_OF;
> +    } else {
> +        mhpm_evt_val = env->mhpmevent_val;
> +        of_bit_mask = MHPMEVENT_BIT_OF;
> +    }
> +
> +    for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
> +        if ((get_field(env->mcounteren, BIT(i))) &&
> +            (mhpm_evt_val[i] & of_bit_mask)) {
> +                    *val |= BIT(i);
> +            }
> +    }
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
>  static RISCVException read_time(CPURISCVState *env, int csrno,
>                                  target_ulong *val)
>  {
> @@ -799,7 +884,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
>  /* Machine constants */
>
>  #define M_MODE_INTERRUPTS  ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> -#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> +#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
> +                                      MIP_LCOFIP))
>  #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
>  #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
>
> @@ -840,7 +926,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
>  static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
>      SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
>      SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
> +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
> +                                              SIP_LCOFIP;
>  static const target_ulong hip_writable_mask = MIP_VSSIP;
>  static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
>  static const target_ulong vsip_writable_mask = MIP_VSSIP;
> @@ -3861,6 +3948,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>      [CSR_MHPMEVENT31]    = { "mhpmevent31",    any,    read_mhpmevent,
>                                                         write_mhpmevent },
>
> +    [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT10H]   = { "mhpmevent10h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT11H]   = { "mhpmevent11h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT12H]   = { "mhpmevent12h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT13H]   = { "mhpmevent13h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT14H]   = { "mhpmevent14h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT15H]   = { "mhpmevent15h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT16H]   = { "mhpmevent16h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT17H]   = { "mhpmevent17h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT18H]   = { "mhpmevent18h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT19H]   = { "mhpmevent19h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT20H]   = { "mhpmevent20h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT21H]   = { "mhpmevent21h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT22H]   = { "mhpmevent22h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT23H]   = { "mhpmevent23h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT24H]   = { "mhpmevent24h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT25H]   = { "mhpmevent25h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT26H]   = { "mhpmevent26h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT27H]   = { "mhpmevent27h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT28H]   = { "mhpmevent28h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT29H]   = { "mhpmevent29h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT30H]   = { "mhpmevent30h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +    [CSR_MHPMEVENT31H]   = { "mhpmevent31h",    sscofpmf,  read_mhpmeventh,
> +                                                       write_mhpmeventh},
> +
>      [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,  read_hpmcounterh },
>      [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,  read_hpmcounterh },
>      [CSR_HPMCOUNTER5H]   = { "hpmcounter5h",   ctr32,  read_hpmcounterh },
> @@ -3949,5 +4095,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>                                                         write_mhpmcounterh },
>      [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32,  read_hpmcounterh,
>                                                         write_mhpmcounterh },
> +    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
> +
>  #endif /* !CONFIG_USER_ONLY */
>  };
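
(Not part of the patch, just a rough guest-side sketch of how S-mode software
could consume the new CSR; the helper below and its names are mine, and the
0xda0 encoding is taken from the CSR_SCOUNTOVF define added above. It needs a
RISC-V toolchain and a CPU that actually implements sscofpmf.)

#include <stdint.h>

/* Read scountovf (CSR 0xda0) to learn which hpmcounters have the OF bit set. */
static inline unsigned long read_scountovf(void)
{
    unsigned long ovf;

    __asm__ volatile("csrr %0, 0xda0" : "=r"(ovf));
    return ovf;
}

/* Bit n corresponds to hpmcounter n, so bit 3 is the first programmable counter. */
static inline int hpmcounter3_overflowed(void)
{
    return (read_scountovf() >> 3) & 1;
}
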
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index dc182ca81119..33ef9b8e9908 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
>          VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
>                               vmstate_pmu_ctr_state, PMUCTRState),
>          VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> +        VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
>          VMSTATE_UINTTL(env.sscratch, RISCVCPU),
>          VMSTATE_UINTTL(env.mscratch, RISCVCPU),
>          VMSTATE_UINT64(env.mfromhost, RISCVCPU),
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 000fe8da45ef..34096941c0ce 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -19,14 +19,367 @@
>  #include "qemu/osdep.h"
>  #include "cpu.h"
>  #include "pmu.h"
> +#include "sysemu/cpu-timers.h"
> +
> +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1 GHz */
> +#define MAKE_32BIT_MASK(shift, length) \
> +        (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> +
> +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> +        !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
> +        return false;
> +    } else {
> +        return true;
> +    }
> +}
> +
> +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +
> +    if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
> +        !get_field(env->mcountinhibit, BIT(ctr_idx))) {
> +        return true;
> +    } else {
> +        return false;
> +    }
> +}
> +
> +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +    target_ulong max_val = UINT32_MAX;
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    bool virt_on = riscv_cpu_virt_enabled(env);
> +
> +    /* Privilege mode filtering */
> +    if ((env->priv == PRV_M &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
> +        (env->priv == PRV_S && virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
> +        (env->priv == PRV_U && virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
> +        (env->priv == PRV_S && !virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
> +        (env->priv == PRV_U && !virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
> +        return 0;
> +    }
> +
> +    /* Handle the overflow scenario */
> +    if (counter->mhpmcounter_val == max_val) {
> +        if (counter->mhpmcounterh_val == max_val) {
> +            counter->mhpmcounter_val = 0;
> +            counter->mhpmcounterh_val = 0;
> +            /* Generate interrupt only if OF bit is clear */
> +            if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
> +                env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
> +                riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +            }
> +        } else {
> +            counter->mhpmcounterh_val++;
> +        }
> +    } else {
> +        counter->mhpmcounter_val++;
> +    }
> +
> +    return 0;
> +}
> +
> +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t max_val = UINT64_MAX;
> +    bool virt_on = riscv_cpu_virt_enabled(env);
> +
> +    /* Privilege mode filtering */
> +    if ((env->priv == PRV_M &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
> +        (env->priv == PRV_S && virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
> +        (env->priv == PRV_U && virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
> +        (env->priv == PRV_S && !virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
> +        (env->priv == PRV_U && !virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
> +        return 0;
> +    }
> +
> +    /* Handle the overflow scenario */
> +    if (counter->mhpmcounter_val == max_val) {
> +        counter->mhpmcounter_val = 0;
> +        /* Generate interrupt only if OF bit is clear */
> +        if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
> +            env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
> +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +        }
> +    } else {
> +        counter->mhpmcounter_val++;
> +    }
> +    return 0;
> +}
> +
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
> +{
> +    uint32_t ctr_idx;
> +    int ret;
> +    CPURISCVState *env = &cpu->env;
> +    gpointer value;
> +
> +    value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                                GUINT_TO_POINTER(event_idx));
> +    if (!value) {
> +        return -1;
> +    }
> +
> +    ctr_idx = GPOINTER_TO_UINT(value);
> +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
> +        get_field(env->mcountinhibit, BIT(ctr_idx))) {
> +        return -1;
> +    }
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
> +    } else {
> +        ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
> +    }
> +
> +    return ret;
> +}
>
>  bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
>                                          uint32_t target_ctr)
>  {
> -    return (target_ctr == 0) ? true : false;
> +    RISCVCPU *cpu;
> +    uint32_t event_idx;
> +    uint32_t ctr_idx;
> +
> +    /* Fixed instret counter */
> +    if (target_ctr == 2) {
> +        return true;
> +    }
> +
> +    cpu = RISCV_CPU(env_cpu(env));
> +    event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                               GUINT_TO_POINTER(event_idx)));
> +    if (!ctr_idx) {
> +        return false;
> +    }
> +
> +    return target_ctr == ctr_idx ? true : false;
>  }
>
>  bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
>  {
> -    return (target_ctr == 2) ? true : false;
> +    RISCVCPU *cpu;
> +    uint32_t event_idx;
> +    uint32_t ctr_idx;
> +
> +    /* Fixed mcycle counter */
> +    if (target_ctr == 0) {
> +        return true;
> +    }
> +
> +    cpu = RISCV_CPU(env_cpu(env));
> +    event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                               GUINT_TO_POINTER(event_idx)));
> +
> +    /* Counter zero is not used for event_ctr_map */
> +    if (!ctr_idx) {
> +        return false;
> +    }
> +
> +    return (target_ctr == ctr_idx) ? true : false;
> +}
> +
> +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> +                                     gpointer udata)
> +{
> +    return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
> +}
> +
> +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> +{
> +    int64_t ret = 0;
> +
> +    if (icount_enabled()) {
> +        ret = icount_to_ns(value);
> +    } else {
> +        ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> +    }
> +
> +    return ret;
> +}
> +
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> +                               uint32_t ctr_idx)
> +{
> +    uint32_t event_idx;
> +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +
> +    if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> +        return -1;
> +    }
> +
> +    /**
> +     * Expected mhpmevent value is zero for reset case. Remove the current
> +     * mapping.
> +     */
> +    if (!value) {
> +        g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> +                                    pmu_remove_event_map,
> +                                    GUINT_TO_POINTER(ctr_idx));
> +        return 0;
> +    }
> +
> +    event_idx = value & MHPMEVENT_IDX_MASK;
> +    if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                            GUINT_TO_POINTER(event_idx))) {
> +        return 0;
> +    }
> +
> +    switch (event_idx) {
> +    case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> +    case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> +    case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> +    case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> +    case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> +        break;
> +    default:
> +        /* We don't support any raw events right now */
> +        return -1;
> +    }
> +    g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
> +                        GUINT_TO_POINTER(ctr_idx));
> +
> +    return 0;
> +}
> +
> +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> +                                  enum riscv_pmu_event_idx evt_idx)
> +{
> +    uint32_t ctr_idx;
> +    CPURISCVState *env = &cpu->env;
> +    PMUCTRState *counter;
> +    target_ulong *mhpmevent_val;
> +    uint64_t of_bit_mask;
> +    int64_t irq_trigger_at;
> +
> +    if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> +        evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> +        return;
> +    }
> +
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                               GUINT_TO_POINTER(evt_idx)));
> +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> +        return;
> +    }
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> +        of_bit_mask = MHPMEVENTH_BIT_OF;
> +    } else {
> +        mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> +        of_bit_mask = MHPMEVENT_BIT_OF;
> +    }
> +
> +    counter = &env->pmu_ctrs[ctr_idx];
> +    if (counter->irq_overflow_left > 0) {
> +        irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> +                        counter->irq_overflow_left;
> +        timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> +        counter->irq_overflow_left = 0;
> +        return;
> +    }
> +
> +    if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> +        /* Generate interrupt only if OF bit is clear */
> +        if (!(*mhpmevent_val & of_bit_mask)) {
> +            *mhpmevent_val |= of_bit_mask;
> +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +        }
> +    }
> +}
> +
> +/* Timer callback for instret and cycle counter overflow */
> +void riscv_pmu_timer_cb(void *priv)
> +{
> +    RISCVCPU *cpu = priv;
> +
> +    /* Timer event was triggered only for these events */
> +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> +}
> +
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
> +{
> +    uint64_t overflow_delta, overflow_at;
> +    int64_t overflow_ns, overflow_left = 0;
> +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +
> +    if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
> +        return -1;
> +    }
> +
> +    if (value) {
> +        overflow_delta = UINT64_MAX - value + 1;
> +    } else {
> +        overflow_delta = UINT64_MAX;
> +    }
> +
> +    /**
> +     * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
> +     * Compute the leftover and save it so that it can be reprogrammed again
> +     * when timer expires.
> +     */
> +    if (overflow_delta > INT64_MAX) {
> +        overflow_left = overflow_delta - INT64_MAX;
> +    }
> +
> +    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> +        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> +        overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> +        overflow_left = pmu_icount_ticks_to_ns(overflow_left);
> +    } else {
> +        return -1;
> +    }
> +    overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
> +
> +    if (overflow_at > INT64_MAX) {
> +        overflow_left += overflow_at - INT64_MAX;
> +        counter->irq_overflow_left = overflow_left;
> +        overflow_at = INT64_MAX;
> +    }
> +    timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> +
> +    return 0;
> +}
> +
> +
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> +{
> +    if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> +        return -1;
> +    }
> +
> +    cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
> +    if (!cpu->pmu_event_ctr_map) {
> +        /* PMU support can not be enabled */
> +        qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> +        cpu->cfg.pmu_num = 0;
> +        return -1;
> +    }
> +
> +    /* Create a bitmask of available programmable counters */
> +    cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> +
> +    return 0;
>  }
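
(Again not from the patch: a small standalone check of the timer arithmetic in
riscv_pmu_setup_timer() above, assuming the 1 GHz RISCV_TIMEBASE_FREQ defined
in this file and icount disabled, so one counter tick maps to one nanosecond.)

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NANOSECONDS_PER_SECOND 1000000000LL
#define RISCV_TIMEBASE_FREQ    1000000000   /* same value as in the patch */

int main(void)
{
    /* Guest arms the counter 1,000,000 events below the wrap-around point. */
    uint64_t value = UINT64_MAX - 1000000 + 1;
    uint64_t overflow_delta = UINT64_MAX - value + 1;   /* as computed above */
    int64_t overflow_ns = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) *
                          (int64_t)overflow_delta;      /* non-icount path */

    printf("delta = %" PRIu64 " ticks -> timer fires in %" PRId64 " ns\n",
           overflow_delta, overflow_ns);
    return 0;
}

So a write that leaves 1,000,000 events of headroom arms the QEMU timer about
1 ms out; the overflow_left bookkeeping only matters for deltas larger than
INT64_MAX.
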
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 58a5bc3a4089..036653627f78 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
>                                          uint32_t target_ctr);
>  bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
>                                    uint32_t target_ctr);
> +void riscv_pmu_timer_cb(void *priv);
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> +                               uint32_t ctr_idx);
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> +                          uint32_t ctr_idx);
> --
> 2.25.1
>
>


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v11 1/6] target/riscv: Add sscofpmf extension support
  2022-07-31  4:10   ` Rahul Pathak
@ 2022-07-31  4:34     ` Atish Kumar Patra
  0 siblings, 0 replies; 20+ messages in thread
From: Atish Kumar Patra @ 2022-07-31  4:34 UTC (permalink / raw)
  To: Rahul Pathak
  Cc: Alistair Francis, Atish Patra, Bin Meng, Heiko Stuebner,
	Palmer Dabbelt, open list:RISC-V,
	qemu-devel@nongnu.org Developers

[-- Attachment #1: Type: text/plain, Size: 37731 bytes --]

On Sat, Jul 30, 2022 at 9:11 PM Rahul Pathak <rpathakmailbox@gmail.com>
wrote:

> Hi Atish,
>
> I tried your riscv_pmu_v12 branch, but I couldn't make it work
> without the change below; without it, QEMU was not exporting
> sscofpmf in the CPU DT nodes.
>
> -------------
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 2e0dd6ddb5..2b36830755 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -108,6 +108,7 @@ static const struct isa_ext_data isa_edata_arr[] = {
>      ISA_EXT_DATA_ENTRY(svinval, true, PRIV_VERSION_1_12_0, ext_svinval),
>      ISA_EXT_DATA_ENTRY(svnapot, true, PRIV_VERSION_1_12_0, ext_svnapot),
>      ISA_EXT_DATA_ENTRY(svpbmt, true, PRIV_VERSION_1_12_0, ext_svpbmt),
> +    ISA_EXT_DATA_ENTRY(sscofpmf, true, PRIV_VERSION_1_12_0, ext_sscofpmf),
>  };
>
>  static bool isa_ext_is_enabled(RISCVCPU *cpu,
> ------------------
>

Yeah. This happened while resolving the conflict. My bad. I will send v13
along with other suggested fixes.
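
For anyone else who trips over this: the "riscv,isa" string that ends up in the
DT is built by walking the isa_edata_arr table, so a cfg flag that never gets
an ISA_EXT_DATA_ENTRY is silently dropped from the ISA string even when the
property itself is enabled. Below is a tiny self-contained model of that lookup
(simplified stand-in types, not the real QEMU structures), and with the entry
in place booting with something like "-cpu rv64,sscofpmf=true" should surface
the extension in the guest DT (property name taken from the DEFINE_PROP_BOOL in
the patch; the exact command-line spelling is my assumption).

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct cfg { bool ext_svinval; bool ext_sscofpmf; };   /* stand-in for RISCVCPUConfig */
struct ext { const char *name; size_t cfg_offset; };   /* stand-in for isa_ext_data */

static const struct ext table[] = {
    { "svinval",  offsetof(struct cfg, ext_svinval)  },
    { "sscofpmf", offsetof(struct cfg, ext_sscofpmf) },   /* the missing entry */
};

int main(void)
{
    struct cfg cfg = { .ext_svinval = true, .ext_sscofpmf = true };
    char isa[64] = "rv64imafdc";

    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
        /* Only extensions listed in the table can ever be appended. */
        bool enabled = *(const bool *)((const char *)&cfg + table[i].cfg_offset);
        if (enabled) {
            snprintf(isa + strlen(isa), sizeof(isa) - strlen(isa),
                     "_%s", table[i].name);
        }
    }
    printf("%s\n", isa);   /* prints rv64imafdc_svinval_sscofpmf */
    return 0;
}
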


> Thanks
> Rahul
>
> On Wed, Jul 27, 2022 at 12:26 PM Atish Patra <atishp@rivosinc.com> wrote:
> >
> > The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> > and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> > extension allows perf to handle overflow interrupts and adds privilege
> > mode filtering support. This patch provides a framework for programmable
> > counters to leverage the extension. As the extension doesn't provide an
> > overflow bit for the fixed counters, the fixed events can also be
> > monitored using programmable counters. The underlying counters for
> > cycle and instruction are always running. Thus, a separate timer device
> > is programmed to handle the overflow.
> >
> > Tested-by: Heiko Stuebner <heiko@sntech.de>
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> >  target/riscv/cpu.c      |  11 ++
> >  target/riscv/cpu.h      |  25 +++
> >  target/riscv/cpu_bits.h |  55 +++++++
> >  target/riscv/csr.c      | 166 ++++++++++++++++++-
> >  target/riscv/machine.c  |   1 +
> >  target/riscv/pmu.c      | 357 +++++++++++++++++++++++++++++++++++++++-
> >  target/riscv/pmu.h      |   7 +
> >  7 files changed, 611 insertions(+), 11 deletions(-)
> >
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index 1bb3973806d2..c1d62b81a725 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -22,6 +22,7 @@
> >  #include "qemu/ctype.h"
> >  #include "qemu/log.h"
> >  #include "cpu.h"
> > +#include "pmu.h"
> >  #include "internals.h"
> >  #include "exec/exec-all.h"
> >  #include "qapi/error.h"
> > @@ -779,6 +780,15 @@ static void riscv_cpu_realize(DeviceState *dev,
> Error **errp)
> >          set_misa(env, env->misa_mxl, ext);
> >      }
> >
> > +#ifndef CONFIG_USER_ONLY
> > +    if (cpu->cfg.pmu_num) {
> > +        if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) &&
> cpu->cfg.ext_sscofpmf) {
> > +            cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> > +                                          riscv_pmu_timer_cb, cpu);
> > +        }
> > +     }
> > +#endif
> > +
> >      riscv_cpu_register_gdb_regs_for_features(cs);
> >
> >      qemu_init_vcpu(cs);
> > @@ -883,6 +893,7 @@ static Property riscv_cpu_extensions[] = {
> >      DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> >      DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> >      DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> > +    DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
> >      DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> >      DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> >      DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> > diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> > index 5c7acc055ac9..2222db193c3d 100644
> > --- a/target/riscv/cpu.h
> > +++ b/target/riscv/cpu.h
> > @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
> >      /* Snapshort value of a counter in RV32 */
> >      target_ulong mhpmcounterh_prev;
> >      bool started;
> > +    /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt
> trigger */
> > +    target_ulong irq_overflow_left;
> >  } PMUCTRState;
> >
> >  struct CPUArchState {
> > @@ -297,6 +299,9 @@ struct CPUArchState {
> >      /* PMU event selector configured values. First three are unused*/
> >      target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
> >
> > +    /* PMU event selector configured values for RV32*/
> > +    target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> > +
> >      target_ulong sscratch;
> >      target_ulong mscratch;
> >
> > @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
> >      bool ext_zve32f;
> >      bool ext_zve64f;
> >      bool ext_zmmul;
> > +    bool ext_sscofpmf;
> >      bool rvv_ta_all_1s;
> >
> >      uint32_t mvendorid;
> > @@ -479,6 +485,12 @@ struct ArchCPU {
> >
> >      /* Configuration Settings */
> >      RISCVCPUConfig cfg;
> > +
> > +    QEMUTimer *pmu_timer;
> > +    /* A bitmask of Available programmable counters */
> > +    uint32_t pmu_avail_ctrs;
> > +    /* Mapping of events to counters */
> > +    GHashTable *pmu_event_ctr_map;
> >  };
> >
> >  static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> > @@ -738,6 +750,19 @@ enum {
> >      CSR_TABLE_SIZE = 0x1000
> >  };
> >
> > +/**
> > + * The event IDs are encoded based on the encoding specified in the
> > + * SBI specification v0.3.
> > + */
> > +
> > +enum riscv_pmu_event_idx {
> > +    RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> > +    RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> > +    RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> > +    RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> > +    RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> > +};
> > +
> >  /* CSR function table */
> >  extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
> >
> > diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> > index 6be5a9e9f046..b63c586be563 100644
> > --- a/target/riscv/cpu_bits.h
> > +++ b/target/riscv/cpu_bits.h
> > @@ -382,6 +382,37 @@
> >  #define CSR_MHPMEVENT29     0x33d
> >  #define CSR_MHPMEVENT30     0x33e
> >  #define CSR_MHPMEVENT31     0x33f
> > +
> > +#define CSR_MHPMEVENT3H     0x723
> > +#define CSR_MHPMEVENT4H     0x724
> > +#define CSR_MHPMEVENT5H     0x725
> > +#define CSR_MHPMEVENT6H     0x726
> > +#define CSR_MHPMEVENT7H     0x727
> > +#define CSR_MHPMEVENT8H     0x728
> > +#define CSR_MHPMEVENT9H     0x729
> > +#define CSR_MHPMEVENT10H    0x72a
> > +#define CSR_MHPMEVENT11H    0x72b
> > +#define CSR_MHPMEVENT12H    0x72c
> > +#define CSR_MHPMEVENT13H    0x72d
> > +#define CSR_MHPMEVENT14H    0x72e
> > +#define CSR_MHPMEVENT15H    0x72f
> > +#define CSR_MHPMEVENT16H    0x730
> > +#define CSR_MHPMEVENT17H    0x731
> > +#define CSR_MHPMEVENT18H    0x732
> > +#define CSR_MHPMEVENT19H    0x733
> > +#define CSR_MHPMEVENT20H    0x734
> > +#define CSR_MHPMEVENT21H    0x735
> > +#define CSR_MHPMEVENT22H    0x736
> > +#define CSR_MHPMEVENT23H    0x737
> > +#define CSR_MHPMEVENT24H    0x738
> > +#define CSR_MHPMEVENT25H    0x739
> > +#define CSR_MHPMEVENT26H    0x73a
> > +#define CSR_MHPMEVENT27H    0x73b
> > +#define CSR_MHPMEVENT28H    0x73c
> > +#define CSR_MHPMEVENT29H    0x73d
> > +#define CSR_MHPMEVENT30H    0x73e
> > +#define CSR_MHPMEVENT31H    0x73f
> > +
> >  #define CSR_MHPMCOUNTER3H   0xb83
> >  #define CSR_MHPMCOUNTER4H   0xb84
> >  #define CSR_MHPMCOUNTER5H   0xb85
> > @@ -443,6 +474,7 @@
> >  #define CSR_VSMTE           0x2c0
> >  #define CSR_VSPMMASK        0x2c1
> >  #define CSR_VSPMBASE        0x2c2
> > +#define CSR_SCOUNTOVF       0xda0
> >
> >  /* Crypto Extension */
> >  #define CSR_SEED            0x015
> > @@ -620,6 +652,7 @@ typedef enum RISCVException {
> >  #define IRQ_VS_EXT                         10
> >  #define IRQ_M_EXT                          11
> >  #define IRQ_S_GEXT                         12
> > +#define IRQ_PMU_OVF                        13
> >  #define IRQ_LOCAL_MAX                      16
> >  #define IRQ_LOCAL_GUEST_MAX                (TARGET_LONG_BITS - 1)
> >
> > @@ -637,11 +670,13 @@ typedef enum RISCVException {
> >  #define MIP_VSEIP                          (1 << IRQ_VS_EXT)
> >  #define MIP_MEIP                           (1 << IRQ_M_EXT)
> >  #define MIP_SGEIP                          (1 << IRQ_S_GEXT)
> > +#define MIP_LCOFIP                         (1 << IRQ_PMU_OVF)
> >
> >  /* sip masks */
> >  #define SIP_SSIP                           MIP_SSIP
> >  #define SIP_STIP                           MIP_STIP
> >  #define SIP_SEIP                           MIP_SEIP
> > +#define SIP_LCOFIP                         MIP_LCOFIP
> >
> >  /* MIE masks */
> >  #define MIE_SEIE                           (1 << IRQ_S_EXT)
> > @@ -795,4 +830,24 @@ typedef enum RISCVException {
> >  #define SEED_OPST_WAIT                   (0b01 << 30)
> >  #define SEED_OPST_ES16                   (0b10 << 30)
> >  #define SEED_OPST_DEAD                   (0b11 << 30)
> > +/* PMU related bits */
> > +#define MIE_LCOFIE                         (1 << IRQ_PMU_OVF)
> > +
> > +#define MHPMEVENT_BIT_OF                   BIT_ULL(63)
> > +#define MHPMEVENTH_BIT_OF                  BIT(31)
> > +#define MHPMEVENT_BIT_MINH                 BIT_ULL(62)
> > +#define MHPMEVENTH_BIT_MINH                BIT(30)
> > +#define MHPMEVENT_BIT_SINH                 BIT_ULL(61)
> > +#define MHPMEVENTH_BIT_SINH                BIT(29)
> > +#define MHPMEVENT_BIT_UINH                 BIT_ULL(60)
> > +#define MHPMEVENTH_BIT_UINH                BIT(28)
> > +#define MHPMEVENT_BIT_VSINH                BIT_ULL(59)
> > +#define MHPMEVENTH_BIT_VSINH               BIT(27)
> > +#define MHPMEVENT_BIT_VUINH                BIT_ULL(58)
> > +#define MHPMEVENTH_BIT_VUINH               BIT(26)
> > +
> > +#define MHPMEVENT_SSCOF_MASK               _ULL(0xFFFF000000000000)
> > +#define MHPMEVENT_IDX_MASK                 0xFFFFF
> > +#define MHPMEVENT_SSCOF_RESVD              16
> > +
> >  #endif
> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> > index 235f2a011e70..1233bfa0a726 100644
> > --- a/target/riscv/csr.c
> > +++ b/target/riscv/csr.c
> > @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >      CPUState *cs = env_cpu(env);
> >      RISCVCPU *cpu = RISCV_CPU(cs);
> >      int ctr_index;
> > -    int base_csrno = CSR_HPMCOUNTER3;
> > +    int base_csrno = CSR_CYCLE;
> >      bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
> >
> >      if (rv32 && csrno >= CSR_CYCLEH) {
> > @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >      }
> >      ctr_index = csrno - base_csrno;
> >
> > -    if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> > +    if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> > +        (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> > +        goto skip_ext_pmu_check;
> > +    }
> > +
> > +    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index))))
> {
> >          /* No counter is enabled in PMU or the counter is out of range
> */
> >          return RISCV_EXCP_ILLEGAL_INST;
> >      }
> >
> > +skip_ext_pmu_check:
> > +
> >      if (env->priv == PRV_S) {
> >          switch (csrno) {
> >          case CSR_CYCLE:
> > @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >              }
> >              break;
> >          case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> > -            ctr_index = csrno - CSR_CYCLE;
> >              if (!get_field(env->mcounteren, 1 << ctr_index)) {
> >                  return RISCV_EXCP_ILLEGAL_INST;
> >              }
> > @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >                  }
> >                  break;
> >              case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> > -                ctr_index = csrno - CSR_CYCLEH;
> >                  if (!get_field(env->mcounteren, 1 << ctr_index)) {
> >                      return RISCV_EXCP_ILLEGAL_INST;
> >                  }
> > @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >              }
> >              break;
> >          case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> > -            ctr_index = csrno - CSR_CYCLE;
> >              if (!get_field(env->hcounteren, 1 << ctr_index) &&
> >                   get_field(env->mcounteren, 1 << ctr_index)) {
> >                  return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int
> csrno)
> >                  }
> >                  break;
> >              case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> > -                ctr_index = csrno - CSR_CYCLEH;
> >                  if (!get_field(env->hcounteren, 1 << ctr_index) &&
> >                       get_field(env->mcounteren, 1 << ctr_index)) {
> >                      return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env,
> int csrno)
> >      return mctr(env, csrno);
> >  }
> >
> > +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> > +{
> > +    CPUState *cs = env_cpu(env);
> > +    RISCVCPU *cpu = RISCV_CPU(cs);
> > +
> > +    if (!cpu->cfg.ext_sscofpmf) {
> > +        return RISCV_EXCP_ILLEGAL_INST;
> > +    }
> > +
> > +    return RISCV_EXCP_NONE;
> > +}
> > +
> >  static RISCVException any(CPURISCVState *env, int csrno)
> >  {
> >      return RISCV_EXCP_NONE;
> > @@ -663,9 +678,39 @@ static int read_mhpmevent(CPURISCVState *env, int
> csrno, target_ulong *val)
> >  static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong
> val)
> >  {
> >      int evt_index = csrno - CSR_MCOUNTINHIBIT;
> > +    uint64_t mhpmevt_val = val;
> >
> >      env->mhpmevent_val[evt_index] = val;
> >
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        mhpmevt_val = mhpmevt_val |
> > +                      ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> > +    }
> > +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> > +
> > +    return RISCV_EXCP_NONE;
> > +}
> > +
> > +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong
> *val)
> > +{
> > +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> > +
> > +    *val = env->mhpmeventh_val[evt_index];
> > +
> > +    return RISCV_EXCP_NONE;
> > +}
> > +
> > +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong
> val)
> > +{
> > +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> > +    uint64_t mhpmevth_val = val;
> > +    uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> > +
> > +    mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> > +    env->mhpmeventh_val[evt_index] = val;
> > +
> > +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> > +
> >      return RISCV_EXCP_NONE;
> >  }
> >
> > @@ -673,12 +718,20 @@ static int write_mhpmcounter(CPURISCVState *env,
> int csrno, target_ulong val)
> >  {
> >      int ctr_idx = csrno - CSR_MCYCLE;
> >      PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +    uint64_t mhpmctr_val = val;
> >
> >      counter->mhpmcounter_val = val;
> >      if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> >          riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> >          counter->mhpmcounter_prev = get_ticks(false);
> > -    } else {
> > +        if (ctr_idx > 2) {
> > +            if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +                mhpmctr_val = mhpmctr_val |
> > +                              ((uint64_t)counter->mhpmcounterh_val <<
> 32);
> > +            }
> > +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> > +        }
> > +     } else {
> >          /* Other counters can keep incrementing from the given value */
> >          counter->mhpmcounter_prev = val;
> >      }
> > @@ -690,11 +743,17 @@ static int write_mhpmcounterh(CPURISCVState *env,
> int csrno, target_ulong val)
> >  {
> >      int ctr_idx = csrno - CSR_MCYCLEH;
> >      PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +    uint64_t mhpmctr_val = counter->mhpmcounter_val;
> > +    uint64_t mhpmctrh_val = val;
> >
> >      counter->mhpmcounterh_val = val;
> > +    mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
> >      if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> >          riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> >          counter->mhpmcounterh_prev = get_ticks(true);
> > +        if (ctr_idx > 2) {
> > +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> > +        }
> >      } else {
> >          counter->mhpmcounterh_prev = val;
> >      }
> > @@ -770,6 +829,32 @@ static int read_hpmcounterh(CPURISCVState *env, int
> csrno, target_ulong *val)
> >      return riscv_pmu_read_ctr(env, val, true, ctr_index);
> >  }
> >
> > +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong
> *val)
> > +{
> > +    int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
> > +    int i;
> > +    *val = 0;
> > +    target_ulong *mhpm_evt_val;
> > +    uint64_t of_bit_mask;
> > +
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        mhpm_evt_val = env->mhpmeventh_val;
> > +        of_bit_mask = MHPMEVENTH_BIT_OF;
> > +    } else {
> > +        mhpm_evt_val = env->mhpmevent_val;
> > +        of_bit_mask = MHPMEVENT_BIT_OF;
> > +    }
> > +
> > +    for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
> > +        if ((get_field(env->mcounteren, BIT(i))) &&
> > +            (mhpm_evt_val[i] & of_bit_mask)) {
> > +                    *val |= BIT(i);
> > +            }
> > +    }
> > +
> > +    return RISCV_EXCP_NONE;
> > +}
> > +
> >  static RISCVException read_time(CPURISCVState *env, int csrno,
> >                                  target_ulong *val)
> >  {
> > @@ -799,7 +884,8 @@ static RISCVException read_timeh(CPURISCVState *env,
> int csrno,
> >  /* Machine constants */
> >
> >  #define M_MODE_INTERRUPTS  ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> > -#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> > +#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP |
> \
> > +                                      MIP_LCOFIP))
> >  #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP |
> MIP_VSEIP))
> >  #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
> >
> > @@ -840,7 +926,8 @@ static const target_ulong vs_delegable_excps =
> DELEGABLE_EXCPS &
> >  static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE |
> SSTATUS_SPIE |
> >      SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
> >      SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> > -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP |
> MIP_UEIP;
> > +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP |
> MIP_UEIP |
> > +                                              SIP_LCOFIP;
> >  static const target_ulong hip_writable_mask = MIP_VSSIP;
> >  static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP |
> MIP_VSEIP;
> >  static const target_ulong vsip_writable_mask = MIP_VSSIP;
> > @@ -3861,6 +3948,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> >      [CSR_MHPMEVENT31]    = { "mhpmevent31",    any,    read_mhpmevent,
> >                                                         write_mhpmevent
> },
> >
> > +    [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT10H]   = { "mhpmevent10h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT11H]   = { "mhpmevent11h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT12H]   = { "mhpmevent12h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT13H]   = { "mhpmevent13h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT14H]   = { "mhpmevent14h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT15H]   = { "mhpmevent15h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT16H]   = { "mhpmevent16h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT17H]   = { "mhpmevent17h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT18H]   = { "mhpmevent18h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT19H]   = { "mhpmevent19h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT20H]   = { "mhpmevent20h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT21H]   = { "mhpmevent21h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT22H]   = { "mhpmevent22h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT23H]   = { "mhpmevent23h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT24H]   = { "mhpmevent24h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT25H]   = { "mhpmevent25h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT26H]   = { "mhpmevent26h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT27H]   = { "mhpmevent27h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT28H]   = { "mhpmevent28h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT29H]   = { "mhpmevent29h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT30H]   = { "mhpmevent30h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +    [CSR_MHPMEVENT31H]   = { "mhpmevent31h",    sscofpmf,  read_mhpmeventh,
> > +                                                       write_mhpmeventh},
> > +
> >      [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,  read_hpmcounterh
> },
> >      [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,  read_hpmcounterh
> },
> >      [CSR_HPMCOUNTER5H]   = { "hpmcounter5h",   ctr32,  read_hpmcounterh
> },
> > @@ -3949,5 +4095,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> >
>  write_mhpmcounterh },
> >      [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32,
> read_hpmcounterh,
> >
>  write_mhpmcounterh },
> > +    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
> > +
> >  #endif /* !CONFIG_USER_ONLY */
> >  };
> > diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> > index dc182ca81119..33ef9b8e9908 100644
> > --- a/target/riscv/machine.c
> > +++ b/target/riscv/machine.c
> > @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
> >          VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU,
> RV_MAX_MHPMCOUNTERS, 0,
> >                               vmstate_pmu_ctr_state, PMUCTRState),
> >          VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU,
> RV_MAX_MHPMEVENTS),
> > +        VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU,
> RV_MAX_MHPMEVENTS),
> >          VMSTATE_UINTTL(env.sscratch, RISCVCPU),
> >          VMSTATE_UINTTL(env.mscratch, RISCVCPU),
> >          VMSTATE_UINT64(env.mfromhost, RISCVCPU),
> > diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> > index 000fe8da45ef..34096941c0ce 100644
> > --- a/target/riscv/pmu.c
> > +++ b/target/riscv/pmu.c
> > @@ -19,14 +19,367 @@
> >  #include "qemu/osdep.h"
> >  #include "cpu.h"
> >  #include "pmu.h"
> > +#include "sysemu/cpu-timers.h"
> > +
> > +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> > +#define MAKE_32BIT_MASK(shift, length) \
> > +        (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> > +
> > +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> > +{
> > +    if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> > +        !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
> > +        return false;
> > +    } else {
> > +        return true;
> > +    }
> > +}
> > +
> > +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
> > +{
> > +    CPURISCVState *env = &cpu->env;
> > +
> > +    if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
> > +        !get_field(env->mcountinhibit, BIT(ctr_idx))) {
> > +        return true;
> > +    } else {
> > +        return false;
> > +    }
> > +}
> > +
> > +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
> > +{
> > +    CPURISCVState *env = &cpu->env;
> > +    target_ulong max_val = UINT32_MAX;
> > +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +    bool virt_on = riscv_cpu_virt_enabled(env);
> > +
> > +    /* Privilege mode filtering */
> > +    if ((env->priv == PRV_M &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
> > +        (env->priv == PRV_S && virt_on &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
> > +        (env->priv == PRV_U && virt_on &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
> > +        (env->priv == PRV_S && !virt_on &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
> > +        (env->priv == PRV_U && !virt_on &&
> > +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
> > +        return 0;
> > +    }
> > +
> > +    /* Handle the overflow scenario */
> > +    if (counter->mhpmcounter_val == max_val) {
> > +        if (counter->mhpmcounterh_val == max_val) {
> > +            counter->mhpmcounter_val = 0;
> > +            counter->mhpmcounterh_val = 0;
> > +            /* Generate interrupt only if OF bit is clear */
> > +            if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
> > +                env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
> > +                riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> > +            }
> > +        } else {
> > +            counter->mhpmcounterh_val++;
> > +        }
> > +    } else {
> > +        counter->mhpmcounter_val++;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> > +{
> > +    CPURISCVState *env = &cpu->env;
> > +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +    uint64_t max_val = UINT64_MAX;
> > +    bool virt_on = riscv_cpu_virt_enabled(env);
> > +
> > +    /* Privilege mode filtering */
> > +    if ((env->priv == PRV_M &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
> > +        (env->priv == PRV_S && virt_on &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
> > +        (env->priv == PRV_U && virt_on &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
> > +        (env->priv == PRV_S && !virt_on &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
> > +        (env->priv == PRV_U && !virt_on &&
> > +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
> > +        return 0;
> > +    }
> > +
> > +    /* Handle the overflow scenario */
> > +    if (counter->mhpmcounter_val == max_val) {
> > +        counter->mhpmcounter_val = 0;
> > +        /* Generate interrupt only if OF bit is clear */
> > +        if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
> > +            env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
> > +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> > +        }
> > +    } else {
> > +        counter->mhpmcounter_val++;
> > +    }
> > +    return 0;
> > +}
> > +
> > +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx
> event_idx)
> > +{
> > +    uint32_t ctr_idx;
> > +    int ret;
> > +    CPURISCVState *env = &cpu->env;
> > +    gpointer value;
> > +
> > +    value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                                GUINT_TO_POINTER(event_idx));
> > +    if (!value) {
> > +        return -1;
> > +    }
> > +
> > +    ctr_idx = GPOINTER_TO_UINT(value);
> > +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
> > +        get_field(env->mcountinhibit, BIT(ctr_idx))) {
> > +        return -1;
> > +    }
> > +
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
> > +    } else {
> > +        ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
> > +    }
> > +
> > +    return ret;
> > +}
> >
> >  bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> >                                          uint32_t target_ctr)
> >  {
> > -    return (target_ctr == 0) ? true : false;
> > +    RISCVCPU *cpu;
> > +    uint32_t event_idx;
> > +    uint32_t ctr_idx;
> > +
> > +    /* Fixed instret counter */
> > +    if (target_ctr == 2) {
> > +        return true;
> > +    }
> > +
> > +    cpu = RISCV_CPU(env_cpu(env));
> > +    event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
> > +    ctr_idx =
> GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                               GUINT_TO_POINTER(event_idx)));
> > +    if (!ctr_idx) {
> > +        return false;
> > +    }
> > +
> > +    return target_ctr == ctr_idx ? true : false;
> >  }
> >
> >  bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t
> target_ctr)
> >  {
> > -    return (target_ctr == 2) ? true : false;
> > +    RISCVCPU *cpu;
> > +    uint32_t event_idx;
> > +    uint32_t ctr_idx;
> > +
> > +    /* Fixed mcycle counter */
> > +    if (target_ctr == 0) {
> > +        return true;
> > +    }
> > +
> > +    cpu = RISCV_CPU(env_cpu(env));
> > +    event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> > +    ctr_idx =
> GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                               GUINT_TO_POINTER(event_idx)));
> > +
> > +    /* Counter zero is not used for event_ctr_map */
> > +    if (!ctr_idx) {
> > +        return false;
> > +    }
> > +
> > +    return (target_ctr == ctr_idx) ? true : false;
> > +}
> > +
> > +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> > +                                     gpointer udata)
> > +{
> > +    return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true
> : false;
> > +}
> > +
> > +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> > +{
> > +    int64_t ret = 0;
> > +
> > +    if (icount_enabled()) {
> > +        ret = icount_to_ns(value);
> > +    } else {
> > +        ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> > +    }
> > +
> > +    return ret;
> > +}
> > +
> > +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> > +                               uint32_t ctr_idx)
> > +{
> > +    uint32_t event_idx;
> > +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> > +
> > +    if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> > +        return -1;
> > +    }
> > +
> > +    /**
> > +     * Expected mhpmevent value is zero for reset case. Remove the
> current
> > +     * mapping.
> > +     */
> > +    if (!value) {
> > +        g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> > +                                    pmu_remove_event_map,
> > +                                    GUINT_TO_POINTER(ctr_idx));
> > +        return 0;
> > +    }
> > +
> > +    event_idx = value & MHPMEVENT_IDX_MASK;
> > +    if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                            GUINT_TO_POINTER(event_idx))) {
> > +        return 0;
> > +    }
> > +
> > +    switch (event_idx) {
> > +    case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> > +    case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> > +    case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> > +    case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> > +    case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> > +        break;
> > +    default:
> > +        /* We don't support any raw events right now */
> > +        return -1;
> > +    }
> > +    g_hash_table_insert(cpu->pmu_event_ctr_map,
> GUINT_TO_POINTER(event_idx),
> > +                        GUINT_TO_POINTER(ctr_idx));
> > +
> > +    return 0;
> > +}
> > +
> > +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> > +                                  enum riscv_pmu_event_idx evt_idx)
> > +{
> > +    uint32_t ctr_idx;
> > +    CPURISCVState *env = &cpu->env;
> > +    PMUCTRState *counter;
> > +    target_ulong *mhpmevent_val;
> > +    uint64_t of_bit_mask;
> > +    int64_t irq_trigger_at;
> > +
> > +    if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> > +        evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> > +        return;
> > +    }
> > +
> > +    ctr_idx =
> GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                               GUINT_TO_POINTER(evt_idx)));
> > +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> > +        return;
> > +    }
> > +
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> > +        of_bit_mask = MHPMEVENTH_BIT_OF;
> > +     } else {
> > +        mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> > +        of_bit_mask = MHPMEVENT_BIT_OF;
> > +    }
> > +
> > +    counter = &env->pmu_ctrs[ctr_idx];
> > +    if (counter->irq_overflow_left > 0) {
> > +        irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> > +                        counter->irq_overflow_left;
> > +        timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> > +        counter->irq_overflow_left = 0;
> > +        return;
> > +    }
> > +
> > +    if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> > +        /* Generate interrupt only if OF bit is clear */
> > +        if (!(*mhpmevent_val & of_bit_mask)) {
> > +            *mhpmevent_val |= of_bit_mask;
> > +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> > +        }
> > +    }
> > +}
> > +
> > +/* Timer callback for instret and cycle counter overflow */
> > +void riscv_pmu_timer_cb(void *priv)
> > +{
> > +    RISCVCPU *cpu = priv;
> > +
> > +    /* Timer event was triggered only for these events */
> > +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> > +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> > +}
> > +
> > +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t
> ctr_idx)
> > +{
> > +    uint64_t overflow_delta, overflow_at;
> > +    int64_t overflow_ns, overflow_left = 0;
> > +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> > +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +
> > +    if (!riscv_pmu_counter_valid(cpu, ctr_idx) ||
> !cpu->cfg.ext_sscofpmf) {
> > +        return -1;
> > +    }
> > +
> > +    if (value) {
> > +        overflow_delta = UINT64_MAX - value + 1;
> > +    } else {
> > +        overflow_delta = UINT64_MAX;
> > +    }
> > +
> > +    /**
> > +     * QEMU supports only int64_t timers while RISC-V counters are
> uint64_t.
> > +     * Compute the leftover and save it so that it can be reprogrammed
> again
> > +     * when timer expires.
> > +     */
> > +    if (overflow_delta > INT64_MAX) {
> > +        overflow_left = overflow_delta - INT64_MAX;
> > +    }
> > +
> > +    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> > +        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> > +        overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> > +        overflow_left = pmu_icount_ticks_to_ns(overflow_left) ;
> > +    } else {
> > +        return -1;
> > +    }
> > +    overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> overflow_ns;
> > +
> > +    if (overflow_at > INT64_MAX) {
> > +        overflow_left += overflow_at - INT64_MAX;
> > +        counter->irq_overflow_left = overflow_left;
> > +        overflow_at = INT64_MAX;
> > +    }
> > +    timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> > +
> > +    return 0;
> > +}
> > +
> > +
> > +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> > +{
> > +    if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> > +        return -1;
> > +    }
> > +
> > +    cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash,
> g_direct_equal);
> > +    if (!cpu->pmu_event_ctr_map) {
> > +        /* PMU support can not be enabled */
> > +        qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> > +        cpu->cfg.pmu_num = 0;
> > +        return -1;
> > +    }
> > +
> > +    /* Create a bitmask of available programmable counters */
> > +    cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> > +
> > +    return 0;
> >  }
> > diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> > index 58a5bc3a4089..036653627f78 100644
> > --- a/target/riscv/pmu.h
> > +++ b/target/riscv/pmu.h
> > @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> >                                          uint32_t target_ctr);
> >  bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
> >                                    uint32_t target_ctr);
> > +void riscv_pmu_timer_cb(void *priv);
> > +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> > +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> > +                               uint32_t ctr_idx);
> > +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> > +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> > +                          uint32_t ctr_idx);
> > --
> > 2.25.1
> >
> >
>
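As an aside on the int64_t vs. uint64_t comment in riscv_pmu_setup_timer() above, here is a minimal standalone sketch of how the overflow delta gets split into a timer-friendly part and a leftover. It is illustrative only: the helper name and the tick-based units are invented for the example, and the real code additionally converts ticks to nanoseconds and re-checks the absolute deadline.

#include <stdint.h>

/*
 * Illustration only: split the distance to the 64-bit counter wrap into
 * a part that fits a signed 64-bit timer deadline and a leftover that
 * must be re-armed when the first timer fires.
 */
static void pmu_overflow_split(uint64_t value, int64_t *first_ticks,
                               int64_t *leftover_ticks)
{
    /* Ticks until the counter wraps from 'value' back to zero */
    uint64_t overflow_delta = value ? UINT64_MAX - value + 1 : UINT64_MAX;

    if (overflow_delta > INT64_MAX) {
        *first_ticks = INT64_MAX;
        *leftover_ticks = (int64_t)(overflow_delta - INT64_MAX);
    } else {
        *first_ticks = (int64_t)overflow_delta;
        *leftover_ticks = 0;
    }
}

/*
 * Example: value = 0x8000000000000000 gives overflow_delta =
 * 0x8000000000000000, so first_ticks = INT64_MAX and leftover_ticks = 1.
 * In the patch, the leftover (after conversion to ns) is what gets stored
 * in counter->irq_overflow_left and re-armed from the timer callback.
 */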



* Re: [PATCH v11 2/6] target/riscv: Simplify counter predicate function
  2022-07-28  0:56       ` Weiwei Li
@ 2022-08-02 21:02         ` Atish Kumar Patra
  0 siblings, 0 replies; 20+ messages in thread
From: Atish Kumar Patra @ 2022-08-02 21:02 UTC (permalink / raw)
  To: Weiwei Li
  Cc: qemu-devel@nongnu.org Developers, Bin Meng, Alistair Francis,
	Bin Meng, Palmer Dabbelt, open list:RISC-V


On Wed, Jul 27, 2022 at 5:56 PM Weiwei Li <liweiwei@iscas.ac.cn> wrote:

>
> On 2022/7/28 5:40 AM, Atish Kumar Patra wrote:
>
>
>
> On Wed, Jul 27, 2022 at 1:35 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>
>>
>> On 2022/7/27 2:49 PM, Atish Patra wrote:
>> > All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
>> > as a unified counter. Thus, the predicate function doesn't need to handle
>> > each case separately.
>> >
>> > Simplify the predicate function so that we just handle things differently
>> > between RV32/RV64 and S/HS mode.
>> >
>> > Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
>> > Acked-by: Alistair Francis <alistair.francis@wdc.com>
>> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
>> > ---
>> >   target/riscv/csr.c | 112 +++++----------------------------------------
>> >   1 file changed, 11 insertions(+), 101 deletions(-)
>> >
>> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
>> > index 1233bfa0a726..57dbbf9b09a0 100644
>> > --- a/target/riscv/csr.c
>> > +++ b/target/riscv/csr.c
>> > @@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> >       CPUState *cs = env_cpu(env);
>> >       RISCVCPU *cpu = RISCV_CPU(cs);
>> >       int ctr_index;
>> > +    target_ulong ctr_mask;
>> >       int base_csrno = CSR_CYCLE;
>> >       bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>> >
>> > @@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> >           base_csrno += 0x80;
>> >       }
>> >       ctr_index = csrno - base_csrno;
>> > +    ctr_mask = BIT(ctr_index);
>> >
>> >       if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
>> >           (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
>> >           goto skip_ext_pmu_check;
>> >       }
>> >
>> > -    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
>> > +    if (!(cpu->pmu_avail_ctrs & ctr_mask)) {
>> >           /* No counter is enabled in PMU or the counter is out of range */
>> >           return RISCV_EXCP_ILLEGAL_INST;
>> >       }
>> >
>> >   skip_ext_pmu_check:
>> >
>> > -    if (env->priv == PRV_S) {
>> > -        switch (csrno) {
>> > -        case CSR_CYCLE:
>> > -            if (!get_field(env->mcounteren, COUNTEREN_CY)) {
>> > -                return RISCV_EXCP_ILLEGAL_INST;
>> > -            }
>> > -            break;
>> > -        case CSR_TIME:
>> > -            if (!get_field(env->mcounteren, COUNTEREN_TM)) {
>> > -                return RISCV_EXCP_ILLEGAL_INST;
>> > -            }
>> > -            break;
>> > -        case CSR_INSTRET:
>> > -            if (!get_field(env->mcounteren, COUNTEREN_IR)) {
>> > -                return RISCV_EXCP_ILLEGAL_INST;
>> > -            }
>> > -            break;
>> > -        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
>> > -            if (!get_field(env->mcounteren, 1 << ctr_index)) {
>> > -                return RISCV_EXCP_ILLEGAL_INST;
>> > -            }
>> > -            break;
>> > -        }
>> > -        if (rv32) {
>> > -            switch (csrno) {
>> > -            case CSR_CYCLEH:
>> > -                if (!get_field(env->mcounteren, COUNTEREN_CY)) {
>> > -                    return RISCV_EXCP_ILLEGAL_INST;
>> > -                }
>> > -                break;
>> > -            case CSR_TIMEH:
>> > -                if (!get_field(env->mcounteren, COUNTEREN_TM)) {
>> > -                    return RISCV_EXCP_ILLEGAL_INST;
>> > -                }
>> > -                break;
>> > -            case CSR_INSTRETH:
>> > -                if (!get_field(env->mcounteren, COUNTEREN_IR)) {
>> > -                    return RISCV_EXCP_ILLEGAL_INST;
>> > -                }
>> > -                break;
>> > -            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
>> > -                if (!get_field(env->mcounteren, 1 << ctr_index)) {
>> > -                    return RISCV_EXCP_ILLEGAL_INST;
>> > -                }
>> > -                break;
>> > -            }
>> > -        }
>> > +    if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
>> > +       ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
>> > +        return RISCV_EXCP_ILLEGAL_INST;
>> >       }
>> >
>> >       if (riscv_cpu_virt_enabled(env)) {
>> > -        switch (csrno) {
>> > -        case CSR_CYCLE:
>> > -            if (!get_field(env->hcounteren, COUNTEREN_CY) &&
>> > -                get_field(env->mcounteren, COUNTEREN_CY)) {
>> > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > -            }
>> > -            break;
>> > -        case CSR_TIME:
>> > -            if (!get_field(env->hcounteren, COUNTEREN_TM) &&
>> > -                get_field(env->mcounteren, COUNTEREN_TM)) {
>> > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > -            }
>> > -            break;
>> > -        case CSR_INSTRET:
>> > -            if (!get_field(env->hcounteren, COUNTEREN_IR) &&
>> > -                get_field(env->mcounteren, COUNTEREN_IR)) {
>> > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > -            }
>> > -            break;
>> > -        case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
>> > -            if (!get_field(env->hcounteren, 1 << ctr_index) &&
>> > -                 get_field(env->mcounteren, 1 << ctr_index)) {
>> > -                return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > -            }
>> > -            break;
>> > -        }
>> > -        if (rv32) {
>> > -            switch (csrno) {
>> > -            case CSR_CYCLEH:
>> > -                if (!get_field(env->hcounteren, COUNTEREN_CY) &&
>> > -                    get_field(env->mcounteren, COUNTEREN_CY)) {
>> > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > -                }
>> > -                break;
>> > -            case CSR_TIMEH:
>> > -                if (!get_field(env->hcounteren, COUNTEREN_TM) &&
>> > -                    get_field(env->mcounteren, COUNTEREN_TM)) {
>> > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > -                }
>> > -                break;
>> > -            case CSR_INSTRETH:
>> > -                if (!get_field(env->hcounteren, COUNTEREN_IR) &&
>> > -                    get_field(env->mcounteren, COUNTEREN_IR)) {
>> > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > -                }
>> > -                break;
>> > -            case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
>> > -                if (!get_field(env->hcounteren, 1 << ctr_index) &&
>> > -                     get_field(env->mcounteren, 1 << ctr_index)) {
>> > -                    return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > -                }
>> > -                break;
>> > -            }
>> > +        if (!get_field(env->mcounteren, ctr_mask)) {
>> > +            /* The bit must be set in mcounteren for HS mode access */
>> > +            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> > +        } else if (!get_field(env->hcounteren, ctr_mask)) {
>> > +            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> >           }
>>
>> The logic is changed here. In the original logic,
>> RISCV_EXCP_VIRT_INSTRUCTION_FAULT is triggered only when
>>
>> !get_field(env->hcounteren, 1 << ctr_index) && get_field(env->mcounteren,
>> 1 << ctr_index)
>>
>> In the new logic, RISCV_EXCP_VIRT_INSTRUCTION_FAULT is triggered when
>> !get_field(env->mcounteren, ctr_mask)
>>
>> or !get_field(env->hcounteren, 1 << ctr_index) &&
>> get_field(env->mcounteren, 1 << ctr_index)
>>
>>
> Yes. It's just an optimization where we can break early by checking
> mcounteren alone. Do you see any issue with it?
>
> Section 8.6.1 of the RISC-V privileged spec lists the cases (including
> the xcounteren-related cases) that raise a virtual instruction exception.
> However, all the xcounteren-related cases share a common condition:
>
>         "the same bit in mcounteren is 1".
>

Ahh yes. Got it. I will revert it to the original logic in the next version.
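
For reference, a rough sketch of what restoring the original condition could look like while keeping the new ctr_mask variable (illustrative only; the actual respin may differ):

    if (riscv_cpu_virt_enabled(env)) {
        /*
         * Per section 8.6.1 of the privileged spec, the hcounteren-related
         * virtual instruction exception is only raised when the same bit
         * in mcounteren is 1.
         */
        if (!get_field(env->hcounteren, ctr_mask) &&
             get_field(env->mcounteren, ctr_mask)) {
            return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
        }
    }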


> So this optimization does not seem correct.
>
> Regards,
>
> Weiwei Li
>
>
>
>> Regards,
>>
>> Weiwei Li
>>
>> >       }
>> >   #endif
>>
>>




Thread overview: 20+ messages
2022-07-27  6:49 [PATCH v11 0/6] Improve PMU support Atish Patra
2022-07-27  6:49 ` [PATCH v11 1/6] target/riscv: Add sscofpmf extension support Atish Patra
2022-07-27  8:11   ` Weiwei Li
2022-07-27 21:32     ` Atish Kumar Patra
2022-07-31  4:10   ` Rahul Pathak
2022-07-31  4:34     ` Atish Kumar Patra
2022-07-27  6:49 ` [PATCH v11 2/6] target/riscv: Simplify counter predicate function Atish Patra
2022-07-27  8:34   ` Weiwei Li
2022-07-27 21:40     ` Atish Kumar Patra
2022-07-28  0:56       ` Weiwei Li
2022-08-02 21:02         ` Atish Kumar Patra
2022-07-27  6:49 ` [PATCH v11 3/6] target/riscv: Add few cache related PMU events Atish Patra
2022-07-27  6:49 ` [PATCH v11 4/6] hw/riscv: virt: Add PMU DT node to the device tree Atish Patra
2022-07-27  7:27   ` Sunil V L
2022-07-27  7:50     ` Atish Kumar Patra
2022-07-27  6:49 ` [PATCH v11 5/6] target/riscv: Update the privilege field for sscofpmf CSRs Atish Patra
2022-07-27  8:26   ` Weiwei Li
2022-07-27 22:50     ` Atish Kumar Patra
2022-07-27  6:49 ` [PATCH v11 6/6] target/riscv: Remove additional priv version check for mcountinhibit Atish Patra
2022-07-27  8:22   ` Weiwei Li
