* [PATCH v10 00/12] Improve PMU support
@ 2022-06-20 23:15 Atish Patra
2022-06-20 23:15 ` [PATCH v10 01/12] target/riscv: Fix PMU CSR predicate function Atish Patra
` (11 more replies)
0 siblings, 12 replies; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv, frank.chang
The latest version of the SBI specification includes a Performance Monitoring
Unit (PMU) extension[1] which allows the supervisor to start/stop/configure
various PMU events. The Sscofpmf ('Ss' for Privileged arch and Supervisor-level
extensions, and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
extension[2] allows perf-like tools to handle overflow interrupts and
filtering.
This series implements the full PMU infrastructure to support a
PMU in the virt machine, which will allow us to add more PMU events in the
future. Currently, this series enables the following PMU events:
1. cycle count
2. instruction count
3. DTLB load/store miss
4. ITLB prefetch miss
The first two are computed using host ticks, while the last three are counted
during cpu_tlb_fill. Both sampling and counting work from guest userspace.
This series has been tested on both RV64 and RV32. Both Linux[3] and Opensbi[4]
patches are required to get the perf working.
Here is an output of perf stat/report while running hackbench with latest
OpenSBI & Linux kernel.
Perf stat:
==========
[root@fedora-riscv ~]# perf stat -e cycles -e instructions -e dTLB-load-misses -e dTLB-store-misses -e iTLB-load-misses \
> perf bench sched messaging -g 1 -l 10
# Running 'sched/messaging' benchmark:
# 20 sender and receiver processes per group
# 1 groups == 40 processes run
Total time: 0.265 [sec]
Performance counter stats for 'perf bench sched messaging -g 1 -l 10':
4,167,825,362 cycles
4,166,609,256 instructions # 1.00 insn per cycle
3,092,026 dTLB-load-misses
258,280 dTLB-store-misses
2,068,966 iTLB-load-misses
0.585791767 seconds time elapsed
0.373802000 seconds user
1.042359000 seconds sys
Perf record:
============
[root@fedora-riscv ~]# perf record -e cycles -e instructions \
> -e dTLB-load-misses -e dTLB-store-misses -e iTLB-load-misses -c 10000 \
> perf bench sched messaging -g 1 -l 10
# Running 'sched/messaging' benchmark:
# 20 sender and receiver processes per group
# 1 groups == 40 processes run
Total time: 1.397 [sec]
[ perf record: Woken up 10 times to write data ]
Check IO/CPU overload!
[ perf record: Captured and wrote 8.211 MB perf.data (214486 samples) ]
[root@fedora-riscv riscv]# perf report
Available samples
107K cycles
107K instructions
250 dTLB-load-misses
13 dTLB-store-misses
172 iTLB-load-misses
..
Changes from v8->v9:
1. Added the write_done flags to the vmstate.
2. Fixed the hpmcounter read access from M-mode.
Changes from v7->v8:
1. Removed the ordering constraints for mhpmcounter & mhpmevent.
Changes from v6->v7:
1. Fixed all the compilation errors for the usermode.
Changes from v5->v6:
1. Fixed compilation issue with PATCH 1.
2. Addressed other comments.
Changes from v4->v5:
1. Rebased on top of -next with the following patches.
- isa extension
- priv 1.12 spec
2. Addressed all the comments on v4
3. Removed additional isa-ext DT node in favor of riscv,isa string update
Changes from v3->v4:
1. Removed the dummy events from pmu DT node.
2. Fixed pmu_avail_counters mask generation.
3. Added a patch to simplify the predicate function for counters.
Changes from v2->v3:
1. Addressed all the comments on PATCH1-4.
2. Split patch1 into two separate patches.
3. Added explicit comments to explain the event types in DT node.
4. Rebased on latest QEMU.
Changes from v1->v2:
1. Dropped the ACks from v1 as significant changes happened after v1.
2. sscofpmf support.
3. A generic counter management framework.
[1] https://github.com/riscv-non-isa/riscv-sbi-doc/blob/master/riscv-sbi.adoc
[2] https://drive.google.com/file/d/171j4jFjIkKdj5LWcExphq4xG_2sihbfd/edit
[3] https://github.com/atishp04/qemu/tree/riscv_pmu_v10
Atish Patra (12):
target/riscv: Fix PMU CSR predicate function
target/riscv: Implement PMU CSR predicate function for S-mode
target/riscv: pmu: Rename the counters extension to pmu
target/riscv: pmu: Make number of counters configurable
target/riscv: Implement mcountinhibit CSR
target/riscv: Add support for hpmcounters/hpmevents
target/riscv: Support mcycle/minstret write operation
target/riscv: Add sscofpmf extension support
target/riscv: Simplify counter predicate function
target/riscv: Add few cache related PMU events
hw/riscv: virt: Add PMU DT node to the device tree
target/riscv: Update the privilege field for sscofpmf CSRs
hw/riscv/virt.c | 28 ++
target/riscv/cpu.c | 15 +-
target/riscv/cpu.h | 49 ++-
target/riscv/cpu_bits.h | 59 +++
target/riscv/cpu_helper.c | 25 ++
target/riscv/csr.c | 892 ++++++++++++++++++++++++++++----------
target/riscv/machine.c | 26 ++
target/riscv/meson.build | 3 +-
target/riscv/pmu.c | 442 +++++++++++++++++++
target/riscv/pmu.h | 36 ++
10 files changed, 1339 insertions(+), 236 deletions(-)
create mode 100644 target/riscv/pmu.c
create mode 100644 target/riscv/pmu.h
--
2.25.1
^ permalink raw reply [flat|nested] 34+ messages in thread
* [PATCH v10 01/12] target/riscv: Fix PMU CSR predicate function
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-06-20 23:15 ` [PATCH v10 02/12] target/riscv: Implement PMU CSR predicate function for S-mode Atish Patra
` (10 subsequent siblings)
11 siblings, 0 replies; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Bin Meng, Atish Patra, Bin Meng,
Palmer Dabbelt, qemu-riscv, frank.chang
From: Atish Patra <atish.patra@wdc.com>
The predicate function calculates the counter index incorrectly for
hpmcounterx. Fix the counter index to reflect the correct CSR number.
Fixes: e39a8320b088 ("target/riscv: Support the Virtual Instruction fault")
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/csr.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 6dbe9b541fd8..46bd417cc182 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -72,6 +72,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
#if !defined(CONFIG_USER_ONLY)
CPUState *cs = env_cpu(env);
RISCVCPU *cpu = RISCV_CPU(cs);
+ int ctr_index;
if (!cpu->cfg.ext_counters) {
/* The Counters extensions is not enabled */
@@ -99,8 +100,9 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
break;
case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
- if (!get_field(env->hcounteren, 1 << (csrno - CSR_HPMCOUNTER3)) &&
- get_field(env->mcounteren, 1 << (csrno - CSR_HPMCOUNTER3))) {
+ ctr_index = csrno - CSR_CYCLE;
+ if (!get_field(env->hcounteren, 1 << ctr_index) &&
+ get_field(env->mcounteren, 1 << ctr_index)) {
return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
}
break;
@@ -126,8 +128,9 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
break;
case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
- if (!get_field(env->hcounteren, 1 << (csrno - CSR_HPMCOUNTER3H)) &&
- get_field(env->mcounteren, 1 << (csrno - CSR_HPMCOUNTER3H))) {
+ ctr_index = csrno - CSR_CYCLEH;
+ if (!get_field(env->hcounteren, 1 << ctr_index) &&
+ get_field(env->mcounteren, 1 << ctr_index)) {
return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
}
break;
--
2.25.1
* [PATCH v10 02/12] target/riscv: Implement PMU CSR predicate function for S-mode
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
2022-06-20 23:15 ` [PATCH v10 01/12] target/riscv: Fix PMU CSR predicate function Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-06-20 23:15 ` [PATCH v10 03/12] target/riscv: pmu: Rename the counters extension to pmu Atish Patra
` (9 subsequent siblings)
11 siblings, 0 replies; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Bin Meng, Atish Patra, Bin Meng,
Palmer Dabbelt, qemu-riscv, frank.chang
From: Atish Patra <atish.patra@wdc.com>
Currently, the predicate function for PMU related CSRs only works if
virtualization is enabled. It also does not check mcounteren bits before
cycle/minstret/hpmcounterx access.
Support supervisor mode access in the predicate function as well.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/csr.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 51 insertions(+)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 46bd417cc182..58d07c511f98 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -79,6 +79,57 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
return RISCV_EXCP_ILLEGAL_INST;
}
+ if (env->priv == PRV_S) {
+ switch (csrno) {
+ case CSR_CYCLE:
+ if (!get_field(env->mcounteren, COUNTEREN_CY)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ break;
+ case CSR_TIME:
+ if (!get_field(env->mcounteren, COUNTEREN_TM)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ break;
+ case CSR_INSTRET:
+ if (!get_field(env->mcounteren, COUNTEREN_IR)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ break;
+ case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
+ ctr_index = csrno - CSR_CYCLE;
+ if (!get_field(env->mcounteren, 1 << ctr_index)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ break;
+ }
+ if (riscv_cpu_mxl(env) == MXL_RV32) {
+ switch (csrno) {
+ case CSR_CYCLEH:
+ if (!get_field(env->mcounteren, COUNTEREN_CY)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ break;
+ case CSR_TIMEH:
+ if (!get_field(env->mcounteren, COUNTEREN_TM)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ break;
+ case CSR_INSTRETH:
+ if (!get_field(env->mcounteren, COUNTEREN_IR)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ break;
+ case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
+ ctr_index = csrno - CSR_CYCLEH;
+ if (!get_field(env->mcounteren, 1 << ctr_index)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ break;
+ }
+ }
+ }
+
if (riscv_cpu_virt_enabled(env)) {
switch (csrno) {
case CSR_CYCLE:
--
2.25.1
* [PATCH v10 03/12] target/riscv: pmu: Rename the counters extension to pmu
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
2022-06-20 23:15 ` [PATCH v10 01/12] target/riscv: Fix PMU CSR predicate function Atish Patra
2022-06-20 23:15 ` [PATCH v10 02/12] target/riscv: Implement PMU CSR predicate function for S-mode Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-06-20 23:15 ` [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable Atish Patra
` (8 subsequent siblings)
11 siblings, 0 replies; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Bin Meng, Alistair Francis, Atish Patra, Bin Meng,
Palmer Dabbelt, qemu-riscv, frank.chang
From: Atish Patra <atish.patra@wdc.com>
The PMU counters are enabled via the cpu config "Counters", which doesn't
indicate the correct purpose of those counters.
Rename the config property to "pmu" to indicate that these counters
are performance monitoring counters. This also aligns with the cpu options
of the ARM architecture.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.c | 4 ++--
target/riscv/cpu.h | 2 +-
target/riscv/csr.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 05e652135171..1b57b3c43980 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -851,7 +851,7 @@ static void riscv_cpu_init(Object *obj)
{
RISCVCPU *cpu = RISCV_CPU(obj);
- cpu->cfg.ext_counters = true;
+ cpu->cfg.ext_pmu = true;
cpu->cfg.ext_ifencei = true;
cpu->cfg.ext_icsr = true;
cpu->cfg.mmu = true;
@@ -879,7 +879,7 @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
- DEFINE_PROP_BOOL("Counters", RISCVCPU, cfg.ext_counters, true),
+ DEFINE_PROP_BOOL("pmu", RISCVCPU, cfg.ext_pmu, true),
DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 7d6397acdfb1..252c30a55d78 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -397,7 +397,7 @@ struct RISCVCPUConfig {
bool ext_zksed;
bool ext_zksh;
bool ext_zkt;
- bool ext_counters;
+ bool ext_pmu;
bool ext_ifencei;
bool ext_icsr;
bool ext_svinval;
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 58d07c511f98..0ca05c77883c 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -74,8 +74,8 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
RISCVCPU *cpu = RISCV_CPU(cs);
int ctr_index;
- if (!cpu->cfg.ext_counters) {
- /* The Counters extensions is not enabled */
+ if (!cpu->cfg.ext_pmu) {
+ /* The PMU extension is not enabled */
return RISCV_EXCP_ILLEGAL_INST;
}
--
2.25.1
* [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (2 preceding siblings ...)
2022-06-20 23:15 ` [PATCH v10 03/12] target/riscv: pmu: Rename the counters extension to pmu Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-07-04 15:26 ` Weiwei Li
2022-06-20 23:15 ` [PATCH v10 05/12] target/riscv: Implement mcountinhibit CSR Atish Patra
` (7 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Bin Meng, Alistair Francis, Atish Patra, Bin Meng,
Palmer Dabbelt, qemu-riscv, frank.chang
The RISC-V privilege specification provides the flexibility to implement
any subset of the 29 programmable counters. However, QEMU currently
implements all of them.
Make the number configurable through a pmu config parameter, which now
indicates how many programmable counters the cpu implements.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
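For illustration, the new property could be set on the command line as below
(a hypothetical invocation sketch; the machine options and kernel path are
placeholders, not taken from the series):

```shell
# Expose only 8 programmable hpmcounters instead of the default 16
# ("Image" is a placeholder kernel path).
qemu-system-riscv64 -M virt -cpu rv64,pmu-num=8 -nographic -kernel Image
```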
target/riscv/cpu.c | 3 +-
target/riscv/cpu.h | 2 +-
target/riscv/csr.c | 94 ++++++++++++++++++++++++++++++----------------
3 files changed, 63 insertions(+), 36 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 1b57b3c43980..d12c6dc630ca 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -851,7 +851,6 @@ static void riscv_cpu_init(Object *obj)
{
RISCVCPU *cpu = RISCV_CPU(obj);
- cpu->cfg.ext_pmu = true;
cpu->cfg.ext_ifencei = true;
cpu->cfg.ext_icsr = true;
cpu->cfg.mmu = true;
@@ -879,7 +878,7 @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
- DEFINE_PROP_BOOL("pmu", RISCVCPU, cfg.ext_pmu, true),
+ DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 252c30a55d78..ffee54ea5c27 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -397,7 +397,6 @@ struct RISCVCPUConfig {
bool ext_zksed;
bool ext_zksh;
bool ext_zkt;
- bool ext_pmu;
bool ext_ifencei;
bool ext_icsr;
bool ext_svinval;
@@ -421,6 +420,7 @@ struct RISCVCPUConfig {
/* Vendor-specific custom extensions */
bool ext_XVentanaCondOps;
+ uint8_t pmu_num;
char *priv_spec;
char *user_spec;
char *bext_spec;
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 0ca05c77883c..b4a8e15f498f 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -73,9 +73,17 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
CPUState *cs = env_cpu(env);
RISCVCPU *cpu = RISCV_CPU(cs);
int ctr_index;
+ int base_csrno = CSR_HPMCOUNTER3;
+ bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
- if (!cpu->cfg.ext_pmu) {
- /* The PMU extension is not enabled */
+ if (rv32 && csrno >= CSR_CYCLEH) {
+ /* Offset for RV32 hpmcounternh counters */
+ base_csrno += 0x80;
+ }
+ ctr_index = csrno - base_csrno;
+
+ if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
+ /* No counter is enabled in PMU or the counter is out of range */
return RISCV_EXCP_ILLEGAL_INST;
}
@@ -103,7 +111,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
break;
}
- if (riscv_cpu_mxl(env) == MXL_RV32) {
+ if (rv32) {
switch (csrno) {
case CSR_CYCLEH:
if (!get_field(env->mcounteren, COUNTEREN_CY)) {
@@ -158,7 +166,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
break;
}
- if (riscv_cpu_mxl(env) == MXL_RV32) {
+ if (rv32) {
switch (csrno) {
case CSR_CYCLEH:
if (!get_field(env->hcounteren, COUNTEREN_CY) &&
@@ -202,6 +210,26 @@ static RISCVException ctr32(CPURISCVState *env, int csrno)
}
#if !defined(CONFIG_USER_ONLY)
+static RISCVException mctr(CPURISCVState *env, int csrno)
+{
+ CPUState *cs = env_cpu(env);
+ RISCVCPU *cpu = RISCV_CPU(cs);
+ int ctr_index;
+ int base_csrno = CSR_MHPMCOUNTER3;
+
+ if ((riscv_cpu_mxl(env) == MXL_RV32) && csrno >= CSR_MCYCLEH) {
+ /* Offset for RV32 mhpmcounternh counters */
+ base_csrno += 0x80;
+ }
+ ctr_index = csrno - base_csrno;
+ if (!cpu->cfg.pmu_num || ctr_index >= cpu->cfg.pmu_num) {
+ /* The PMU is not enabled or counter is out of range*/
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return RISCV_EXCP_NONE;
+}
+
static RISCVException any(CPURISCVState *env, int csrno)
{
return RISCV_EXCP_NONE;
@@ -3687,35 +3715,35 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_HPMCOUNTER30] = { "hpmcounter30", ctr, read_zero },
[CSR_HPMCOUNTER31] = { "hpmcounter31", ctr, read_zero },
- [CSR_MHPMCOUNTER3] = { "mhpmcounter3", any, read_zero },
- [CSR_MHPMCOUNTER4] = { "mhpmcounter4", any, read_zero },
- [CSR_MHPMCOUNTER5] = { "mhpmcounter5", any, read_zero },
- [CSR_MHPMCOUNTER6] = { "mhpmcounter6", any, read_zero },
- [CSR_MHPMCOUNTER7] = { "mhpmcounter7", any, read_zero },
- [CSR_MHPMCOUNTER8] = { "mhpmcounter8", any, read_zero },
- [CSR_MHPMCOUNTER9] = { "mhpmcounter9", any, read_zero },
- [CSR_MHPMCOUNTER10] = { "mhpmcounter10", any, read_zero },
- [CSR_MHPMCOUNTER11] = { "mhpmcounter11", any, read_zero },
- [CSR_MHPMCOUNTER12] = { "mhpmcounter12", any, read_zero },
- [CSR_MHPMCOUNTER13] = { "mhpmcounter13", any, read_zero },
- [CSR_MHPMCOUNTER14] = { "mhpmcounter14", any, read_zero },
- [CSR_MHPMCOUNTER15] = { "mhpmcounter15", any, read_zero },
- [CSR_MHPMCOUNTER16] = { "mhpmcounter16", any, read_zero },
- [CSR_MHPMCOUNTER17] = { "mhpmcounter17", any, read_zero },
- [CSR_MHPMCOUNTER18] = { "mhpmcounter18", any, read_zero },
- [CSR_MHPMCOUNTER19] = { "mhpmcounter19", any, read_zero },
- [CSR_MHPMCOUNTER20] = { "mhpmcounter20", any, read_zero },
- [CSR_MHPMCOUNTER21] = { "mhpmcounter21", any, read_zero },
- [CSR_MHPMCOUNTER22] = { "mhpmcounter22", any, read_zero },
- [CSR_MHPMCOUNTER23] = { "mhpmcounter23", any, read_zero },
- [CSR_MHPMCOUNTER24] = { "mhpmcounter24", any, read_zero },
- [CSR_MHPMCOUNTER25] = { "mhpmcounter25", any, read_zero },
- [CSR_MHPMCOUNTER26] = { "mhpmcounter26", any, read_zero },
- [CSR_MHPMCOUNTER27] = { "mhpmcounter27", any, read_zero },
- [CSR_MHPMCOUNTER28] = { "mhpmcounter28", any, read_zero },
- [CSR_MHPMCOUNTER29] = { "mhpmcounter29", any, read_zero },
- [CSR_MHPMCOUNTER30] = { "mhpmcounter30", any, read_zero },
- [CSR_MHPMCOUNTER31] = { "mhpmcounter31", any, read_zero },
+ [CSR_MHPMCOUNTER3] = { "mhpmcounter3", mctr, read_zero },
+ [CSR_MHPMCOUNTER4] = { "mhpmcounter4", mctr, read_zero },
+ [CSR_MHPMCOUNTER5] = { "mhpmcounter5", mctr, read_zero },
+ [CSR_MHPMCOUNTER6] = { "mhpmcounter6", mctr, read_zero },
+ [CSR_MHPMCOUNTER7] = { "mhpmcounter7", mctr, read_zero },
+ [CSR_MHPMCOUNTER8] = { "mhpmcounter8", mctr, read_zero },
+ [CSR_MHPMCOUNTER9] = { "mhpmcounter9", mctr, read_zero },
+ [CSR_MHPMCOUNTER10] = { "mhpmcounter10", mctr, read_zero },
+ [CSR_MHPMCOUNTER11] = { "mhpmcounter11", mctr, read_zero },
+ [CSR_MHPMCOUNTER12] = { "mhpmcounter12", mctr, read_zero },
+ [CSR_MHPMCOUNTER13] = { "mhpmcounter13", mctr, read_zero },
+ [CSR_MHPMCOUNTER14] = { "mhpmcounter14", mctr, read_zero },
+ [CSR_MHPMCOUNTER15] = { "mhpmcounter15", mctr, read_zero },
+ [CSR_MHPMCOUNTER16] = { "mhpmcounter16", mctr, read_zero },
+ [CSR_MHPMCOUNTER17] = { "mhpmcounter17", mctr, read_zero },
+ [CSR_MHPMCOUNTER18] = { "mhpmcounter18", mctr, read_zero },
+ [CSR_MHPMCOUNTER19] = { "mhpmcounter19", mctr, read_zero },
+ [CSR_MHPMCOUNTER20] = { "mhpmcounter20", mctr, read_zero },
+ [CSR_MHPMCOUNTER21] = { "mhpmcounter21", mctr, read_zero },
+ [CSR_MHPMCOUNTER22] = { "mhpmcounter22", mctr, read_zero },
+ [CSR_MHPMCOUNTER23] = { "mhpmcounter23", mctr, read_zero },
+ [CSR_MHPMCOUNTER24] = { "mhpmcounter24", mctr, read_zero },
+ [CSR_MHPMCOUNTER25] = { "mhpmcounter25", mctr, read_zero },
+ [CSR_MHPMCOUNTER26] = { "mhpmcounter26", mctr, read_zero },
+ [CSR_MHPMCOUNTER27] = { "mhpmcounter27", mctr, read_zero },
+ [CSR_MHPMCOUNTER28] = { "mhpmcounter28", mctr, read_zero },
+ [CSR_MHPMCOUNTER29] = { "mhpmcounter29", mctr, read_zero },
+ [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
+ [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
[CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
[CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
--
2.25.1
* [PATCH v10 05/12] target/riscv: Implement mcountinhibit CSR
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (3 preceding siblings ...)
2022-06-20 23:15 ` [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-07-04 15:31 ` Weiwei Li
2022-06-20 23:15 ` [PATCH v10 06/12] target/riscv: Add support for hpmcounters/hpmevents Atish Patra
` (6 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Bin Meng, Alistair Francis, Atish Patra, Bin Meng,
Palmer Dabbelt, qemu-riscv, frank.chang
From: Atish Patra <atish.patra@wdc.com>
As per the privilege specification v1.11, mcountinhibit allows starting/stopping
a PMU counter selectively.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.h | 2 ++
target/riscv/cpu_bits.h | 4 ++++
target/riscv/csr.c | 25 +++++++++++++++++++++++++
target/riscv/machine.c | 1 +
4 files changed, 32 insertions(+)
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index ffee54ea5c27..0a916db9f614 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -275,6 +275,8 @@ struct CPUArchState {
target_ulong scounteren;
target_ulong mcounteren;
+ target_ulong mcountinhibit;
+
target_ulong sscratch;
target_ulong mscratch;
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index 4d04b20d064e..b3f7fa713000 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -367,6 +367,10 @@
#define CSR_MHPMCOUNTER29 0xb1d
#define CSR_MHPMCOUNTER30 0xb1e
#define CSR_MHPMCOUNTER31 0xb1f
+
+/* Machine counter-inhibit register */
+#define CSR_MCOUNTINHIBIT 0x320
+
#define CSR_MHPMEVENT3 0x323
#define CSR_MHPMEVENT4 0x324
#define CSR_MHPMEVENT5 0x325
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index b4a8e15f498f..94d39a4ce1c5 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -1475,6 +1475,28 @@ static RISCVException write_mtvec(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+static RISCVException read_mcountinhibit(CPURISCVState *env, int csrno,
+ target_ulong *val)
+{
+ if (env->priv_ver < PRIV_VERSION_1_11_0) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ *val = env->mcountinhibit;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
+ target_ulong val)
+{
+ if (env->priv_ver < PRIV_VERSION_1_11_0) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ env->mcountinhibit = val;
+ return RISCV_EXCP_NONE;
+}
+
static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
target_ulong *val)
{
@@ -3745,6 +3767,9 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
[CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
+ [CSR_MCOUNTINHIBIT] = { "mcountinhibit", any, read_mcountinhibit,
+ write_mcountinhibit },
+
[CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
[CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
[CSR_MHPMEVENT5] = { "mhpmevent5", any, read_zero },
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index 2a437b29a1ce..87cd55bfd3a7 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -330,6 +330,7 @@ const VMStateDescription vmstate_riscv_cpu = {
VMSTATE_UINTTL(env.siselect, RISCVCPU),
VMSTATE_UINTTL(env.scounteren, RISCVCPU),
VMSTATE_UINTTL(env.mcounteren, RISCVCPU),
+ VMSTATE_UINTTL(env.mcountinhibit, RISCVCPU),
VMSTATE_UINTTL(env.sscratch, RISCVCPU),
VMSTATE_UINTTL(env.mscratch, RISCVCPU),
VMSTATE_UINT64(env.mfromhost, RISCVCPU),
--
2.25.1
* [PATCH v10 06/12] target/riscv: Add support for hpmcounters/hpmevents
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (4 preceding siblings ...)
2022-06-20 23:15 ` [PATCH v10 05/12] target/riscv: Implement mcountinhibit CSR Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-06-20 23:15 ` [PATCH v10 07/12] target/riscv: Support mcycle/minstret write operation Atish Patra
` (5 subsequent siblings)
11 siblings, 0 replies; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Bin Meng, Atish Patra, Bin Meng,
Palmer Dabbelt, qemu-riscv, frank.chang
From: Atish Patra <atish.patra@wdc.com>
With the SBI PMU extension, users can use any of the available hpmcounters to
track perf events based on the value written to the mhpmevent CSR.
Add read/write functionality for these CSRs.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.h | 11 +
target/riscv/csr.c | 469 ++++++++++++++++++++++++++++-------------
target/riscv/machine.c | 3 +
3 files changed, 331 insertions(+), 152 deletions(-)
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 0a916db9f614..199d0d570bdd 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -117,6 +117,8 @@ typedef struct CPUArchState CPURISCVState;
#endif
#define RV_VLEN_MAX 1024
+#define RV_MAX_MHPMEVENTS 29
+#define RV_MAX_MHPMCOUNTERS 32
FIELD(VTYPE, VLMUL, 0, 3)
FIELD(VTYPE, VSEW, 3, 3)
@@ -277,6 +279,15 @@ struct CPUArchState {
target_ulong mcountinhibit;
+ /* PMU counter configured values */
+ target_ulong mhpmcounter_val[RV_MAX_MHPMCOUNTERS];
+
+ /* for RV32 */
+ target_ulong mhpmcounterh_val[RV_MAX_MHPMCOUNTERS];
+
+ /* PMU event selector configured values */
+ target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
+
target_ulong sscratch;
target_ulong mscratch;
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 94d39a4ce1c5..b931a3970e0f 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -230,6 +230,15 @@ static RISCVException mctr(CPURISCVState *env, int csrno)
return RISCV_EXCP_NONE;
}
+static RISCVException mctr32(CPURISCVState *env, int csrno)
+{
+ if (riscv_cpu_mxl(env) != MXL_RV32) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return mctr(env, csrno);
+}
+
static RISCVException any(CPURISCVState *env, int csrno)
{
return RISCV_EXCP_NONE;
@@ -635,6 +644,75 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
#else /* CONFIG_USER_ONLY */
+static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
+{
+ int evt_index = csrno - CSR_MHPMEVENT3;
+
+ *val = env->mhpmevent_val[evt_index];
+
+ return RISCV_EXCP_NONE;
+}
+
+static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
+{
+ int evt_index = csrno - CSR_MHPMEVENT3;
+
+ env->mhpmevent_val[evt_index] = val;
+
+ return RISCV_EXCP_NONE;
+}
+
+static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
+{
+ int ctr_index = csrno - CSR_MHPMCOUNTER3 + 3;
+
+ env->mhpmcounter_val[ctr_index] = val;
+
+ return RISCV_EXCP_NONE;
+}
+
+static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
+{
+ int ctr_index = csrno - CSR_MHPMCOUNTER3H + 3;
+
+ env->mhpmcounterh_val[ctr_index] = val;
+
+ return RISCV_EXCP_NONE;
+}
+
+static int read_hpmcounter(CPURISCVState *env, int csrno, target_ulong *val)
+{
+ int ctr_index;
+
+ if (csrno >= CSR_MCYCLE && csrno <= CSR_MHPMCOUNTER31) {
+ ctr_index = csrno - CSR_MHPMCOUNTER3 + 3;
+ } else if (csrno >= CSR_CYCLE && csrno <= CSR_HPMCOUNTER31) {
+ ctr_index = csrno - CSR_HPMCOUNTER3 + 3;
+ } else {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ *val = env->mhpmcounter_val[ctr_index];
+
+ return RISCV_EXCP_NONE;
+}
+
+static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
+{
+ int ctr_index;
+
+ if (csrno >= CSR_MCYCLEH && csrno <= CSR_MHPMCOUNTER31H) {
+ ctr_index = csrno - CSR_MHPMCOUNTER3H + 3;
+ } else if (csrno >= CSR_CYCLEH && csrno <= CSR_HPMCOUNTER31H) {
+ ctr_index = csrno - CSR_HPMCOUNTER3H + 3;
+ } else {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ *val = env->mhpmcounterh_val[ctr_index];
+
+ return RISCV_EXCP_NONE;
+}
+
+
static RISCVException read_time(CPURISCVState *env, int csrno,
target_ulong *val)
{
@@ -3707,157 +3785,244 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_SPMBASE] = { "spmbase", pointer_masking, read_spmbase, write_spmbase },
/* Performance Counters */
- [CSR_HPMCOUNTER3] = { "hpmcounter3", ctr, read_zero },
- [CSR_HPMCOUNTER4] = { "hpmcounter4", ctr, read_zero },
- [CSR_HPMCOUNTER5] = { "hpmcounter5", ctr, read_zero },
- [CSR_HPMCOUNTER6] = { "hpmcounter6", ctr, read_zero },
- [CSR_HPMCOUNTER7] = { "hpmcounter7", ctr, read_zero },
- [CSR_HPMCOUNTER8] = { "hpmcounter8", ctr, read_zero },
- [CSR_HPMCOUNTER9] = { "hpmcounter9", ctr, read_zero },
- [CSR_HPMCOUNTER10] = { "hpmcounter10", ctr, read_zero },
- [CSR_HPMCOUNTER11] = { "hpmcounter11", ctr, read_zero },
- [CSR_HPMCOUNTER12] = { "hpmcounter12", ctr, read_zero },
- [CSR_HPMCOUNTER13] = { "hpmcounter13", ctr, read_zero },
- [CSR_HPMCOUNTER14] = { "hpmcounter14", ctr, read_zero },
- [CSR_HPMCOUNTER15] = { "hpmcounter15", ctr, read_zero },
- [CSR_HPMCOUNTER16] = { "hpmcounter16", ctr, read_zero },
- [CSR_HPMCOUNTER17] = { "hpmcounter17", ctr, read_zero },
- [CSR_HPMCOUNTER18] = { "hpmcounter18", ctr, read_zero },
- [CSR_HPMCOUNTER19] = { "hpmcounter19", ctr, read_zero },
- [CSR_HPMCOUNTER20] = { "hpmcounter20", ctr, read_zero },
- [CSR_HPMCOUNTER21] = { "hpmcounter21", ctr, read_zero },
- [CSR_HPMCOUNTER22] = { "hpmcounter22", ctr, read_zero },
- [CSR_HPMCOUNTER23] = { "hpmcounter23", ctr, read_zero },
- [CSR_HPMCOUNTER24] = { "hpmcounter24", ctr, read_zero },
- [CSR_HPMCOUNTER25] = { "hpmcounter25", ctr, read_zero },
- [CSR_HPMCOUNTER26] = { "hpmcounter26", ctr, read_zero },
- [CSR_HPMCOUNTER27] = { "hpmcounter27", ctr, read_zero },
- [CSR_HPMCOUNTER28] = { "hpmcounter28", ctr, read_zero },
- [CSR_HPMCOUNTER29] = { "hpmcounter29", ctr, read_zero },
- [CSR_HPMCOUNTER30] = { "hpmcounter30", ctr, read_zero },
- [CSR_HPMCOUNTER31] = { "hpmcounter31", ctr, read_zero },
-
- [CSR_MHPMCOUNTER3] = { "mhpmcounter3", mctr, read_zero },
- [CSR_MHPMCOUNTER4] = { "mhpmcounter4", mctr, read_zero },
- [CSR_MHPMCOUNTER5] = { "mhpmcounter5", mctr, read_zero },
- [CSR_MHPMCOUNTER6] = { "mhpmcounter6", mctr, read_zero },
- [CSR_MHPMCOUNTER7] = { "mhpmcounter7", mctr, read_zero },
- [CSR_MHPMCOUNTER8] = { "mhpmcounter8", mctr, read_zero },
- [CSR_MHPMCOUNTER9] = { "mhpmcounter9", mctr, read_zero },
- [CSR_MHPMCOUNTER10] = { "mhpmcounter10", mctr, read_zero },
- [CSR_MHPMCOUNTER11] = { "mhpmcounter11", mctr, read_zero },
- [CSR_MHPMCOUNTER12] = { "mhpmcounter12", mctr, read_zero },
- [CSR_MHPMCOUNTER13] = { "mhpmcounter13", mctr, read_zero },
- [CSR_MHPMCOUNTER14] = { "mhpmcounter14", mctr, read_zero },
- [CSR_MHPMCOUNTER15] = { "mhpmcounter15", mctr, read_zero },
- [CSR_MHPMCOUNTER16] = { "mhpmcounter16", mctr, read_zero },
- [CSR_MHPMCOUNTER17] = { "mhpmcounter17", mctr, read_zero },
- [CSR_MHPMCOUNTER18] = { "mhpmcounter18", mctr, read_zero },
- [CSR_MHPMCOUNTER19] = { "mhpmcounter19", mctr, read_zero },
- [CSR_MHPMCOUNTER20] = { "mhpmcounter20", mctr, read_zero },
- [CSR_MHPMCOUNTER21] = { "mhpmcounter21", mctr, read_zero },
- [CSR_MHPMCOUNTER22] = { "mhpmcounter22", mctr, read_zero },
- [CSR_MHPMCOUNTER23] = { "mhpmcounter23", mctr, read_zero },
- [CSR_MHPMCOUNTER24] = { "mhpmcounter24", mctr, read_zero },
- [CSR_MHPMCOUNTER25] = { "mhpmcounter25", mctr, read_zero },
- [CSR_MHPMCOUNTER26] = { "mhpmcounter26", mctr, read_zero },
- [CSR_MHPMCOUNTER27] = { "mhpmcounter27", mctr, read_zero },
- [CSR_MHPMCOUNTER28] = { "mhpmcounter28", mctr, read_zero },
- [CSR_MHPMCOUNTER29] = { "mhpmcounter29", mctr, read_zero },
- [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
- [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
-
- [CSR_MCOUNTINHIBIT] = { "mcountinhibit", any, read_mcountinhibit,
- write_mcountinhibit },
-
- [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
- [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
- [CSR_MHPMEVENT5] = { "mhpmevent5", any, read_zero },
- [CSR_MHPMEVENT6] = { "mhpmevent6", any, read_zero },
- [CSR_MHPMEVENT7] = { "mhpmevent7", any, read_zero },
- [CSR_MHPMEVENT8] = { "mhpmevent8", any, read_zero },
- [CSR_MHPMEVENT9] = { "mhpmevent9", any, read_zero },
- [CSR_MHPMEVENT10] = { "mhpmevent10", any, read_zero },
- [CSR_MHPMEVENT11] = { "mhpmevent11", any, read_zero },
- [CSR_MHPMEVENT12] = { "mhpmevent12", any, read_zero },
- [CSR_MHPMEVENT13] = { "mhpmevent13", any, read_zero },
- [CSR_MHPMEVENT14] = { "mhpmevent14", any, read_zero },
- [CSR_MHPMEVENT15] = { "mhpmevent15", any, read_zero },
- [CSR_MHPMEVENT16] = { "mhpmevent16", any, read_zero },
- [CSR_MHPMEVENT17] = { "mhpmevent17", any, read_zero },
- [CSR_MHPMEVENT18] = { "mhpmevent18", any, read_zero },
- [CSR_MHPMEVENT19] = { "mhpmevent19", any, read_zero },
- [CSR_MHPMEVENT20] = { "mhpmevent20", any, read_zero },
- [CSR_MHPMEVENT21] = { "mhpmevent21", any, read_zero },
- [CSR_MHPMEVENT22] = { "mhpmevent22", any, read_zero },
- [CSR_MHPMEVENT23] = { "mhpmevent23", any, read_zero },
- [CSR_MHPMEVENT24] = { "mhpmevent24", any, read_zero },
- [CSR_MHPMEVENT25] = { "mhpmevent25", any, read_zero },
- [CSR_MHPMEVENT26] = { "mhpmevent26", any, read_zero },
- [CSR_MHPMEVENT27] = { "mhpmevent27", any, read_zero },
- [CSR_MHPMEVENT28] = { "mhpmevent28", any, read_zero },
- [CSR_MHPMEVENT29] = { "mhpmevent29", any, read_zero },
- [CSR_MHPMEVENT30] = { "mhpmevent30", any, read_zero },
- [CSR_MHPMEVENT31] = { "mhpmevent31", any, read_zero },
-
- [CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_zero },
- [CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_zero },
- [CSR_HPMCOUNTER5H] = { "hpmcounter5h", ctr32, read_zero },
- [CSR_HPMCOUNTER6H] = { "hpmcounter6h", ctr32, read_zero },
- [CSR_HPMCOUNTER7H] = { "hpmcounter7h", ctr32, read_zero },
- [CSR_HPMCOUNTER8H] = { "hpmcounter8h", ctr32, read_zero },
- [CSR_HPMCOUNTER9H] = { "hpmcounter9h", ctr32, read_zero },
- [CSR_HPMCOUNTER10H] = { "hpmcounter10h", ctr32, read_zero },
- [CSR_HPMCOUNTER11H] = { "hpmcounter11h", ctr32, read_zero },
- [CSR_HPMCOUNTER12H] = { "hpmcounter12h", ctr32, read_zero },
- [CSR_HPMCOUNTER13H] = { "hpmcounter13h", ctr32, read_zero },
- [CSR_HPMCOUNTER14H] = { "hpmcounter14h", ctr32, read_zero },
- [CSR_HPMCOUNTER15H] = { "hpmcounter15h", ctr32, read_zero },
- [CSR_HPMCOUNTER16H] = { "hpmcounter16h", ctr32, read_zero },
- [CSR_HPMCOUNTER17H] = { "hpmcounter17h", ctr32, read_zero },
- [CSR_HPMCOUNTER18H] = { "hpmcounter18h", ctr32, read_zero },
- [CSR_HPMCOUNTER19H] = { "hpmcounter19h", ctr32, read_zero },
- [CSR_HPMCOUNTER20H] = { "hpmcounter20h", ctr32, read_zero },
- [CSR_HPMCOUNTER21H] = { "hpmcounter21h", ctr32, read_zero },
- [CSR_HPMCOUNTER22H] = { "hpmcounter22h", ctr32, read_zero },
- [CSR_HPMCOUNTER23H] = { "hpmcounter23h", ctr32, read_zero },
- [CSR_HPMCOUNTER24H] = { "hpmcounter24h", ctr32, read_zero },
- [CSR_HPMCOUNTER25H] = { "hpmcounter25h", ctr32, read_zero },
- [CSR_HPMCOUNTER26H] = { "hpmcounter26h", ctr32, read_zero },
- [CSR_HPMCOUNTER27H] = { "hpmcounter27h", ctr32, read_zero },
- [CSR_HPMCOUNTER28H] = { "hpmcounter28h", ctr32, read_zero },
- [CSR_HPMCOUNTER29H] = { "hpmcounter29h", ctr32, read_zero },
- [CSR_HPMCOUNTER30H] = { "hpmcounter30h", ctr32, read_zero },
- [CSR_HPMCOUNTER31H] = { "hpmcounter31h", ctr32, read_zero },
-
- [CSR_MHPMCOUNTER3H] = { "mhpmcounter3h", any32, read_zero },
- [CSR_MHPMCOUNTER4H] = { "mhpmcounter4h", any32, read_zero },
- [CSR_MHPMCOUNTER5H] = { "mhpmcounter5h", any32, read_zero },
- [CSR_MHPMCOUNTER6H] = { "mhpmcounter6h", any32, read_zero },
- [CSR_MHPMCOUNTER7H] = { "mhpmcounter7h", any32, read_zero },
- [CSR_MHPMCOUNTER8H] = { "mhpmcounter8h", any32, read_zero },
- [CSR_MHPMCOUNTER9H] = { "mhpmcounter9h", any32, read_zero },
- [CSR_MHPMCOUNTER10H] = { "mhpmcounter10h", any32, read_zero },
- [CSR_MHPMCOUNTER11H] = { "mhpmcounter11h", any32, read_zero },
- [CSR_MHPMCOUNTER12H] = { "mhpmcounter12h", any32, read_zero },
- [CSR_MHPMCOUNTER13H] = { "mhpmcounter13h", any32, read_zero },
- [CSR_MHPMCOUNTER14H] = { "mhpmcounter14h", any32, read_zero },
- [CSR_MHPMCOUNTER15H] = { "mhpmcounter15h", any32, read_zero },
- [CSR_MHPMCOUNTER16H] = { "mhpmcounter16h", any32, read_zero },
- [CSR_MHPMCOUNTER17H] = { "mhpmcounter17h", any32, read_zero },
- [CSR_MHPMCOUNTER18H] = { "mhpmcounter18h", any32, read_zero },
- [CSR_MHPMCOUNTER19H] = { "mhpmcounter19h", any32, read_zero },
- [CSR_MHPMCOUNTER20H] = { "mhpmcounter20h", any32, read_zero },
- [CSR_MHPMCOUNTER21H] = { "mhpmcounter21h", any32, read_zero },
- [CSR_MHPMCOUNTER22H] = { "mhpmcounter22h", any32, read_zero },
- [CSR_MHPMCOUNTER23H] = { "mhpmcounter23h", any32, read_zero },
- [CSR_MHPMCOUNTER24H] = { "mhpmcounter24h", any32, read_zero },
- [CSR_MHPMCOUNTER25H] = { "mhpmcounter25h", any32, read_zero },
- [CSR_MHPMCOUNTER26H] = { "mhpmcounter26h", any32, read_zero },
- [CSR_MHPMCOUNTER27H] = { "mhpmcounter27h", any32, read_zero },
- [CSR_MHPMCOUNTER28H] = { "mhpmcounter28h", any32, read_zero },
- [CSR_MHPMCOUNTER29H] = { "mhpmcounter29h", any32, read_zero },
- [CSR_MHPMCOUNTER30H] = { "mhpmcounter30h", any32, read_zero },
- [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", any32, read_zero },
+ [CSR_HPMCOUNTER3] = { "hpmcounter3", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER4] = { "hpmcounter4", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER5] = { "hpmcounter5", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER6] = { "hpmcounter6", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER7] = { "hpmcounter7", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER8] = { "hpmcounter8", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER9] = { "hpmcounter9", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER10] = { "hpmcounter10", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER11] = { "hpmcounter11", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER12] = { "hpmcounter12", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER13] = { "hpmcounter13", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER14] = { "hpmcounter14", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER15] = { "hpmcounter15", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER16] = { "hpmcounter16", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER17] = { "hpmcounter17", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER18] = { "hpmcounter18", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER19] = { "hpmcounter19", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER20] = { "hpmcounter20", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER21] = { "hpmcounter21", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER22] = { "hpmcounter22", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER23] = { "hpmcounter23", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER24] = { "hpmcounter24", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER25] = { "hpmcounter25", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER26] = { "hpmcounter26", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER27] = { "hpmcounter27", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER28] = { "hpmcounter28", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER29] = { "hpmcounter29", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER30] = { "hpmcounter30", ctr, read_hpmcounter },
+ [CSR_HPMCOUNTER31] = { "hpmcounter31", ctr, read_hpmcounter },
+
+ [CSR_MHPMCOUNTER3] = { "mhpmcounter3", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER4] = { "mhpmcounter4", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER5] = { "mhpmcounter5", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER6] = { "mhpmcounter6", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER7] = { "mhpmcounter7", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER8] = { "mhpmcounter8", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER9] = { "mhpmcounter9", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER10] = { "mhpmcounter10", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER11] = { "mhpmcounter11", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER12] = { "mhpmcounter12", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER13] = { "mhpmcounter13", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER14] = { "mhpmcounter14", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER15] = { "mhpmcounter15", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER16] = { "mhpmcounter16", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER17] = { "mhpmcounter17", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER18] = { "mhpmcounter18", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER19] = { "mhpmcounter19", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER20] = { "mhpmcounter20", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER21] = { "mhpmcounter21", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER22] = { "mhpmcounter22", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER23] = { "mhpmcounter23", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER24] = { "mhpmcounter24", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER25] = { "mhpmcounter25", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER26] = { "mhpmcounter26", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER27] = { "mhpmcounter27", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER28] = { "mhpmcounter28", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER29] = { "mhpmcounter29", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_hpmcounter,
+ write_mhpmcounter },
+ [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_hpmcounter,
+ write_mhpmcounter },
+
+ [CSR_MCOUNTINHIBIT] = { "mcountinhibit", any, read_mcountinhibit,
+ write_mcountinhibit },
+
+ [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT5] = { "mhpmevent5", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT6] = { "mhpmevent6", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT7] = { "mhpmevent7", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT8] = { "mhpmevent8", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT9] = { "mhpmevent9", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT10] = { "mhpmevent10", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT11] = { "mhpmevent11", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT12] = { "mhpmevent12", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT13] = { "mhpmevent13", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT14] = { "mhpmevent14", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT15] = { "mhpmevent15", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT16] = { "mhpmevent16", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT17] = { "mhpmevent17", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT18] = { "mhpmevent18", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT19] = { "mhpmevent19", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT20] = { "mhpmevent20", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT21] = { "mhpmevent21", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT22] = { "mhpmevent22", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT23] = { "mhpmevent23", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT24] = { "mhpmevent24", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT25] = { "mhpmevent25", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT26] = { "mhpmevent26", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT27] = { "mhpmevent27", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT28] = { "mhpmevent28", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT29] = { "mhpmevent29", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT30] = { "mhpmevent30", any, read_mhpmevent,
+ write_mhpmevent },
+ [CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
+ write_mhpmevent },
+
+ [CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER5H] = { "hpmcounter5h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER6H] = { "hpmcounter6h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER7H] = { "hpmcounter7h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER8H] = { "hpmcounter8h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER9H] = { "hpmcounter9h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER10H] = { "hpmcounter10h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER11H] = { "hpmcounter11h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER12H] = { "hpmcounter12h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER13H] = { "hpmcounter13h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER14H] = { "hpmcounter14h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER15H] = { "hpmcounter15h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER16H] = { "hpmcounter16h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER17H] = { "hpmcounter17h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER18H] = { "hpmcounter18h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER19H] = { "hpmcounter19h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER20H] = { "hpmcounter20h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER21H] = { "hpmcounter21h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER22H] = { "hpmcounter22h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER23H] = { "hpmcounter23h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER24H] = { "hpmcounter24h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER25H] = { "hpmcounter25h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER26H] = { "hpmcounter26h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER27H] = { "hpmcounter27h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER28H] = { "hpmcounter28h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER29H] = { "hpmcounter29h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER30H] = { "hpmcounter30h", ctr32, read_hpmcounterh },
+ [CSR_HPMCOUNTER31H] = { "hpmcounter31h", ctr32, read_hpmcounterh },
+
+ [CSR_MHPMCOUNTER3H] = { "mhpmcounter3h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER4H] = { "mhpmcounter4h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER5H] = { "mhpmcounter5h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER6H] = { "mhpmcounter6h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER7H] = { "mhpmcounter7h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER8H] = { "mhpmcounter8h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER9H] = { "mhpmcounter9h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER10H] = { "mhpmcounter10h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER11H] = { "mhpmcounter11h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER12H] = { "mhpmcounter12h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER13H] = { "mhpmcounter13h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER14H] = { "mhpmcounter14h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER15H] = { "mhpmcounter15h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER16H] = { "mhpmcounter16h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER17H] = { "mhpmcounter17h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER18H] = { "mhpmcounter18h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER19H] = { "mhpmcounter19h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER20H] = { "mhpmcounter20h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER21H] = { "mhpmcounter21h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER22H] = { "mhpmcounter22h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER23H] = { "mhpmcounter23h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER24H] = { "mhpmcounter24h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER25H] = { "mhpmcounter25h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER26H] = { "mhpmcounter26h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER27H] = { "mhpmcounter27h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER28H] = { "mhpmcounter28h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER29H] = { "mhpmcounter29h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER30H] = { "mhpmcounter30h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
+ [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
+ write_mhpmcounterh },
#endif /* !CONFIG_USER_ONLY */
};
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index 87cd55bfd3a7..99193c85bb97 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -331,6 +331,9 @@ const VMStateDescription vmstate_riscv_cpu = {
VMSTATE_UINTTL(env.scounteren, RISCVCPU),
VMSTATE_UINTTL(env.mcounteren, RISCVCPU),
VMSTATE_UINTTL(env.mcountinhibit, RISCVCPU),
+ VMSTATE_UINTTL_ARRAY(env.mhpmcounter_val, RISCVCPU, RV_MAX_MHPMCOUNTERS),
+ VMSTATE_UINTTL_ARRAY(env.mhpmcounterh_val, RISCVCPU, RV_MAX_MHPMCOUNTERS),
+ VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
VMSTATE_UINTTL(env.sscratch, RISCVCPU),
VMSTATE_UINTTL(env.mscratch, RISCVCPU),
VMSTATE_UINT64(env.mfromhost, RISCVCPU),
--
2.25.1
* [PATCH v10 07/12] target/riscv: Support mcycle/minstret write operation
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (5 preceding siblings ...)
2022-06-20 23:15 ` [PATCH v10 06/12] target/riscv: Add support for hpmcounters/hpmevents Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-06-20 23:15 ` [PATCH v10 08/12] target/riscv: Add sscofpmf extension support Atish Patra
` (4 subsequent siblings)
11 siblings, 0 replies; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Atish Patra, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang
From: Atish Patra <atish.patra@wdc.com>
mcycle/minstret are actually WARL registers and can be written with any
given value. With the SBI PMU extension, they will be used to store an
initial value provided by the supervisor OS. QEMU also needs to prohibit
counter increments while the corresponding mcountinhibit bit is set.
Support mcycle/minstret through generic counter infrastructure.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.h | 23 ++++--
target/riscv/csr.c | 155 ++++++++++++++++++++++++++++-----------
target/riscv/machine.c | 25 ++++++-
target/riscv/meson.build | 3 +-
target/riscv/pmu.c | 32 ++++++++
target/riscv/pmu.h | 28 +++++++
6 files changed, 213 insertions(+), 53 deletions(-)
create mode 100644 target/riscv/pmu.c
create mode 100644 target/riscv/pmu.h
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 199d0d570bdd..5c7acc055ac9 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -117,7 +117,7 @@ typedef struct CPUArchState CPURISCVState;
#endif
#define RV_VLEN_MAX 1024
-#define RV_MAX_MHPMEVENTS 29
+#define RV_MAX_MHPMEVENTS 32
#define RV_MAX_MHPMCOUNTERS 32
FIELD(VTYPE, VLMUL, 0, 3)
@@ -127,6 +127,18 @@ FIELD(VTYPE, VMA, 7, 1)
FIELD(VTYPE, VEDIV, 8, 2)
FIELD(VTYPE, RESERVED, 10, sizeof(target_ulong) * 8 - 11)
+typedef struct PMUCTRState {
+ /* Current value of a counter */
+ target_ulong mhpmcounter_val;
+ /* Current value of a counter in RV32 */
+ target_ulong mhpmcounterh_val;
+ /* Snapshot value of a counter */
+ target_ulong mhpmcounter_prev;
+ /* Snapshot value of a counter in RV32 */
+ target_ulong mhpmcounterh_prev;
+ bool started;
+} PMUCTRState;
+
struct CPUArchState {
target_ulong gpr[32];
target_ulong gprh[32]; /* 64 top bits of the 128-bit registers */
@@ -279,13 +291,10 @@ struct CPUArchState {
target_ulong mcountinhibit;
- /* PMU counter configured values */
- target_ulong mhpmcounter_val[RV_MAX_MHPMCOUNTERS];
-
- /* for RV32 */
- target_ulong mhpmcounterh_val[RV_MAX_MHPMCOUNTERS];
+ /* PMU counter state */
+ PMUCTRState pmu_ctrs[RV_MAX_MHPMCOUNTERS];
- /* PMU event selector configured values */
+ /* PMU event selector configured values. The first three are unused. */
target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
target_ulong sscratch;
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index b931a3970e0f..d65318dcc62d 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -21,6 +21,7 @@
#include "qemu/log.h"
#include "qemu/timer.h"
#include "cpu.h"
+#include "pmu.h"
#include "qemu/main-loop.h"
#include "exec/exec-all.h"
#include "sysemu/cpu-timers.h"
@@ -597,34 +598,28 @@ static int write_vcsr(CPURISCVState *env, int csrno, target_ulong val)
}
/* User Timers and Counters */
-static RISCVException read_instret(CPURISCVState *env, int csrno,
- target_ulong *val)
+static target_ulong get_ticks(bool shift)
{
+ int64_t val;
+ target_ulong result;
+
#if !defined(CONFIG_USER_ONLY)
if (icount_enabled()) {
- *val = icount_get();
+ val = icount_get();
} else {
- *val = cpu_get_host_ticks();
+ val = cpu_get_host_ticks();
}
#else
- *val = cpu_get_host_ticks();
+ val = cpu_get_host_ticks();
#endif
- return RISCV_EXCP_NONE;
-}
-static RISCVException read_instreth(CPURISCVState *env, int csrno,
- target_ulong *val)
-{
-#if !defined(CONFIG_USER_ONLY)
- if (icount_enabled()) {
- *val = icount_get() >> 32;
+ if (shift) {
+ result = val >> 32;
} else {
- *val = cpu_get_host_ticks() >> 32;
+ result = val;
}
-#else
- *val = cpu_get_host_ticks() >> 32;
-#endif
- return RISCV_EXCP_NONE;
+
+ return result;
}
#if defined(CONFIG_USER_ONLY)
@@ -642,11 +637,23 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+static int read_hpmcounter(CPURISCVState *env, int csrno, target_ulong *val)
+{
+ *val = get_ticks(false);
+ return RISCV_EXCP_NONE;
+}
+
+static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
+{
+ *val = get_ticks(true);
+ return RISCV_EXCP_NONE;
+}
+
#else /* CONFIG_USER_ONLY */
static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
{
- int evt_index = csrno - CSR_MHPMEVENT3;
+ int evt_index = csrno - CSR_MCOUNTINHIBIT;
*val = env->mhpmevent_val[evt_index];
@@ -655,7 +662,7 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
{
- int evt_index = csrno - CSR_MHPMEVENT3;
+ int evt_index = csrno - CSR_MCOUNTINHIBIT;
env->mhpmevent_val[evt_index] = val;
@@ -664,55 +671,105 @@ static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
{
- int ctr_index = csrno - CSR_MHPMCOUNTER3 + 3;
+ int ctr_idx = csrno - CSR_MCYCLE;
+ PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
- env->mhpmcounter_val[ctr_index] = val;
+ counter->mhpmcounter_val = val;
+ if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
+ riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
+ counter->mhpmcounter_prev = get_ticks(false);
+ } else {
+ /* Other counters can keep incrementing from the given value */
+ counter->mhpmcounter_prev = val;
+ }
return RISCV_EXCP_NONE;
}
static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
{
- int ctr_index = csrno - CSR_MHPMCOUNTER3H + 3;
+ int ctr_idx = csrno - CSR_MCYCLEH;
+ PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
- env->mhpmcounterh_val[ctr_index] = val;
+ counter->mhpmcounterh_val = val;
+ if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
+ riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
+ counter->mhpmcounterh_prev = get_ticks(true);
+ } else {
+ counter->mhpmcounterh_prev = val;
+ }
+
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
+ bool upper_half, uint32_t ctr_idx)
+{
+ PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+ target_ulong ctr_prev = upper_half ? counter->mhpmcounterh_prev :
+ counter->mhpmcounter_prev;
+ target_ulong ctr_val = upper_half ? counter->mhpmcounterh_val :
+ counter->mhpmcounter_val;
+
+ if (get_field(env->mcountinhibit, BIT(ctr_idx))) {
+ /*
+ * The counter should not increment if the inhibit bit is set. We
+ * can't really stop the icount counting. Just return the counter
+ * value written by the supervisor to indicate that the counter was
+ * not incremented.
+ */
+ if (!counter->started) {
+ *val = ctr_val;
+ return RISCV_EXCP_NONE;
+ } else {
+ /* Mark that the counter has been stopped */
+ counter->started = false;
+ }
+ }
+ }
+
+ /*
+ * The guest kernel computes the perf delta by subtracting the value it
+ * initialized previously (ctr_val) from the current counter value.
+ */
+ if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
+ riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
+ *val = get_ticks(upper_half) - ctr_prev + ctr_val;
+ } else {
+ *val = ctr_val;
+ }
return RISCV_EXCP_NONE;
}
static int read_hpmcounter(CPURISCVState *env, int csrno, target_ulong *val)
{
- int ctr_index;
+ uint16_t ctr_index;
if (csrno >= CSR_MCYCLE && csrno <= CSR_MHPMCOUNTER31) {
- ctr_index = csrno - CSR_MHPMCOUNTER3 + 3;
+ ctr_index = csrno - CSR_MCYCLE;
} else if (csrno >= CSR_CYCLE && csrno <= CSR_HPMCOUNTER31) {
- ctr_index = csrno - CSR_HPMCOUNTER3 + 3;
+ ctr_index = csrno - CSR_CYCLE;
} else {
return RISCV_EXCP_ILLEGAL_INST;
}
- *val = env->mhpmcounter_val[ctr_index];
- return RISCV_EXCP_NONE;
+ return riscv_pmu_read_ctr(env, val, false, ctr_index);
}
static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
{
- int ctr_index;
+ uint16_t ctr_index;
if (csrno >= CSR_MCYCLEH && csrno <= CSR_MHPMCOUNTER31H) {
- ctr_index = csrno - CSR_MHPMCOUNTER3H + 3;
+ ctr_index = csrno - CSR_MCYCLEH;
} else if (csrno >= CSR_CYCLEH && csrno <= CSR_HPMCOUNTER31H) {
- ctr_index = csrno - CSR_HPMCOUNTER3H + 3;
+ ctr_index = csrno - CSR_CYCLEH;
} else {
return RISCV_EXCP_ILLEGAL_INST;
}
- *val = env->mhpmcounterh_val[ctr_index];
- return RISCV_EXCP_NONE;
+ return riscv_pmu_read_ctr(env, val, true, ctr_index);
}
-
static RISCVException read_time(CPURISCVState *env, int csrno,
target_ulong *val)
{
@@ -1567,11 +1624,23 @@ static RISCVException read_mcountinhibit(CPURISCVState *env, int csrno,
static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
target_ulong val)
{
+ int cidx;
+ PMUCTRState *counter;
+
if (env->priv_ver < PRIV_VERSION_1_11_0) {
return RISCV_EXCP_ILLEGAL_INST;
}
env->mcountinhibit = val;
+
+ /* Mark all counters that are no longer inhibited as started */
+ for (cidx = 0; cidx < RV_MAX_MHPMCOUNTERS; cidx++) {
+ if (!get_field(env->mcountinhibit, BIT(cidx))) {
+ counter = &env->pmu_ctrs[cidx];
+ counter->started = true;
+ }
+ }
+
return RISCV_EXCP_NONE;
}
@@ -3533,10 +3602,10 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_VLENB] = { "vlenb", vs, read_vlenb,
.min_priv_ver = PRIV_VERSION_1_12_0 },
/* User Timers and Counters */
- [CSR_CYCLE] = { "cycle", ctr, read_instret },
- [CSR_INSTRET] = { "instret", ctr, read_instret },
- [CSR_CYCLEH] = { "cycleh", ctr32, read_instreth },
- [CSR_INSTRETH] = { "instreth", ctr32, read_instreth },
+ [CSR_CYCLE] = { "cycle", ctr, read_hpmcounter },
+ [CSR_INSTRET] = { "instret", ctr, read_hpmcounter },
+ [CSR_CYCLEH] = { "cycleh", ctr32, read_hpmcounterh },
+ [CSR_INSTRETH] = { "instreth", ctr32, read_hpmcounterh },
/*
* In privileged mode, the monitor will have to emulate TIME CSRs only if
@@ -3550,10 +3619,10 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
#if !defined(CONFIG_USER_ONLY)
/* Machine Timers and Counters */
- [CSR_MCYCLE] = { "mcycle", any, read_instret },
- [CSR_MINSTRET] = { "minstret", any, read_instret },
- [CSR_MCYCLEH] = { "mcycleh", any32, read_instreth },
- [CSR_MINSTRETH] = { "minstreth", any32, read_instreth },
+ [CSR_MCYCLE] = { "mcycle", any, read_hpmcounter, write_mhpmcounter},
+ [CSR_MINSTRET] = { "minstret", any, read_hpmcounter, write_mhpmcounter},
+ [CSR_MCYCLEH] = { "mcycleh", any32, read_hpmcounterh, write_mhpmcounterh},
+ [CSR_MINSTRETH] = { "minstreth", any32, read_hpmcounterh, write_mhpmcounterh},
/* Machine Information Registers */
[CSR_MVENDORID] = { "mvendorid", any, read_mvendorid },
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index 99193c85bb97..dc182ca81119 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -279,7 +279,28 @@ static const VMStateDescription vmstate_envcfg = {
VMSTATE_UINT64(env.menvcfg, RISCVCPU),
VMSTATE_UINTTL(env.senvcfg, RISCVCPU),
VMSTATE_UINT64(env.henvcfg, RISCVCPU),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+static bool pmu_needed(void *opaque)
+{
+ RISCVCPU *cpu = opaque;
+ return cpu->cfg.pmu_num;
+}
+
+static const VMStateDescription vmstate_pmu_ctr_state = {
+ .name = "cpu/pmu",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = pmu_needed,
+ .fields = (VMStateField[]) {
+ VMSTATE_UINTTL(mhpmcounter_val, PMUCTRState),
+ VMSTATE_UINTTL(mhpmcounterh_val, PMUCTRState),
+ VMSTATE_UINTTL(mhpmcounter_prev, PMUCTRState),
+ VMSTATE_UINTTL(mhpmcounterh_prev, PMUCTRState),
+ VMSTATE_BOOL(started, PMUCTRState),
VMSTATE_END_OF_LIST()
}
};
@@ -331,8 +352,8 @@ const VMStateDescription vmstate_riscv_cpu = {
VMSTATE_UINTTL(env.scounteren, RISCVCPU),
VMSTATE_UINTTL(env.mcounteren, RISCVCPU),
VMSTATE_UINTTL(env.mcountinhibit, RISCVCPU),
- VMSTATE_UINTTL_ARRAY(env.mhpmcounter_val, RISCVCPU, RV_MAX_MHPMCOUNTERS),
- VMSTATE_UINTTL_ARRAY(env.mhpmcounterh_val, RISCVCPU, RV_MAX_MHPMCOUNTERS),
+ VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
+ vmstate_pmu_ctr_state, PMUCTRState),
VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
VMSTATE_UINTTL(env.sscratch, RISCVCPU),
VMSTATE_UINTTL(env.mscratch, RISCVCPU),
diff --git a/target/riscv/meson.build b/target/riscv/meson.build
index 096249f3a30f..2c1975e72c4e 100644
--- a/target/riscv/meson.build
+++ b/target/riscv/meson.build
@@ -30,7 +30,8 @@ riscv_softmmu_ss.add(files(
'pmp.c',
'debug.c',
'monitor.c',
- 'machine.c'
+ 'machine.c',
+ 'pmu.c'
))
target_arch += {'riscv': riscv_ss}
diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
new file mode 100644
index 000000000000..000fe8da45ef
--- /dev/null
+++ b/target/riscv/pmu.c
@@ -0,0 +1,32 @@
+/*
+ * RISC-V PMU file.
+ *
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "pmu.h"
+
+bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
+ uint32_t target_ctr)
+{
+ return (target_ctr == 0) ? true : false;
+}
+
+bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
+{
+ return (target_ctr == 2) ? true : false;
+}
diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
new file mode 100644
index 000000000000..58a5bc3a4089
--- /dev/null
+++ b/target/riscv/pmu.h
@@ -0,0 +1,28 @@
+/*
+ * RISC-V PMU header file.
+ *
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "cpu.h"
+#include "qemu/main-loop.h"
+#include "exec/exec-all.h"
+
+bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
+ uint32_t target_ctr);
+bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
+ uint32_t target_ctr);
--
2.25.1
* [PATCH v10 08/12] target/riscv: Add sscofpmf extension support
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (6 preceding siblings ...)
2022-06-20 23:15 ` [PATCH v10 07/12] target/riscv: Support mcycle/minstret write operation Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-07-05 0:31 ` Weiwei Li
` (2 more replies)
2022-06-20 23:15 ` [PATCH v10 09/12] target/riscv: Simplify counter predicate function Atish Patra
` (3 subsequent siblings)
11 siblings, 3 replies; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Atish Patra, Alistair Francis, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang
The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
extension allows perf-like tools to handle overflow interrupts and
event filtering. This patch provides a framework for programmable
counters to leverage the extension. As the extension doesn't have any
provision for an overflow bit on the fixed counters, the fixed events
can also be monitored using programmable counters. The underlying
counters for the cycle and instruction counters are always running. Thus,
a separate timer device is programmed to handle the overflow.
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.c | 11 ++
target/riscv/cpu.h | 25 +++
target/riscv/cpu_bits.h | 55 +++++++
target/riscv/csr.c | 165 ++++++++++++++++++-
target/riscv/machine.c | 1 +
target/riscv/pmu.c | 357 +++++++++++++++++++++++++++++++++++++++-
target/riscv/pmu.h | 7 +
7 files changed, 610 insertions(+), 11 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index d12c6dc630ca..7d9e2aca12a9 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -22,6 +22,7 @@
#include "qemu/ctype.h"
#include "qemu/log.h"
#include "cpu.h"
+#include "pmu.h"
#include "internals.h"
#include "exec/exec-all.h"
#include "qapi/error.h"
@@ -775,6 +776,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
set_misa(env, env->misa_mxl, ext);
}
+#ifndef CONFIG_USER_ONLY
+ if (cpu->cfg.pmu_num) {
+ if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
+ cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+ riscv_pmu_timer_cb, cpu);
+ }
+ }
+#endif
+
riscv_cpu_register_gdb_regs_for_features(cs);
qemu_init_vcpu(cs);
@@ -879,6 +889,7 @@ static Property riscv_cpu_extensions[] = {
DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
+ DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 5c7acc055ac9..2222db193c3d 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -137,6 +137,8 @@ typedef struct PMUCTRState {
/* Snapshort value of a counter in RV32 */
target_ulong mhpmcounterh_prev;
bool started;
+ /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
+ target_ulong irq_overflow_left;
} PMUCTRState;
struct CPUArchState {
@@ -297,6 +299,9 @@ struct CPUArchState {
/* PMU event selector configured values. First three are unused*/
target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
+ /* PMU event selector configured values for RV32*/
+ target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
+
target_ulong sscratch;
target_ulong mscratch;
@@ -433,6 +438,7 @@ struct RISCVCPUConfig {
bool ext_zve32f;
bool ext_zve64f;
bool ext_zmmul;
+ bool ext_sscofpmf;
bool rvv_ta_all_1s;
uint32_t mvendorid;
@@ -479,6 +485,12 @@ struct ArchCPU {
/* Configuration Settings */
RISCVCPUConfig cfg;
+
+ QEMUTimer *pmu_timer;
+ /* A bitmask of Available programmable counters */
+ uint32_t pmu_avail_ctrs;
+ /* Mapping of events to counters */
+ GHashTable *pmu_event_ctr_map;
};
static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
@@ -738,6 +750,19 @@ enum {
CSR_TABLE_SIZE = 0x1000
};
+/**
+ * The event id are encoded based on the encoding specified in the
+ * SBI specification v0.3
+ */
+
+enum riscv_pmu_event_idx {
+ RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
+ RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
+ RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
+ RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
+ RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
+};
+
/* CSR function table */
extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index b3f7fa713000..d94abefdaa0f 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -400,6 +400,37 @@
#define CSR_MHPMEVENT29 0x33d
#define CSR_MHPMEVENT30 0x33e
#define CSR_MHPMEVENT31 0x33f
+
+#define CSR_MHPMEVENT3H 0x723
+#define CSR_MHPMEVENT4H 0x724
+#define CSR_MHPMEVENT5H 0x725
+#define CSR_MHPMEVENT6H 0x726
+#define CSR_MHPMEVENT7H 0x727
+#define CSR_MHPMEVENT8H 0x728
+#define CSR_MHPMEVENT9H 0x729
+#define CSR_MHPMEVENT10H 0x72a
+#define CSR_MHPMEVENT11H 0x72b
+#define CSR_MHPMEVENT12H 0x72c
+#define CSR_MHPMEVENT13H 0x72d
+#define CSR_MHPMEVENT14H 0x72e
+#define CSR_MHPMEVENT15H 0x72f
+#define CSR_MHPMEVENT16H 0x730
+#define CSR_MHPMEVENT17H 0x731
+#define CSR_MHPMEVENT18H 0x732
+#define CSR_MHPMEVENT19H 0x733
+#define CSR_MHPMEVENT20H 0x734
+#define CSR_MHPMEVENT21H 0x735
+#define CSR_MHPMEVENT22H 0x736
+#define CSR_MHPMEVENT23H 0x737
+#define CSR_MHPMEVENT24H 0x738
+#define CSR_MHPMEVENT25H 0x739
+#define CSR_MHPMEVENT26H 0x73a
+#define CSR_MHPMEVENT27H 0x73b
+#define CSR_MHPMEVENT28H 0x73c
+#define CSR_MHPMEVENT29H 0x73d
+#define CSR_MHPMEVENT30H 0x73e
+#define CSR_MHPMEVENT31H 0x73f
+
#define CSR_MHPMCOUNTER3H 0xb83
#define CSR_MHPMCOUNTER4H 0xb84
#define CSR_MHPMCOUNTER5H 0xb85
@@ -461,6 +492,7 @@
#define CSR_VSMTE 0x2c0
#define CSR_VSPMMASK 0x2c1
#define CSR_VSPMBASE 0x2c2
+#define CSR_SCOUNTOVF 0xda0
/* Crypto Extension */
#define CSR_SEED 0x015
@@ -638,6 +670,7 @@ typedef enum RISCVException {
#define IRQ_VS_EXT 10
#define IRQ_M_EXT 11
#define IRQ_S_GEXT 12
+#define IRQ_PMU_OVF 13
#define IRQ_LOCAL_MAX 16
#define IRQ_LOCAL_GUEST_MAX (TARGET_LONG_BITS - 1)
@@ -655,11 +688,13 @@ typedef enum RISCVException {
#define MIP_VSEIP (1 << IRQ_VS_EXT)
#define MIP_MEIP (1 << IRQ_M_EXT)
#define MIP_SGEIP (1 << IRQ_S_GEXT)
+#define MIP_LCOFIP (1 << IRQ_PMU_OVF)
/* sip masks */
#define SIP_SSIP MIP_SSIP
#define SIP_STIP MIP_STIP
#define SIP_SEIP MIP_SEIP
+#define SIP_LCOFIP MIP_LCOFIP
/* MIE masks */
#define MIE_SEIE (1 << IRQ_S_EXT)
@@ -813,4 +848,24 @@ typedef enum RISCVException {
#define SEED_OPST_WAIT (0b01 << 30)
#define SEED_OPST_ES16 (0b10 << 30)
#define SEED_OPST_DEAD (0b11 << 30)
+/* PMU related bits */
+#define MIE_LCOFIE (1 << IRQ_PMU_OVF)
+
+#define MHPMEVENT_BIT_OF BIT_ULL(63)
+#define MHPMEVENTH_BIT_OF BIT(31)
+#define MHPMEVENT_BIT_MINH BIT_ULL(62)
+#define MHPMEVENTH_BIT_MINH BIT(30)
+#define MHPMEVENT_BIT_SINH BIT_ULL(61)
+#define MHPMEVENTH_BIT_SINH BIT(29)
+#define MHPMEVENT_BIT_UINH BIT_ULL(60)
+#define MHPMEVENTH_BIT_UINH BIT(28)
+#define MHPMEVENT_BIT_VSINH BIT_ULL(59)
+#define MHPMEVENTH_BIT_VSINH BIT(27)
+#define MHPMEVENT_BIT_VUINH BIT_ULL(58)
+#define MHPMEVENTH_BIT_VUINH BIT(26)
+
+#define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
+#define MHPMEVENT_IDX_MASK 0xFFFFF
+#define MHPMEVENT_SSCOF_RESVD 16
+
#endif
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index d65318dcc62d..2664ce265784 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
CPUState *cs = env_cpu(env);
RISCVCPU *cpu = RISCV_CPU(cs);
int ctr_index;
- int base_csrno = CSR_HPMCOUNTER3;
+ int base_csrno = CSR_CYCLE;
bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
if (rv32 && csrno >= CSR_CYCLEH) {
@@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
ctr_index = csrno - base_csrno;
- if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
+ if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
+ (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
+ goto skip_ext_pmu_check;
+ }
+
+ if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
/* No counter is enabled in PMU or the counter is out of range */
return RISCV_EXCP_ILLEGAL_INST;
}
+skip_ext_pmu_check:
+
if (env->priv == PRV_S) {
switch (csrno) {
case CSR_CYCLE:
@@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
break;
case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
- ctr_index = csrno - CSR_CYCLE;
if (!get_field(env->mcounteren, 1 << ctr_index)) {
return RISCV_EXCP_ILLEGAL_INST;
}
@@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
break;
case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
- ctr_index = csrno - CSR_CYCLEH;
if (!get_field(env->mcounteren, 1 << ctr_index)) {
return RISCV_EXCP_ILLEGAL_INST;
}
@@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
break;
case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
- ctr_index = csrno - CSR_CYCLE;
if (!get_field(env->hcounteren, 1 << ctr_index) &&
get_field(env->mcounteren, 1 << ctr_index)) {
return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
@@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
}
break;
case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
- ctr_index = csrno - CSR_CYCLEH;
if (!get_field(env->hcounteren, 1 << ctr_index) &&
get_field(env->mcounteren, 1 << ctr_index)) {
return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
@@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
return mctr(env, csrno);
}
+static RISCVException sscofpmf(CPURISCVState *env, int csrno)
+{
+ CPUState *cs = env_cpu(env);
+ RISCVCPU *cpu = RISCV_CPU(cs);
+
+ if (!cpu->cfg.ext_sscofpmf) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return RISCV_EXCP_NONE;
+}
+
static RISCVException any(CPURISCVState *env, int csrno)
{
return RISCV_EXCP_NONE;
@@ -663,9 +678,38 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
{
int evt_index = csrno - CSR_MCOUNTINHIBIT;
+ uint64_t mhpmevt_val = val;
env->mhpmevent_val[evt_index] = val;
+ if (riscv_cpu_mxl(env) == MXL_RV32) {
+ mhpmevt_val = mhpmevt_val | ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
+ }
+ riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
+
+ return RISCV_EXCP_NONE;
+}
+
+static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
+{
+ int evt_index = csrno - CSR_MHPMEVENT3H + 3;
+
+ *val = env->mhpmeventh_val[evt_index];
+
+ return RISCV_EXCP_NONE;
+}
+
+static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
+{
+ int evt_index = csrno - CSR_MHPMEVENT3H + 3;
+ uint64_t mhpmevth_val = val;
+ uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
+
+ mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
+ env->mhpmeventh_val[evt_index] = val;
+
+ riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
+
return RISCV_EXCP_NONE;
}
@@ -673,12 +717,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
{
int ctr_idx = csrno - CSR_MCYCLE;
PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+ uint64_t mhpmctr_val = val;
counter->mhpmcounter_val = val;
if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
counter->mhpmcounter_prev = get_ticks(false);
- } else {
+ if (ctr_idx > 2) {
+ if (riscv_cpu_mxl(env) == MXL_RV32) {
+ mhpmctr_val = mhpmctr_val |
+ ((uint64_t)counter->mhpmcounterh_val << 32);
+ }
+ riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
+ }
+ } else {
/* Other counters can keep incrementing from the given value */
counter->mhpmcounter_prev = val;
}
@@ -690,11 +742,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
{
int ctr_idx = csrno - CSR_MCYCLEH;
PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+ uint64_t mhpmctr_val = counter->mhpmcounter_val;
+ uint64_t mhpmctrh_val = val;
counter->mhpmcounterh_val = val;
+ mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
counter->mhpmcounterh_prev = get_ticks(true);
+ if (ctr_idx > 2) {
+ riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
+ }
} else {
counter->mhpmcounterh_prev = val;
}
@@ -770,6 +828,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
return riscv_pmu_read_ctr(env, val, true, ctr_index);
}
+static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
+{
+ int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
+ int i;
+ *val = 0;
+ target_ulong *mhpm_evt_val;
+ uint64_t of_bit_mask;
+
+ if (riscv_cpu_mxl(env) == MXL_RV32) {
+ mhpm_evt_val = env->mhpmeventh_val;
+ of_bit_mask = MHPMEVENTH_BIT_OF;
+ } else {
+ mhpm_evt_val = env->mhpmevent_val;
+ of_bit_mask = MHPMEVENT_BIT_OF;
+ }
+
+ for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
+ if ((get_field(env->mcounteren, BIT(i))) &&
+ (mhpm_evt_val[i] & of_bit_mask)) {
+ *val |= BIT(i);
+ }
+ }
+
+ return RISCV_EXCP_NONE;
+}
+
static RISCVException read_time(CPURISCVState *env, int csrno,
target_ulong *val)
{
@@ -799,7 +883,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
/* Machine constants */
#define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
-#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
+#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
+ MIP_LCOFIP))
#define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
#define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
@@ -840,7 +925,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
-static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
+static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
+ SIP_LCOFIP;
static const target_ulong hip_writable_mask = MIP_VSSIP;
static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
static const target_ulong vsip_writable_mask = MIP_VSSIP;
@@ -4005,6 +4091,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
write_mhpmevent },
+ [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+ [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
+ write_mhpmeventh},
+
[CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_hpmcounterh },
[CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_hpmcounterh },
[CSR_HPMCOUNTER5H] = { "hpmcounter5h", ctr32, read_hpmcounterh },
@@ -4093,5 +4238,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
write_mhpmcounterh },
[CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
write_mhpmcounterh },
+ [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf },
+
#endif /* !CONFIG_USER_ONLY */
};
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index dc182ca81119..33ef9b8e9908 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
vmstate_pmu_ctr_state, PMUCTRState),
VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
+ VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
VMSTATE_UINTTL(env.sscratch, RISCVCPU),
VMSTATE_UINTTL(env.mscratch, RISCVCPU),
VMSTATE_UINT64(env.mfromhost, RISCVCPU),
diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
index 000fe8da45ef..34096941c0ce 100644
--- a/target/riscv/pmu.c
+++ b/target/riscv/pmu.c
@@ -19,14 +19,367 @@
#include "qemu/osdep.h"
#include "cpu.h"
#include "pmu.h"
+#include "sysemu/cpu-timers.h"
+
+#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
+#define MAKE_32BIT_MASK(shift, length) \
+ (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
+
+static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
+{
+ if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
+ !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
+ return false;
+ } else {
+ return true;
+ }
+}
+
+static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
+{
+ CPURISCVState *env = &cpu->env;
+
+ if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
+ !get_field(env->mcountinhibit, BIT(ctr_idx))) {
+ return true;
+ } else {
+ return false;
+ }
+}
+
+static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
+{
+ CPURISCVState *env = &cpu->env;
+ target_ulong max_val = UINT32_MAX;
+ PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+ bool virt_on = riscv_cpu_virt_enabled(env);
+
+ /* Privilege mode filtering */
+ if ((env->priv == PRV_M &&
+ (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
+ (env->priv == PRV_S && virt_on &&
+ (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
+ (env->priv == PRV_U && virt_on &&
+ (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
+ (env->priv == PRV_S && !virt_on &&
+ (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
+ (env->priv == PRV_U && !virt_on &&
+ (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
+ return 0;
+ }
+
+ /* Handle the overflow scenario */
+ if (counter->mhpmcounter_val == max_val) {
+ if (counter->mhpmcounterh_val == max_val) {
+ counter->mhpmcounter_val = 0;
+ counter->mhpmcounterh_val = 0;
+ /* Generate interrupt only if OF bit is clear */
+ if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
+ env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
+ riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
+ }
+ } else {
+ counter->mhpmcounterh_val++;
+ }
+ } else {
+ counter->mhpmcounter_val++;
+ }
+
+ return 0;
+}
+
+static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
+{
+ CPURISCVState *env = &cpu->env;
+ PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+ uint64_t max_val = UINT64_MAX;
+ bool virt_on = riscv_cpu_virt_enabled(env);
+
+ /* Privilege mode filtering */
+ if ((env->priv == PRV_M &&
+ (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
+ (env->priv == PRV_S && virt_on &&
+ (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
+ (env->priv == PRV_U && virt_on &&
+ (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
+ (env->priv == PRV_S && !virt_on &&
+ (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
+ (env->priv == PRV_U && !virt_on &&
+ (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
+ return 0;
+ }
+
+ /* Handle the overflow scenario */
+ if (counter->mhpmcounter_val == max_val) {
+ counter->mhpmcounter_val = 0;
+ /* Generate interrupt only if OF bit is clear */
+ if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
+ env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
+ riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
+ }
+ } else {
+ counter->mhpmcounter_val++;
+ }
+ return 0;
+}
+
+int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
+{
+ uint32_t ctr_idx;
+ int ret;
+ CPURISCVState *env = &cpu->env;
+ gpointer value;
+
+ value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
+ GUINT_TO_POINTER(event_idx));
+ if (!value) {
+ return -1;
+ }
+
+ ctr_idx = GPOINTER_TO_UINT(value);
+ if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
+ get_field(env->mcountinhibit, BIT(ctr_idx))) {
+ return -1;
+ }
+
+ if (riscv_cpu_mxl(env) == MXL_RV32) {
+ ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
+ } else {
+ ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
+ }
+
+ return ret;
+}
bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
uint32_t target_ctr)
{
- return (target_ctr == 0) ? true : false;
+ RISCVCPU *cpu;
+ uint32_t event_idx;
+ uint32_t ctr_idx;
+
+ /* Fixed instret counter */
+ if (target_ctr == 2) {
+ return true;
+ }
+
+ cpu = RISCV_CPU(env_cpu(env));
+ event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
+ ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
+ GUINT_TO_POINTER(event_idx)));
+ if (!ctr_idx) {
+ return false;
+ }
+
+ return target_ctr == ctr_idx ? true : false;
}
bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
{
- return (target_ctr == 2) ? true : false;
+ RISCVCPU *cpu;
+ uint32_t event_idx;
+ uint32_t ctr_idx;
+
+ /* Fixed mcycle counter */
+ if (target_ctr == 0) {
+ return true;
+ }
+
+ cpu = RISCV_CPU(env_cpu(env));
+ event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
+ ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
+ GUINT_TO_POINTER(event_idx)));
+
+ /* Counter zero is not used for event_ctr_map */
+ if (!ctr_idx) {
+ return false;
+ }
+
+ return (target_ctr == ctr_idx) ? true : false;
+}
+
+static gboolean pmu_remove_event_map(gpointer key, gpointer value,
+ gpointer udata)
+{
+ return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
+}
+
+static int64_t pmu_icount_ticks_to_ns(int64_t value)
+{
+ int64_t ret = 0;
+
+ if (icount_enabled()) {
+ ret = icount_to_ns(value);
+ } else {
+ ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
+ }
+
+ return ret;
+}
+
+int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
+ uint32_t ctr_idx)
+{
+ uint32_t event_idx;
+ RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
+
+ if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
+ return -1;
+ }
+
+ /**
+ * Expected mhpmevent value is zero for reset case. Remove the current
+ * mapping.
+ */
+ if (!value) {
+ g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
+ pmu_remove_event_map,
+ GUINT_TO_POINTER(ctr_idx));
+ return 0;
+ }
+
+ event_idx = value & MHPMEVENT_IDX_MASK;
+ if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
+ GUINT_TO_POINTER(event_idx))) {
+ return 0;
+ }
+
+ switch (event_idx) {
+ case RISCV_PMU_EVENT_HW_CPU_CYCLES:
+ case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
+ case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
+ case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
+ case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
+ break;
+ default:
+ /* We don't support any raw events right now */
+ return -1;
+ }
+ g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
+ GUINT_TO_POINTER(ctr_idx));
+
+ return 0;
+}
+
+static void pmu_timer_trigger_irq(RISCVCPU *cpu,
+ enum riscv_pmu_event_idx evt_idx)
+{
+ uint32_t ctr_idx;
+ CPURISCVState *env = &cpu->env;
+ PMUCTRState *counter;
+ target_ulong *mhpmevent_val;
+ uint64_t of_bit_mask;
+ int64_t irq_trigger_at;
+
+ if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
+ evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
+ return;
+ }
+
+ ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
+ GUINT_TO_POINTER(evt_idx)));
+ if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
+ return;
+ }
+
+ if (riscv_cpu_mxl(env) == MXL_RV32) {
+ mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
+ of_bit_mask = MHPMEVENTH_BIT_OF;
+ } else {
+ mhpmevent_val = &env->mhpmevent_val[ctr_idx];
+ of_bit_mask = MHPMEVENT_BIT_OF;
+ }
+
+ counter = &env->pmu_ctrs[ctr_idx];
+ if (counter->irq_overflow_left > 0) {
+ irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
+ counter->irq_overflow_left;
+ timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
+ counter->irq_overflow_left = 0;
+ return;
+ }
+
+ if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
+ /* Generate interrupt only if OF bit is clear */
+ if (!(*mhpmevent_val & of_bit_mask)) {
+ *mhpmevent_val |= of_bit_mask;
+ riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
+ }
+ }
+}
+
+/* Timer callback for instret and cycle counter overflow */
+void riscv_pmu_timer_cb(void *priv)
+{
+ RISCVCPU *cpu = priv;
+
+ /* Timer event was triggered only for these events */
+ pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
+ pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
+}
+
+int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
+{
+ uint64_t overflow_delta, overflow_at;
+ int64_t overflow_ns, overflow_left = 0;
+ RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
+ PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
+
+ if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
+ return -1;
+ }
+
+ if (value) {
+ overflow_delta = UINT64_MAX - value + 1;
+ } else {
+ overflow_delta = UINT64_MAX;
+ }
+
+ /**
+ * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
+ * Compute the leftover and save it so that it can be reprogrammed again
+ * when timer expires.
+ */
+ if (overflow_delta > INT64_MAX) {
+ overflow_left = overflow_delta - INT64_MAX;
+ }
+
+ if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
+ riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
+ overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
+ overflow_left = pmu_icount_ticks_to_ns(overflow_left) ;
+ } else {
+ return -1;
+ }
+ overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
+
+ if (overflow_at > INT64_MAX) {
+ overflow_left += overflow_at - INT64_MAX;
+ counter->irq_overflow_left = overflow_left;
+ overflow_at = INT64_MAX;
+ }
+ timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
+
+ return 0;
+}
+
+
+int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
+{
+ if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
+ return -1;
+ }
+
+ cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
+ if (!cpu->pmu_event_ctr_map) {
+ /* PMU support can not be enabled */
+ qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
+ cpu->cfg.pmu_num = 0;
+ return -1;
+ }
+
+ /* Create a bitmask of available programmable counters */
+ cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
+
+ return 0;
}
diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
index 58a5bc3a4089..036653627f78 100644
--- a/target/riscv/pmu.h
+++ b/target/riscv/pmu.h
@@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
uint32_t target_ctr);
bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
uint32_t target_ctr);
+void riscv_pmu_timer_cb(void *priv);
+int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
+int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
+ uint32_t ctr_idx);
+int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
+int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
+ uint32_t ctr_idx);
--
2.25.1
* [PATCH v10 09/12] target/riscv: Simplify counter predicate function
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (7 preceding siblings ...)
2022-06-20 23:15 ` [PATCH v10 08/12] target/riscv: Add sscofpmf extension support Atish Patra
@ 2022-06-20 23:15 ` Atish Patra
2022-07-04 15:19 ` Weiwei Li
2022-07-14 9:54 ` Heiko Stübner
2022-06-20 23:16 ` [PATCH v10 10/12] target/riscv: Add few cache related PMU events Atish Patra
` (2 subsequent siblings)
11 siblings, 2 replies; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:15 UTC (permalink / raw)
To: qemu-devel
Cc: Bin Meng, Alistair Francis, Atish Patra, Bin Meng,
Palmer Dabbelt, qemu-riscv, frank.chang
All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
as a unified counter. Thus, the predicate function doesn't need to handle
each case separately.
Simplify the predicate function so that we just handle things differently
between RV32/RV64 and S/HS mode.
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/csr.c | 112 +++++----------------------------------------
1 file changed, 11 insertions(+), 101 deletions(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 2664ce265784..9367e2af9b90 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
CPUState *cs = env_cpu(env);
RISCVCPU *cpu = RISCV_CPU(cs);
int ctr_index;
+ target_ulong ctr_mask;
int base_csrno = CSR_CYCLE;
bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
@@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
base_csrno += 0x80;
}
ctr_index = csrno - base_csrno;
+ ctr_mask = BIT(ctr_index);
if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
(csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
goto skip_ext_pmu_check;
}
- if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
+ if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & ctr_mask))) {
/* No counter is enabled in PMU or the counter is out of range */
return RISCV_EXCP_ILLEGAL_INST;
}
skip_ext_pmu_check:
- if (env->priv == PRV_S) {
- switch (csrno) {
- case CSR_CYCLE:
- if (!get_field(env->mcounteren, COUNTEREN_CY)) {
- return RISCV_EXCP_ILLEGAL_INST;
- }
- break;
- case CSR_TIME:
- if (!get_field(env->mcounteren, COUNTEREN_TM)) {
- return RISCV_EXCP_ILLEGAL_INST;
- }
- break;
- case CSR_INSTRET:
- if (!get_field(env->mcounteren, COUNTEREN_IR)) {
- return RISCV_EXCP_ILLEGAL_INST;
- }
- break;
- case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
- if (!get_field(env->mcounteren, 1 << ctr_index)) {
- return RISCV_EXCP_ILLEGAL_INST;
- }
- break;
- }
- if (rv32) {
- switch (csrno) {
- case CSR_CYCLEH:
- if (!get_field(env->mcounteren, COUNTEREN_CY)) {
- return RISCV_EXCP_ILLEGAL_INST;
- }
- break;
- case CSR_TIMEH:
- if (!get_field(env->mcounteren, COUNTEREN_TM)) {
- return RISCV_EXCP_ILLEGAL_INST;
- }
- break;
- case CSR_INSTRETH:
- if (!get_field(env->mcounteren, COUNTEREN_IR)) {
- return RISCV_EXCP_ILLEGAL_INST;
- }
- break;
- case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
- if (!get_field(env->mcounteren, 1 << ctr_index)) {
- return RISCV_EXCP_ILLEGAL_INST;
- }
- break;
- }
- }
+ if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
+ ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
+ return RISCV_EXCP_ILLEGAL_INST;
}
if (riscv_cpu_virt_enabled(env)) {
- switch (csrno) {
- case CSR_CYCLE:
- if (!get_field(env->hcounteren, COUNTEREN_CY) &&
- get_field(env->mcounteren, COUNTEREN_CY)) {
- return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
- }
- break;
- case CSR_TIME:
- if (!get_field(env->hcounteren, COUNTEREN_TM) &&
- get_field(env->mcounteren, COUNTEREN_TM)) {
- return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
- }
- break;
- case CSR_INSTRET:
- if (!get_field(env->hcounteren, COUNTEREN_IR) &&
- get_field(env->mcounteren, COUNTEREN_IR)) {
- return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
- }
- break;
- case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
- if (!get_field(env->hcounteren, 1 << ctr_index) &&
- get_field(env->mcounteren, 1 << ctr_index)) {
- return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
- }
- break;
- }
- if (rv32) {
- switch (csrno) {
- case CSR_CYCLEH:
- if (!get_field(env->hcounteren, COUNTEREN_CY) &&
- get_field(env->mcounteren, COUNTEREN_CY)) {
- return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
- }
- break;
- case CSR_TIMEH:
- if (!get_field(env->hcounteren, COUNTEREN_TM) &&
- get_field(env->mcounteren, COUNTEREN_TM)) {
- return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
- }
- break;
- case CSR_INSTRETH:
- if (!get_field(env->hcounteren, COUNTEREN_IR) &&
- get_field(env->mcounteren, COUNTEREN_IR)) {
- return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
- }
- break;
- case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
- if (!get_field(env->hcounteren, 1 << ctr_index) &&
- get_field(env->mcounteren, 1 << ctr_index)) {
- return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
- }
- break;
- }
+ if (!get_field(env->mcounteren, ctr_mask)) {
+ /* The bit must be set in mcounteren for HS mode access */
+ return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+ } else if (!get_field(env->hcounteren, ctr_mask)) {
+ return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
}
}
#endif
--
2.25.1
* [PATCH v10 10/12] target/riscv: Add few cache related PMU events
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (8 preceding siblings ...)
2022-06-20 23:15 ` [PATCH v10 09/12] target/riscv: Simplify counter predicate function Atish Patra
@ 2022-06-20 23:16 ` Atish Patra
2022-07-14 9:55 ` Heiko Stübner
2022-06-20 23:16 ` [PATCH v10 11/12] hw/riscv: virt: Add PMU DT node to the device tree Atish Patra
2022-06-20 23:16 ` [PATCH v10 12/12] target/riscv: Update the privilege field for sscofpmf CSRs Atish Patra
11 siblings, 1 reply; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:16 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Atish Patra, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang
From: Atish Patra <atish.patra@wdc.com>
QEMU can monitor the following cache-related PMU events through the
tlb_fill functions.
1. DTLB load/store miss
2. ITLB prefetch miss
Increment the PMU counter in the tlb_fill function.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu_helper.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 4a6700c89086..99e944a8c115 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -21,10 +21,12 @@
#include "qemu/log.h"
#include "qemu/main-loop.h"
#include "cpu.h"
+#include "pmu.h"
#include "exec/exec-all.h"
#include "tcg/tcg-op.h"
#include "trace.h"
#include "semihosting/common-semi.h"
+#include "cpu_bits.h"
int riscv_cpu_mmu_index(CPURISCVState *env, bool ifetch)
{
@@ -1180,6 +1182,28 @@ void riscv_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
cpu_loop_exit_restore(cs, retaddr);
}
+
+static void pmu_tlb_fill_incr_ctr(RISCVCPU *cpu, MMUAccessType access_type)
+{
+ enum riscv_pmu_event_idx pmu_event_type;
+
+ switch (access_type) {
+ case MMU_INST_FETCH:
+ pmu_event_type = RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS;
+ break;
+ case MMU_DATA_LOAD:
+ pmu_event_type = RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS;
+ break;
+ case MMU_DATA_STORE:
+ pmu_event_type = RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS;
+ break;
+ default:
+ return;
+ }
+
+ riscv_pmu_incr_ctr(cpu, pmu_event_type);
+}
+
bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
MMUAccessType access_type, int mmu_idx,
bool probe, uintptr_t retaddr)
@@ -1276,6 +1300,7 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
}
}
} else {
+ pmu_tlb_fill_incr_ctr(cpu, access_type);
/* Single stage lookup */
ret = get_physical_address(env, &pa, &prot, address, NULL,
access_type, mmu_idx, true, false, false);
--
2.25.1
* [PATCH v10 11/12] hw/riscv: virt: Add PMU DT node to the device tree
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (9 preceding siblings ...)
2022-06-20 23:16 ` [PATCH v10 10/12] target/riscv: Add few cache related PMU events Atish Patra
@ 2022-06-20 23:16 ` Atish Patra
2022-07-14 10:27 ` Heiko Stübner
2022-06-20 23:16 ` [PATCH v10 12/12] target/riscv: Update the privilege field for sscofpmf CSRs Atish Patra
11 siblings, 1 reply; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:16 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Atish Patra, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang
The QEMU virt machine can support a few cache events and cycle/instret
counters. It also supports counter overflow for these events.
Add a DT node so that OpenSBI and the Linux kernel are aware of the virt
machine's capabilities. Some dummy nodes are added for testing as well.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
hw/riscv/virt.c | 28 +++++++++++++++++++++++
target/riscv/cpu.c | 1 +
target/riscv/pmu.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++
target/riscv/pmu.h | 1 +
4 files changed, 87 insertions(+)
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index bc424dd2f523..0f3fdb4908b8 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -29,6 +29,7 @@
#include "hw/char/serial.h"
#include "target/riscv/cpu.h"
#include "hw/core/sysbus-fdt.h"
+#include "target/riscv/pmu.h"
#include "hw/riscv/riscv_hart.h"
#include "hw/riscv/virt.h"
#include "hw/riscv/boot.h"
@@ -714,6 +715,32 @@ static void create_fdt_socket_aplic(RISCVVirtState *s,
aplic_phandles[socket] = aplic_s_phandle;
}
+static void create_fdt_socket_pmu(RISCVVirtState *s,
+ int socket, uint32_t *phandle,
+ uint32_t *intc_phandles)
+{
+ int cpu;
+ char *pmu_name;
+ uint32_t *pmu_cells;
+ MachineState *mc = MACHINE(s);
+ RISCVCPU hart = s->soc[socket].harts[0];
+
+ pmu_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
+
+ for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
+ pmu_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
+ pmu_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_PMU_OVF);
+ }
+
+ pmu_name = g_strdup_printf("/soc/pmu");
+ qemu_fdt_add_subnode(mc->fdt, pmu_name);
+ qemu_fdt_setprop_string(mc->fdt, pmu_name, "compatible", "riscv,pmu");
+ riscv_pmu_generate_fdt_node(mc->fdt, hart.cfg.pmu_num, pmu_name);
+
+ g_free(pmu_name);
+ g_free(pmu_cells);
+}
+
static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
bool is_32_bit, uint32_t *phandle,
uint32_t *irq_mmio_phandle,
@@ -759,6 +786,7 @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
&intc_phandles[phandle_pos]);
}
}
+ create_fdt_socket_pmu(s, socket, phandle, intc_phandles);
}
if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 7d9e2aca12a9..69bbd9fff4e1 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -1110,6 +1110,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
ISA_EDATA_ENTRY(zve64f, ext_zve64f),
ISA_EDATA_ENTRY(zhinx, ext_zhinx),
ISA_EDATA_ENTRY(zhinxmin, ext_zhinxmin),
+ ISA_EDATA_ENTRY(sscofpmf, ext_sscofpmf),
ISA_EDATA_ENTRY(svinval, ext_svinval),
ISA_EDATA_ENTRY(svnapot, ext_svnapot),
ISA_EDATA_ENTRY(svpbmt, ext_svpbmt),
diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
index 34096941c0ce..59feb3c243dd 100644
--- a/target/riscv/pmu.c
+++ b/target/riscv/pmu.c
@@ -20,11 +20,68 @@
#include "cpu.h"
#include "pmu.h"
#include "sysemu/cpu-timers.h"
+#include "sysemu/device_tree.h"
#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
#define MAKE_32BIT_MASK(shift, length) \
(((uint32_t)(~0UL) >> (32 - (length))) << (shift))
+/**
+ * To keep it simple, any event can be mapped to any programmable counter in
+ * QEMU. The generic cycle & instruction count events can also be monitored
+ * using programmable counters. In that case, mcycle & minstret must continue
+ * to provide the correct value as well. Heterogeneous PMUs per hart are not
+ * supported yet. Thus, the number of counters is the same across all harts.
+ */
+void riscv_pmu_generate_fdt_node(void *fdt, int num_ctrs, char *pmu_name)
+{
+ uint32_t fdt_event_ctr_map[20] = {};
+ uint32_t cmask;
+
+ /* All the programmable counters can map to any event */
+ cmask = MAKE_32BIT_MASK(3, num_ctrs);
+
+ /**
+ * The event encoding is specified in the SBI specification
+ * Event idx is a 20-bit wide number encoded as follows:
+ * event_idx[19:16] = type
+ * event_idx[15:0] = code
+ * The code field in cache events is encoded as follows:
+ * event_idx.code[15:3] = cache_id
+ * event_idx.code[2:1] = op_id
+ * event_idx.code[0:0] = result_id
+ */
+
+ /* SBI_PMU_HW_CPU_CYCLES: 0x01 : type(0x00) */
+ fdt_event_ctr_map[0] = cpu_to_be32(0x00000001);
+ fdt_event_ctr_map[1] = cpu_to_be32(0x00000001);
+ fdt_event_ctr_map[2] = cpu_to_be32(cmask | 1 << 0);
+
+ /* SBI_PMU_HW_INSTRUCTIONS: 0x02 : type(0x00) */
+ fdt_event_ctr_map[3] = cpu_to_be32(0x00000002);
+ fdt_event_ctr_map[4] = cpu_to_be32(0x00000002);
+ fdt_event_ctr_map[5] = cpu_to_be32(cmask | 1 << 2);
+
+ /* SBI_PMU_HW_CACHE_DTLB : 0x03 READ : 0x00 MISS : 0x00 type(0x01) */
+ fdt_event_ctr_map[6] = cpu_to_be32(0x00010019);
+ fdt_event_ctr_map[7] = cpu_to_be32(0x00010019);
+ fdt_event_ctr_map[8] = cpu_to_be32(cmask);
+
+ /* SBI_PMU_HW_CACHE_DTLB : 0x03 WRITE : 0x01 MISS : 0x00 type(0x01) */
+ fdt_event_ctr_map[9] = cpu_to_be32(0x0001001B);
+ fdt_event_ctr_map[10] = cpu_to_be32(0x0001001B);
+ fdt_event_ctr_map[11] = cpu_to_be32(cmask);
+
+ /* SBI_PMU_HW_CACHE_ITLB : 0x04 READ : 0x00 MISS : 0x00 type(0x01) */
+ fdt_event_ctr_map[12] = cpu_to_be32(0x00010021);
+ fdt_event_ctr_map[13] = cpu_to_be32(0x00010021);
+ fdt_event_ctr_map[14] = cpu_to_be32(cmask);
+
+ /* This is an OpenSBI-specific DT property documented in the OpenSBI docs */
+ qemu_fdt_setprop(fdt, pmu_name, "riscv,event-to-mhpmcounters",
+ fdt_event_ctr_map, sizeof(fdt_event_ctr_map));
+}
+
static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
{
if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
index 036653627f78..3004ce37b636 100644
--- a/target/riscv/pmu.h
+++ b/target/riscv/pmu.h
@@ -31,5 +31,6 @@ int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
uint32_t ctr_idx);
int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
+void riscv_pmu_generate_fdt_node(void *fdt, int num_counters, char *pmu_name);
int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
uint32_t ctr_idx);
--
2.25.1
* [PATCH v10 12/12] target/riscv: Update the privilege field for sscofpmf CSRs
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
` (10 preceding siblings ...)
2022-06-20 23:16 ` [PATCH v10 11/12] hw/riscv: virt: Add PMU DT node to the device tree Atish Patra
@ 2022-06-20 23:16 ` Atish Patra
2022-07-14 10:29 ` Heiko Stübner
11 siblings, 1 reply; 34+ messages in thread
From: Atish Patra @ 2022-06-20 23:16 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Atish Patra, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang
The sscofpmf extension was ratified as a part of priv spec v1.12.
Mark the csr_ops accordingly.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/csr.c | 90 ++++++++++++++++++++++++++++++----------------
1 file changed, 60 insertions(+), 30 deletions(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 9367e2af9b90..dabd531e0355 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -4002,63 +4002,92 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
write_mhpmevent },
[CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
- write_mhpmeventh},
+ write_mhpmeventh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
[CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_hpmcounterh },
[CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_hpmcounterh },
@@ -4148,7 +4177,8 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
write_mhpmcounterh },
[CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
write_mhpmcounterh },
- [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf },
+ [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
#endif /* !CONFIG_USER_ONLY */
};
--
2.25.1
* Re: [PATCH v10 09/12] target/riscv: Simplify counter predicate function
2022-06-20 23:15 ` [PATCH v10 09/12] target/riscv: Simplify counter predicate function Atish Patra
@ 2022-07-04 15:19 ` Weiwei Li
2022-07-05 8:00 ` Atish Kumar Patra
2022-07-14 9:54 ` Heiko Stübner
1 sibling, 1 reply; 34+ messages in thread
From: Weiwei Li @ 2022-07-04 15:19 UTC (permalink / raw)
To: Atish Patra, qemu-devel
Cc: Bin Meng, Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv,
frank.chang
On 2022/6/21 7:15 AM, Atish Patra wrote:
> All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
> as a unified counter. Thus, the predicate function doesn't need handle each
> case separately.
>
> Simplify the predicate function so that we just handle things differently
> between RV32/RV64 and S/HS mode.
>
> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> Acked-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/csr.c | 112 +++++----------------------------------------
> 1 file changed, 11 insertions(+), 101 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 2664ce265784..9367e2af9b90 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> CPUState *cs = env_cpu(env);
> RISCVCPU *cpu = RISCV_CPU(cs);
> int ctr_index;
> + target_ulong ctr_mask;
> int base_csrno = CSR_CYCLE;
> bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>
> @@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> base_csrno += 0x80;
> }
> ctr_index = csrno - base_csrno;
> + ctr_mask = BIT(ctr_index);
>
> if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> goto skip_ext_pmu_check;
> }
>
> - if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
> + if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & ctr_mask))) {
> /* No counter is enabled in PMU or the counter is out of range */
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> skip_ext_pmu_check:
>
> - if (env->priv == PRV_S) {
> - switch (csrno) {
> - case CSR_CYCLE:
> - if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_TIME:
> - if (!get_field(env->mcounteren, COUNTEREN_TM)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_INSTRET:
> - if (!get_field(env->mcounteren, COUNTEREN_IR)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - if (!get_field(env->mcounteren, 1 << ctr_index)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - }
> - if (rv32) {
> - switch (csrno) {
> - case CSR_CYCLEH:
> - if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_TIMEH:
> - if (!get_field(env->mcounteren, COUNTEREN_TM)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_INSTRETH:
> - if (!get_field(env->mcounteren, COUNTEREN_IR)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - if (!get_field(env->mcounteren, 1 << ctr_index)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - }
> - }
> + if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
> + ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
> + return RISCV_EXCP_ILLEGAL_INST;
> }
Sorry, I didn't realize this simplification and sent a similar patch to
fix the problems I found in the Xcounteren-related checks when I studied
this patchset for the state enable extension two days ago.
I think there are several differences between our understandings;
the following are my modifications:
+ if (csrno <= CSR_HPMCOUNTER31 && csrno >= CSR_CYCLE) {
+ field = 1 << (csrno - CSR_CYCLE);
+ } else if (riscv_cpu_mxl(env) == MXL_RV32 && csrno <= CSR_HPMCOUNTER31H &&
+ csrno >= CSR_CYCLEH) {
+ field = 1 << (csrno - CSR_CYCLEH);
+ }
+
+ if (env->priv < PRV_M && !get_field(env->mcounteren, field)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ if (riscv_cpu_virt_enabled(env) && !get_field(env->hcounteren, field)) {
+ return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+ }
+
+ if (riscv_has_ext(env, RVS) && env->priv == PRV_U &&
+ !get_field(env->scounteren, field)) {
+ if (riscv_cpu_virt_enabled(env)) {
+ return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+ } else {
+ return RISCV_EXCP_ILLEGAL_INST;
}
}
1) For any less-privileged mode under M, an illegal instruction exception is
raised if the matching bit in mcounteren is zero.
2) For VS/VU mode (the 'H' extension is implicitly supported), a virtual
instruction exception is raised if the matching bit in hcounteren is zero.
3) The scounteren CSR only takes effect in U/VU mode when the 'S' extension
is supported:
For U mode, an illegal instruction exception is raised if the matching bit
in scounteren is zero.
For VU mode, a virtual instruction exception is raised if the matching bit
in scounteren is zero.
Regards,
Weiwei Li
>
> if (riscv_cpu_virt_enabled(env)) {
> - switch (csrno) {
> - case CSR_CYCLE:
> - if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> - get_field(env->mcounteren, COUNTEREN_CY)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_TIME:
> - if (!get_field(env->hcounteren, COUNTEREN_TM) &&
> - get_field(env->mcounteren, COUNTEREN_TM)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_INSTRET:
> - if (!get_field(env->hcounteren, COUNTEREN_IR) &&
> - get_field(env->mcounteren, COUNTEREN_IR)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - if (!get_field(env->hcounteren, 1 << ctr_index) &&
> - get_field(env->mcounteren, 1 << ctr_index)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - }
> - if (rv32) {
> - switch (csrno) {
> - case CSR_CYCLEH:
> - if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> - get_field(env->mcounteren, COUNTEREN_CY)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_TIMEH:
> - if (!get_field(env->hcounteren, COUNTEREN_TM) &&
> - get_field(env->mcounteren, COUNTEREN_TM)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_INSTRETH:
> - if (!get_field(env->hcounteren, COUNTEREN_IR) &&
> - get_field(env->mcounteren, COUNTEREN_IR)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - if (!get_field(env->hcounteren, 1 << ctr_index) &&
> - get_field(env->mcounteren, 1 << ctr_index)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - }
> + if (!get_field(env->mcounteren, ctr_mask)) {
> + /* The bit must be set in mcountern for HS mode access */
> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> + } else if (!get_field(env->hcounteren, ctr_mask)) {
> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> }
> }
> #endif
* Re: [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable
2022-06-20 23:15 ` [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable Atish Patra
@ 2022-07-04 15:26 ` Weiwei Li
2022-07-05 0:38 ` Weiwei Li
0 siblings, 1 reply; 34+ messages in thread
From: Weiwei Li @ 2022-07-04 15:26 UTC (permalink / raw)
To: Atish Patra, qemu-devel
Cc: Bin Meng, Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv,
frank.chang
On 2022/6/21 7:15 AM, Atish Patra wrote:
> The RISC-V privilege specification provides flexibility to implement
> any number of counters from 29 programmable counters. However, the QEMU
> implements all the counters.
>
> Make it configurable through pmu config parameter which now will indicate
> how many programmable counters should be implemented by the cpu.
>
> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.c | 3 +-
> target/riscv/cpu.h | 2 +-
> target/riscv/csr.c | 94 ++++++++++++++++++++++++++++++----------------
> 3 files changed, 63 insertions(+), 36 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 1b57b3c43980..d12c6dc630ca 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -851,7 +851,6 @@ static void riscv_cpu_init(Object *obj)
> {
> RISCVCPU *cpu = RISCV_CPU(obj);
>
> - cpu->cfg.ext_pmu = true;
> cpu->cfg.ext_ifencei = true;
> cpu->cfg.ext_icsr = true;
> cpu->cfg.mmu = true;
> @@ -879,7 +878,7 @@ static Property riscv_cpu_extensions[] = {
> DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> - DEFINE_PROP_BOOL("pmu", RISCVCPU, cfg.ext_pmu, true),
> + DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
I think it's better to add a check that cfg.pmu_num is <= 29.
> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 252c30a55d78..ffee54ea5c27 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -397,7 +397,6 @@ struct RISCVCPUConfig {
> bool ext_zksed;
> bool ext_zksh;
> bool ext_zkt;
> - bool ext_pmu;
> bool ext_ifencei;
> bool ext_icsr;
> bool ext_svinval;
> @@ -421,6 +420,7 @@ struct RISCVCPUConfig {
> /* Vendor-specific custom extensions */
> bool ext_XVentanaCondOps;
>
> + uint8_t pmu_num;
> char *priv_spec;
> char *user_spec;
> char *bext_spec;
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 0ca05c77883c..b4a8e15f498f 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -73,9 +73,17 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> CPUState *cs = env_cpu(env);
> RISCVCPU *cpu = RISCV_CPU(cs);
> int ctr_index;
> + int base_csrno = CSR_HPMCOUNTER3;
> + bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>
> - if (!cpu->cfg.ext_pmu) {
> - /* The PMU extension is not enabled */
> + if (rv32 && csrno >= CSR_CYCLEH) {
> + /* Offset for RV32 hpmcounternh counters */
> + base_csrno += 0x80;
> + }
> + ctr_index = csrno - base_csrno;
> +
> + if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> + /* No counter is enabled in PMU or the counter is out of range */
It seems unnecessary to add '!cpu->cfg.pmu_num' here: 'ctr_index >=
(cpu->cfg.pmu_num)' is already true when cpu->cfg.pmu_num is zero, once
the problem with base_csrno is fixed.
Regards,
Weiwei Li
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> @@ -103,7 +111,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> }
> - if (riscv_cpu_mxl(env) == MXL_RV32) {
> + if (rv32) {
> switch (csrno) {
> case CSR_CYCLEH:
> if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> @@ -158,7 +166,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> }
> - if (riscv_cpu_mxl(env) == MXL_RV32) {
> + if (rv32) {
> switch (csrno) {
> case CSR_CYCLEH:
> if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> @@ -202,6 +210,26 @@ static RISCVException ctr32(CPURISCVState *env, int csrno)
> }
>
> #if !defined(CONFIG_USER_ONLY)
> +static RISCVException mctr(CPURISCVState *env, int csrno)
> +{
> + CPUState *cs = env_cpu(env);
> + RISCVCPU *cpu = RISCV_CPU(cs);
> + int ctr_index;
> + int base_csrno = CSR_MHPMCOUNTER3;
> +
> + if ((riscv_cpu_mxl(env) == MXL_RV32) && csrno >= CSR_MCYCLEH) {
> + /* Offset for RV32 mhpmcounternh counters */
> + base_csrno += 0x80;
> + }
> + ctr_index = csrno - base_csrno;
> + if (!cpu->cfg.pmu_num || ctr_index >= cpu->cfg.pmu_num) {
> +    if (!cpu->cfg.pmu_num || ctr_index >= cpu->cfg.pmu_num) {
> +        /* The PMU is not enabled or counter is out of range */
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException any(CPURISCVState *env, int csrno)
> {
> return RISCV_EXCP_NONE;
> @@ -3687,35 +3715,35 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_HPMCOUNTER30] = { "hpmcounter30", ctr, read_zero },
> [CSR_HPMCOUNTER31] = { "hpmcounter31", ctr, read_zero },
>
> - [CSR_MHPMCOUNTER3] = { "mhpmcounter3", any, read_zero },
> - [CSR_MHPMCOUNTER4] = { "mhpmcounter4", any, read_zero },
> - [CSR_MHPMCOUNTER5] = { "mhpmcounter5", any, read_zero },
> - [CSR_MHPMCOUNTER6] = { "mhpmcounter6", any, read_zero },
> - [CSR_MHPMCOUNTER7] = { "mhpmcounter7", any, read_zero },
> - [CSR_MHPMCOUNTER8] = { "mhpmcounter8", any, read_zero },
> - [CSR_MHPMCOUNTER9] = { "mhpmcounter9", any, read_zero },
> - [CSR_MHPMCOUNTER10] = { "mhpmcounter10", any, read_zero },
> - [CSR_MHPMCOUNTER11] = { "mhpmcounter11", any, read_zero },
> - [CSR_MHPMCOUNTER12] = { "mhpmcounter12", any, read_zero },
> - [CSR_MHPMCOUNTER13] = { "mhpmcounter13", any, read_zero },
> - [CSR_MHPMCOUNTER14] = { "mhpmcounter14", any, read_zero },
> - [CSR_MHPMCOUNTER15] = { "mhpmcounter15", any, read_zero },
> - [CSR_MHPMCOUNTER16] = { "mhpmcounter16", any, read_zero },
> - [CSR_MHPMCOUNTER17] = { "mhpmcounter17", any, read_zero },
> - [CSR_MHPMCOUNTER18] = { "mhpmcounter18", any, read_zero },
> - [CSR_MHPMCOUNTER19] = { "mhpmcounter19", any, read_zero },
> - [CSR_MHPMCOUNTER20] = { "mhpmcounter20", any, read_zero },
> - [CSR_MHPMCOUNTER21] = { "mhpmcounter21", any, read_zero },
> - [CSR_MHPMCOUNTER22] = { "mhpmcounter22", any, read_zero },
> - [CSR_MHPMCOUNTER23] = { "mhpmcounter23", any, read_zero },
> - [CSR_MHPMCOUNTER24] = { "mhpmcounter24", any, read_zero },
> - [CSR_MHPMCOUNTER25] = { "mhpmcounter25", any, read_zero },
> - [CSR_MHPMCOUNTER26] = { "mhpmcounter26", any, read_zero },
> - [CSR_MHPMCOUNTER27] = { "mhpmcounter27", any, read_zero },
> - [CSR_MHPMCOUNTER28] = { "mhpmcounter28", any, read_zero },
> - [CSR_MHPMCOUNTER29] = { "mhpmcounter29", any, read_zero },
> - [CSR_MHPMCOUNTER30] = { "mhpmcounter30", any, read_zero },
> - [CSR_MHPMCOUNTER31] = { "mhpmcounter31", any, read_zero },
> + [CSR_MHPMCOUNTER3] = { "mhpmcounter3", mctr, read_zero },
> + [CSR_MHPMCOUNTER4] = { "mhpmcounter4", mctr, read_zero },
> + [CSR_MHPMCOUNTER5] = { "mhpmcounter5", mctr, read_zero },
> + [CSR_MHPMCOUNTER6] = { "mhpmcounter6", mctr, read_zero },
> + [CSR_MHPMCOUNTER7] = { "mhpmcounter7", mctr, read_zero },
> + [CSR_MHPMCOUNTER8] = { "mhpmcounter8", mctr, read_zero },
> + [CSR_MHPMCOUNTER9] = { "mhpmcounter9", mctr, read_zero },
> + [CSR_MHPMCOUNTER10] = { "mhpmcounter10", mctr, read_zero },
> + [CSR_MHPMCOUNTER11] = { "mhpmcounter11", mctr, read_zero },
> + [CSR_MHPMCOUNTER12] = { "mhpmcounter12", mctr, read_zero },
> + [CSR_MHPMCOUNTER13] = { "mhpmcounter13", mctr, read_zero },
> + [CSR_MHPMCOUNTER14] = { "mhpmcounter14", mctr, read_zero },
> + [CSR_MHPMCOUNTER15] = { "mhpmcounter15", mctr, read_zero },
> + [CSR_MHPMCOUNTER16] = { "mhpmcounter16", mctr, read_zero },
> + [CSR_MHPMCOUNTER17] = { "mhpmcounter17", mctr, read_zero },
> + [CSR_MHPMCOUNTER18] = { "mhpmcounter18", mctr, read_zero },
> + [CSR_MHPMCOUNTER19] = { "mhpmcounter19", mctr, read_zero },
> + [CSR_MHPMCOUNTER20] = { "mhpmcounter20", mctr, read_zero },
> + [CSR_MHPMCOUNTER21] = { "mhpmcounter21", mctr, read_zero },
> + [CSR_MHPMCOUNTER22] = { "mhpmcounter22", mctr, read_zero },
> + [CSR_MHPMCOUNTER23] = { "mhpmcounter23", mctr, read_zero },
> + [CSR_MHPMCOUNTER24] = { "mhpmcounter24", mctr, read_zero },
> + [CSR_MHPMCOUNTER25] = { "mhpmcounter25", mctr, read_zero },
> + [CSR_MHPMCOUNTER26] = { "mhpmcounter26", mctr, read_zero },
> + [CSR_MHPMCOUNTER27] = { "mhpmcounter27", mctr, read_zero },
> + [CSR_MHPMCOUNTER28] = { "mhpmcounter28", mctr, read_zero },
> + [CSR_MHPMCOUNTER29] = { "mhpmcounter29", mctr, read_zero },
> + [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
> + [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
>
> [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
> [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 05/12] target/riscv: Implement mcountinhibit CSR
2022-06-20 23:15 ` [PATCH v10 05/12] target/riscv: Implement mcountinhibit CSR Atish Patra
@ 2022-07-04 15:31 ` Weiwei Li
2022-07-05 7:47 ` Atish Kumar Patra
0 siblings, 1 reply; 34+ messages in thread
From: Weiwei Li @ 2022-07-04 15:31 UTC (permalink / raw)
To: Atish Patra, qemu-devel
Cc: Bin Meng, Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv,
frank.chang
On 2022/6/21 7:15 AM, Atish Patra wrote:
> From: Atish Patra <atish.patra@wdc.com>
>
> As per the privilege specification v1.11, mcountinhibit allows
> selectively starting/stopping a PMU counter.
>
> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.h | 2 ++
> target/riscv/cpu_bits.h | 4 ++++
> target/riscv/csr.c | 25 +++++++++++++++++++++++++
> target/riscv/machine.c | 1 +
> 4 files changed, 32 insertions(+)
>
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index ffee54ea5c27..0a916db9f614 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -275,6 +275,8 @@ struct CPUArchState {
> target_ulong scounteren;
> target_ulong mcounteren;
>
> + target_ulong mcountinhibit;
> +
> target_ulong sscratch;
> target_ulong mscratch;
>
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index 4d04b20d064e..b3f7fa713000 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -367,6 +367,10 @@
> #define CSR_MHPMCOUNTER29 0xb1d
> #define CSR_MHPMCOUNTER30 0xb1e
> #define CSR_MHPMCOUNTER31 0xb1f
> +
> +/* Machine counter-inhibit register */
> +#define CSR_MCOUNTINHIBIT 0x320
> +
> #define CSR_MHPMEVENT3 0x323
> #define CSR_MHPMEVENT4 0x324
> #define CSR_MHPMEVENT5 0x325
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index b4a8e15f498f..94d39a4ce1c5 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -1475,6 +1475,28 @@ static RISCVException write_mtvec(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> +static RISCVException read_mcountinhibit(CPURISCVState *env, int csrno,
> + target_ulong *val)
> +{
> + if (env->priv_ver < PRIV_VERSION_1_11_0) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
This could instead be done by adding .min_priv_ver = PRIV_VERSION_1_11_0
to the csr_ops table entry.
Regards,
Weiwei Li
> + *val = env->mcountinhibit;
> + return RISCV_EXCP_NONE;
> +}
> +
> +static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
> + target_ulong val)
> +{
> + if (env->priv_ver < PRIV_VERSION_1_11_0) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + env->mcountinhibit = val;
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
> target_ulong *val)
> {
> @@ -3745,6 +3767,9 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
> [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
>
> + [CSR_MCOUNTINHIBIT] = { "mcountinhibit", any, read_mcountinhibit,
> + write_mcountinhibit },
> +
> [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
> [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
> [CSR_MHPMEVENT5] = { "mhpmevent5", any, read_zero },
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index 2a437b29a1ce..87cd55bfd3a7 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -330,6 +330,7 @@ const VMStateDescription vmstate_riscv_cpu = {
> VMSTATE_UINTTL(env.siselect, RISCVCPU),
> VMSTATE_UINTTL(env.scounteren, RISCVCPU),
> VMSTATE_UINTTL(env.mcounteren, RISCVCPU),
> + VMSTATE_UINTTL(env.mcountinhibit, RISCVCPU),
> VMSTATE_UINTTL(env.sscratch, RISCVCPU),
> VMSTATE_UINTTL(env.mscratch, RISCVCPU),
> VMSTATE_UINT64(env.mfromhost, RISCVCPU),
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 08/12] target/riscv: Add sscofpmf extension support
2022-06-20 23:15 ` [PATCH v10 08/12] target/riscv: Add sscofpmf extension support Atish Patra
@ 2022-07-05 0:31 ` Weiwei Li
2022-07-05 1:30 ` Weiwei Li
2022-07-14 9:53 ` Heiko Stübner
2 siblings, 0 replies; 34+ messages in thread
From: Weiwei Li @ 2022-07-05 0:31 UTC (permalink / raw)
To: Atish Patra, qemu-devel
Cc: Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv, frank.chang
On 2022/6/21 7:15 AM, Atish Patra wrote:
> The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> extension allows the perf to handle overflow interrupts and filtering
> support. This patch provides a framework for programmable
> counters to leverage the extension. As the extension doesn't have any
> provision for the overflow bit for fixed counters, the fixed events
> can also be monitored using programmable counters. The underlying
> counters for cycle and instruction counters are always running. Thus,
> a separate timer device is programmed to handle the overflow.
>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.c | 11 ++
> target/riscv/cpu.h | 25 +++
> target/riscv/cpu_bits.h | 55 +++++++
> target/riscv/csr.c | 165 ++++++++++++++++++-
> target/riscv/machine.c | 1 +
> target/riscv/pmu.c | 357 +++++++++++++++++++++++++++++++++++++++-
> target/riscv/pmu.h | 7 +
> 7 files changed, 610 insertions(+), 11 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index d12c6dc630ca..7d9e2aca12a9 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -22,6 +22,7 @@
> #include "qemu/ctype.h"
> #include "qemu/log.h"
> #include "cpu.h"
> +#include "pmu.h"
> #include "internals.h"
> #include "exec/exec-all.h"
> #include "qapi/error.h"
> @@ -775,6 +776,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> set_misa(env, env->misa_mxl, ext);
> }
>
> +#ifndef CONFIG_USER_ONLY
> + if (cpu->cfg.pmu_num) {
> + if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
> + cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> + riscv_pmu_timer_cb, cpu);
> + }
> + }
> +#endif
> +
> riscv_cpu_register_gdb_regs_for_features(cs);
>
> qemu_init_vcpu(cs);
> @@ -879,6 +889,7 @@ static Property riscv_cpu_extensions[] = {
> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> + DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 5c7acc055ac9..2222db193c3d 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
>   /* Snapshot value of a counter in RV32 */
> target_ulong mhpmcounterh_prev;
> bool started;
> + /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
> + target_ulong irq_overflow_left;
> } PMUCTRState;
>
> struct CPUArchState {
> @@ -297,6 +299,9 @@ struct CPUArchState {
> /* PMU event selector configured values. First three are unused*/
> target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
>
> + /* PMU event selector configured values for RV32*/
> + target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> +
> target_ulong sscratch;
> target_ulong mscratch;
>
> @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
> bool ext_zve32f;
> bool ext_zve64f;
> bool ext_zmmul;
> + bool ext_sscofpmf;
> bool rvv_ta_all_1s;
>
> uint32_t mvendorid;
> @@ -479,6 +485,12 @@ struct ArchCPU {
>
> /* Configuration Settings */
> RISCVCPUConfig cfg;
> +
> + QEMUTimer *pmu_timer;
> + /* A bitmask of Available programmable counters */
> + uint32_t pmu_avail_ctrs;
> + /* Mapping of events to counters */
> + GHashTable *pmu_event_ctr_map;
> };
>
> static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> @@ -738,6 +750,19 @@ enum {
> CSR_TABLE_SIZE = 0x1000
> };
>
> +/**
> + * The event ids are encoded based on the encoding specified in the
> + * SBI specification v0.3
> + */
> +
> +enum riscv_pmu_event_idx {
> + RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> + RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> + RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> + RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> + RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> +};
> +
> /* CSR function table */
> extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
>
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index b3f7fa713000..d94abefdaa0f 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -400,6 +400,37 @@
> #define CSR_MHPMEVENT29 0x33d
> #define CSR_MHPMEVENT30 0x33e
> #define CSR_MHPMEVENT31 0x33f
> +
> +#define CSR_MHPMEVENT3H 0x723
> +#define CSR_MHPMEVENT4H 0x724
> +#define CSR_MHPMEVENT5H 0x725
> +#define CSR_MHPMEVENT6H 0x726
> +#define CSR_MHPMEVENT7H 0x727
> +#define CSR_MHPMEVENT8H 0x728
> +#define CSR_MHPMEVENT9H 0x729
> +#define CSR_MHPMEVENT10H 0x72a
> +#define CSR_MHPMEVENT11H 0x72b
> +#define CSR_MHPMEVENT12H 0x72c
> +#define CSR_MHPMEVENT13H 0x72d
> +#define CSR_MHPMEVENT14H 0x72e
> +#define CSR_MHPMEVENT15H 0x72f
> +#define CSR_MHPMEVENT16H 0x730
> +#define CSR_MHPMEVENT17H 0x731
> +#define CSR_MHPMEVENT18H 0x732
> +#define CSR_MHPMEVENT19H 0x733
> +#define CSR_MHPMEVENT20H 0x734
> +#define CSR_MHPMEVENT21H 0x735
> +#define CSR_MHPMEVENT22H 0x736
> +#define CSR_MHPMEVENT23H 0x737
> +#define CSR_MHPMEVENT24H 0x738
> +#define CSR_MHPMEVENT25H 0x739
> +#define CSR_MHPMEVENT26H 0x73a
> +#define CSR_MHPMEVENT27H 0x73b
> +#define CSR_MHPMEVENT28H 0x73c
> +#define CSR_MHPMEVENT29H 0x73d
> +#define CSR_MHPMEVENT30H 0x73e
> +#define CSR_MHPMEVENT31H 0x73f
> +
> #define CSR_MHPMCOUNTER3H 0xb83
> #define CSR_MHPMCOUNTER4H 0xb84
> #define CSR_MHPMCOUNTER5H 0xb85
> @@ -461,6 +492,7 @@
> #define CSR_VSMTE 0x2c0
> #define CSR_VSPMMASK 0x2c1
> #define CSR_VSPMBASE 0x2c2
> +#define CSR_SCOUNTOVF 0xda0
>
> /* Crypto Extension */
> #define CSR_SEED 0x015
> @@ -638,6 +670,7 @@ typedef enum RISCVException {
> #define IRQ_VS_EXT 10
> #define IRQ_M_EXT 11
> #define IRQ_S_GEXT 12
> +#define IRQ_PMU_OVF 13
> #define IRQ_LOCAL_MAX 16
> #define IRQ_LOCAL_GUEST_MAX (TARGET_LONG_BITS - 1)
>
> @@ -655,11 +688,13 @@ typedef enum RISCVException {
> #define MIP_VSEIP (1 << IRQ_VS_EXT)
> #define MIP_MEIP (1 << IRQ_M_EXT)
> #define MIP_SGEIP (1 << IRQ_S_GEXT)
> +#define MIP_LCOFIP (1 << IRQ_PMU_OVF)
>
> /* sip masks */
> #define SIP_SSIP MIP_SSIP
> #define SIP_STIP MIP_STIP
> #define SIP_SEIP MIP_SEIP
> +#define SIP_LCOFIP MIP_LCOFIP
>
> /* MIE masks */
> #define MIE_SEIE (1 << IRQ_S_EXT)
> @@ -813,4 +848,24 @@ typedef enum RISCVException {
> #define SEED_OPST_WAIT (0b01 << 30)
> #define SEED_OPST_ES16 (0b10 << 30)
> #define SEED_OPST_DEAD (0b11 << 30)
> +/* PMU related bits */
> +#define MIE_LCOFIE (1 << IRQ_PMU_OVF)
> +
> +#define MHPMEVENT_BIT_OF BIT_ULL(63)
> +#define MHPMEVENTH_BIT_OF BIT(31)
> +#define MHPMEVENT_BIT_MINH BIT_ULL(62)
> +#define MHPMEVENTH_BIT_MINH BIT(30)
> +#define MHPMEVENT_BIT_SINH BIT_ULL(61)
> +#define MHPMEVENTH_BIT_SINH BIT(29)
> +#define MHPMEVENT_BIT_UINH BIT_ULL(60)
> +#define MHPMEVENTH_BIT_UINH BIT(28)
> +#define MHPMEVENT_BIT_VSINH BIT_ULL(59)
> +#define MHPMEVENTH_BIT_VSINH BIT(27)
> +#define MHPMEVENT_BIT_VUINH BIT_ULL(58)
> +#define MHPMEVENTH_BIT_VUINH BIT(26)
> +
> +#define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
> +#define MHPMEVENT_IDX_MASK 0xFFFFF
> +#define MHPMEVENT_SSCOF_RESVD 16
> +
> #endif
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index d65318dcc62d..2664ce265784 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> CPUState *cs = env_cpu(env);
> RISCVCPU *cpu = RISCV_CPU(cs);
> int ctr_index;
> - int base_csrno = CSR_HPMCOUNTER3;
> + int base_csrno = CSR_CYCLE;
> bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>
> if (rv32 && csrno >= CSR_CYCLEH) {
> @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> ctr_index = csrno - base_csrno;
>
> - if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> + if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> + (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> + goto skip_ext_pmu_check;
> + }
> +
> + if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
> /* No counter is enabled in PMU or the counter is out of range */
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> +skip_ext_pmu_check:
> +
> if (env->priv == PRV_S) {
> switch (csrno) {
> case CSR_CYCLE:
> @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - ctr_index = csrno - CSR_CYCLE;
> if (!get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
> @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - ctr_index = csrno - CSR_CYCLEH;
> if (!get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
> @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - ctr_index = csrno - CSR_CYCLE;
> if (!get_field(env->hcounteren, 1 << ctr_index) &&
> get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - ctr_index = csrno - CSR_CYCLEH;
> if (!get_field(env->hcounteren, 1 << ctr_index) &&
> get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
> return mctr(env, csrno);
> }
>
> +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> +{
> + CPUState *cs = env_cpu(env);
> + RISCVCPU *cpu = RISCV_CPU(cs);
> +
> + if (!cpu->cfg.ext_sscofpmf) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException any(CPURISCVState *env, int csrno)
> {
> return RISCV_EXCP_NONE;
> @@ -663,9 +678,38 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
> static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
> {
> int evt_index = csrno - CSR_MCOUNTINHIBIT;
> + uint64_t mhpmevt_val = val;
>
> env->mhpmevent_val[evt_index] = val;
>
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmevt_val = mhpmevt_val | ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> + }
> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> + int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +
> + *val = env->mhpmeventh_val[evt_index];
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
> +{
> + int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> + uint64_t mhpmevth_val = val;
> + uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> +
> + mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> + env->mhpmeventh_val[evt_index] = val;
> +
> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> return RISCV_EXCP_NONE;
> }
>
> @@ -673,12 +717,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
> {
> int ctr_idx = csrno - CSR_MCYCLE;
> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t mhpmctr_val = val;
>
> counter->mhpmcounter_val = val;
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> counter->mhpmcounter_prev = get_ticks(false);
> - } else {
> + if (ctr_idx > 2) {
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmctr_val = mhpmctr_val |
> + ((uint64_t)counter->mhpmcounterh_val << 32);
> + }
> + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> + }
> + } else {
> /* Other counters can keep incrementing from the given value */
> counter->mhpmcounter_prev = val;
> }
> @@ -690,11 +742,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
> {
> int ctr_idx = csrno - CSR_MCYCLEH;
> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t mhpmctr_val = counter->mhpmcounter_val;
> + uint64_t mhpmctrh_val = val;
>
> counter->mhpmcounterh_val = val;
> + mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> counter->mhpmcounterh_prev = get_ticks(true);
> + if (ctr_idx > 2) {
> + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> + }
> } else {
> counter->mhpmcounterh_prev = val;
> }
> @@ -770,6 +828,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
> return riscv_pmu_read_ctr(env, val, true, ctr_index);
> }
>
> +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> + int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
> + int i;
> + *val = 0;
> + target_ulong *mhpm_evt_val;
> + uint64_t of_bit_mask;
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpm_evt_val = env->mhpmeventh_val;
> + of_bit_mask = MHPMEVENTH_BIT_OF;
> + } else {
> + mhpm_evt_val = env->mhpmevent_val;
> + of_bit_mask = MHPMEVENT_BIT_OF;
> + }
> +
> + for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
> + if ((get_field(env->mcounteren, BIT(i))) &&
> + (mhpm_evt_val[i] & of_bit_mask)) {
> + *val |= BIT(i);
> + }
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException read_time(CPURISCVState *env, int csrno,
> target_ulong *val)
> {
> @@ -799,7 +883,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
> /* Machine constants */
>
> #define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> -#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> +#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
> + MIP_LCOFIP))
> #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
> #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
>
> @@ -840,7 +925,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
> static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
> SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
> SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
> +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
> + SIP_LCOFIP;
> static const target_ulong hip_writable_mask = MIP_VSSIP;
> static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
> static const target_ulong vsip_writable_mask = MIP_VSSIP;
> @@ -4005,6 +4091,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
> write_mhpmevent },
>
> + [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> +
> [CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_hpmcounterh },
> [CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_hpmcounterh },
> [CSR_HPMCOUNTER5H] = { "hpmcounter5h", ctr32, read_hpmcounterh },
> @@ -4093,5 +4238,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> write_mhpmcounterh },
> [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
> write_mhpmcounterh },
> + [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf },
> +
> #endif /* !CONFIG_USER_ONLY */
> };
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index dc182ca81119..33ef9b8e9908 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
> VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
> vmstate_pmu_ctr_state, PMUCTRState),
> VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> + VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> VMSTATE_UINTTL(env.sscratch, RISCVCPU),
> VMSTATE_UINTTL(env.mscratch, RISCVCPU),
> VMSTATE_UINT64(env.mfromhost, RISCVCPU),
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 000fe8da45ef..34096941c0ce 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -19,14 +19,367 @@
> #include "qemu/osdep.h"
> #include "cpu.h"
> #include "pmu.h"
> +#include "sysemu/cpu-timers.h"
> +
> +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> +#define MAKE_32BIT_MASK(shift, length) \
> + (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> +
> +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> + !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
> + return false;
> + } else {
> + return true;
> + }
> +}
> +
> +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> +
> + if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
> + !get_field(env->mcountinhibit, BIT(ctr_idx))) {
> + return true;
> + } else {
> + return false;
> + }
> +}
> +
> +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> + target_ulong max_val = UINT32_MAX;
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + bool virt_on = riscv_cpu_virt_enabled(env);
> +
> + /* Privilege mode filtering */
> + if ((env->priv == PRV_M &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
> + (env->priv == PRV_S && virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
> + (env->priv == PRV_U && virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
> + (env->priv == PRV_S && !virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
> + (env->priv == PRV_U && !virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
> + return 0;
> + }
> +
> + /* Handle the overflow scenario */
> + if (counter->mhpmcounter_val == max_val) {
> + if (counter->mhpmcounterh_val == max_val) {
> + counter->mhpmcounter_val = 0;
> + counter->mhpmcounterh_val = 0;
> + /* Generate interrupt only if OF bit is clear */
> + if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
> + env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + } else {
> + counter->mhpmcounterh_val++;
> + }
> + } else {
> + counter->mhpmcounter_val++;
> + }
> +
> + return 0;
> +}
> +
> +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t max_val = UINT64_MAX;
> + bool virt_on = riscv_cpu_virt_enabled(env);
> +
> + /* Privilege mode filtering */
> + if ((env->priv == PRV_M &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
> + (env->priv == PRV_S && virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
> + (env->priv == PRV_U && virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
> + (env->priv == PRV_S && !virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
> + (env->priv == PRV_U && !virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
> + return 0;
> + }
> +
> + /* Handle the overflow scenario */
> + if (counter->mhpmcounter_val == max_val) {
> + counter->mhpmcounter_val = 0;
> + /* Generate interrupt only if OF bit is clear */
> + if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
> + env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + } else {
> + counter->mhpmcounter_val++;
> + }
> + return 0;
> +}
> +
Why not use uint64_t mhpmevent_val and mhpmcounter_val to store the
full register for both RV32 and RV64? Then only the read/write
functions for the mhpmeventh/mhpmcounterh CSRs would be affected, and
the rest of the code could treat RV32 and RV64 identically.
Regards,
Weiwei Li
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
> +{
> + uint32_t ctr_idx;
> + int ret;
> + CPURISCVState *env = &cpu->env;
> + gpointer value;
> +
> + value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx));
> + if (!value) {
> + return -1;
> + }
> +
> + ctr_idx = GPOINTER_TO_UINT(value);
> + if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
> + get_field(env->mcountinhibit, BIT(ctr_idx))) {
> + return -1;
> + }
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
> + } else {
> + ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
> + }
> +
> + return ret;
> +}
>
> bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> uint32_t target_ctr)
> {
> - return (target_ctr == 0) ? true : false;
> + RISCVCPU *cpu;
> + uint32_t event_idx;
> + uint32_t ctr_idx;
> +
> + /* Fixed instret counter */
> + if (target_ctr == 2) {
> + return true;
> + }
> +
> + cpu = RISCV_CPU(env_cpu(env));
> + event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx)));
> + if (!ctr_idx) {
> + return false;
> + }
> +
> + return target_ctr == ctr_idx ? true : false;
> }
>
> bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
> {
> - return (target_ctr == 2) ? true : false;
> + RISCVCPU *cpu;
> + uint32_t event_idx;
> + uint32_t ctr_idx;
> +
> + /* Fixed mcycle counter */
> + if (target_ctr == 0) {
> + return true;
> + }
> +
> + cpu = RISCV_CPU(env_cpu(env));
> + event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx)));
> +
> + /* Counter zero is not used for event_ctr_map */
> + if (!ctr_idx) {
> + return false;
> + }
> +
> + return (target_ctr == ctr_idx) ? true : false;
> +}
> +
> +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> + gpointer udata)
> +{
> + return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
> +}
> +
> +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> +{
> + int64_t ret = 0;
> +
> + if (icount_enabled()) {
> + ret = icount_to_ns(value);
> + } else {
> + ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> + }
> +
> + return ret;
> +}
> +
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx)
> +{
> + uint32_t event_idx;
> + RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +
> + if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> + return -1;
> + }
> +
> + /**
> + * Expected mhpmevent value is zero for reset case. Remove the current
> + * mapping.
> + */
> + if (!value) {
> + g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> + pmu_remove_event_map,
> + GUINT_TO_POINTER(ctr_idx));
> + return 0;
> + }
> +
> + event_idx = value & MHPMEVENT_IDX_MASK;
> + if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx))) {
> + return 0;
> + }
> +
> + switch (event_idx) {
> + case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> + case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> + case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> + case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> + case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> + break;
> + default:
> + /* We don't support any raw events right now */
> + return -1;
> + }
> + g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
> + GUINT_TO_POINTER(ctr_idx));
> +
> + return 0;
> +}
> +
> +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> + enum riscv_pmu_event_idx evt_idx)
> +{
> + uint32_t ctr_idx;
> + CPURISCVState *env = &cpu->env;
> + PMUCTRState *counter;
> + target_ulong *mhpmevent_val;
> + uint64_t of_bit_mask;
> + int64_t irq_trigger_at;
> +
> + if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> + evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> + return;
> + }
> +
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(evt_idx)));
> + if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> + return;
> + }
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> + of_bit_mask = MHPMEVENTH_BIT_OF;
> + } else {
> + mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> + of_bit_mask = MHPMEVENT_BIT_OF;
> + }
> +
> + counter = &env->pmu_ctrs[ctr_idx];
> + if (counter->irq_overflow_left > 0) {
> + irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> + counter->irq_overflow_left;
> + timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> + counter->irq_overflow_left = 0;
> + return;
> + }
> +
> + if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> + /* Generate interrupt only if OF bit is clear */
> + if (!(*mhpmevent_val & of_bit_mask)) {
> + *mhpmevent_val |= of_bit_mask;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + }
> +}
> +
> +/* Timer callback for instret and cycle counter overflow */
> +void riscv_pmu_timer_cb(void *priv)
> +{
> + RISCVCPU *cpu = priv;
> +
> + /* Timer event was triggered only for these events */
> + pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> + pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> +}
> +
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
> +{
> + uint64_t overflow_delta, overflow_at;
> + int64_t overflow_ns, overflow_left = 0;
> + RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +
> + if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
> + return -1;
> + }
> +
> + if (value) {
> + overflow_delta = UINT64_MAX - value + 1;
> + } else {
> + overflow_delta = UINT64_MAX;
> + }
> +
> + /**
> + * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
> + * Compute the leftover and save it so that it can be reprogrammed again
> + * when timer expires.
> + */
> + if (overflow_delta > INT64_MAX) {
> + overflow_left = overflow_delta - INT64_MAX;
> + }
> +
> + if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> + riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> + overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> + overflow_left = pmu_icount_ticks_to_ns(overflow_left) ;
> + } else {
> + return -1;
> + }
> + overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
> +
> + if (overflow_at > INT64_MAX) {
> + overflow_left += overflow_at - INT64_MAX;
> + counter->irq_overflow_left = overflow_left;
> + overflow_at = INT64_MAX;
> + }
> + timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> +
> + return 0;
> +}
> +
> +
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> +{
> + if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> + return -1;
> + }
> +
> + cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
> + if (!cpu->pmu_event_ctr_map) {
> + /* PMU support can not be enabled */
> + qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> + cpu->cfg.pmu_num = 0;
> + return -1;
> + }
> +
> + /* Create a bitmask of available programmable counters */
> + cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> +
> + return 0;
> }
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 58a5bc3a4089..036653627f78 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> uint32_t target_ctr);
> bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
> uint32_t target_ctr);
> +void riscv_pmu_timer_cb(void *priv);
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx);
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx);
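The overflow-timer arithmetic in riscv_pmu_setup_timer() above can be sketched in isolation; these helpers are illustrative only, assuming the same UINT64_MAX/INT64_MAX conventions as the patch:

```c
#include <stdint.h>

/*
 * Sketch of the timer math: how many counter increments remain before a
 * uint64_t counter wraps, and how much of that exceeds what QEMU's
 * signed int64_t timers can express (the excess is saved in
 * irq_overflow_left and reprogrammed when the timer fires).
 * Illustrative helpers, not the QEMU code.
 */
static uint64_t ticks_to_overflow(uint64_t value)
{
    /* A programmed value of 0 is treated as a full period, as in the patch */
    return value ? UINT64_MAX - value + 1 : UINT64_MAX;
}

static uint64_t leftover_beyond_int64(uint64_t overflow_delta)
{
    return overflow_delta > INT64_MAX ?
           overflow_delta - (uint64_t)INT64_MAX : 0;
}
```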
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable
2022-07-04 15:26 ` Weiwei Li
@ 2022-07-05 0:38 ` Weiwei Li
2022-07-05 7:51 ` Atish Kumar Patra
0 siblings, 1 reply; 34+ messages in thread
From: Weiwei Li @ 2022-07-05 0:38 UTC (permalink / raw)
To: Atish Patra, qemu-devel
Cc: Bin Meng, Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv,
frank.chang
On 2022/7/4 at 11:26 PM, Weiwei Li wrote:
>
> On 2022/6/21 at 7:15 AM, Atish Patra wrote:
>> The RISC-V privileged specification allows implementing any number of
>> the 29 programmable counters. However, QEMU currently implements all
>> of them.
>>
>> Make this configurable through a pmu config parameter, which now
>> indicates how many programmable counters the cpu should implement.
>>
>> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
>> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>> Signed-off-by: Atish Patra <atish.patra@wdc.com>
>> Signed-off-by: Atish Patra <atishp@rivosinc.com>
>> ---
>> target/riscv/cpu.c | 3 +-
>> target/riscv/cpu.h | 2 +-
>> target/riscv/csr.c | 94 ++++++++++++++++++++++++++++++----------------
>> 3 files changed, 63 insertions(+), 36 deletions(-)
>>
>> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
>> index 1b57b3c43980..d12c6dc630ca 100644
>> --- a/target/riscv/cpu.c
>> +++ b/target/riscv/cpu.c
>> @@ -851,7 +851,6 @@ static void riscv_cpu_init(Object *obj)
>> {
>> RISCVCPU *cpu = RISCV_CPU(obj);
>> - cpu->cfg.ext_pmu = true;
>> cpu->cfg.ext_ifencei = true;
>> cpu->cfg.ext_icsr = true;
>> cpu->cfg.mmu = true;
>> @@ -879,7 +878,7 @@ static Property riscv_cpu_extensions[] = {
>> DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
>> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
>> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
>> - DEFINE_PROP_BOOL("pmu", RISCVCPU, cfg.ext_pmu, true),
>> + DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
>
> I think it's better to add a check that cfg.pmu_num is <= 29.
>
OK, I found this check in the following patch.
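The range check being discussed might look something like this; the constant mirrors the patch, but the helper itself is hypothetical:

```c
/*
 * Sketch of the range check: counters 0-2 are the fixed
 * cycle/time/instret counters, leaving at most 29 programmable ones
 * (mhpmcounter3..31). RV_MAX_MHPMCOUNTERS mirrors the patch; the
 * helper is hypothetical.
 */
#define RV_MAX_MHPMCOUNTERS 32

static int pmu_num_valid(unsigned int pmu_num)
{
    return pmu_num <= (RV_MAX_MHPMCOUNTERS - 3);
}
```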
>> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
>> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
>> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
>> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
>> index 252c30a55d78..ffee54ea5c27 100644
>> --- a/target/riscv/cpu.h
>> +++ b/target/riscv/cpu.h
>> @@ -397,7 +397,6 @@ struct RISCVCPUConfig {
>> bool ext_zksed;
>> bool ext_zksh;
>> bool ext_zkt;
>> - bool ext_pmu;
>> bool ext_ifencei;
>> bool ext_icsr;
>> bool ext_svinval;
>> @@ -421,6 +420,7 @@ struct RISCVCPUConfig {
>> /* Vendor-specific custom extensions */
>> bool ext_XVentanaCondOps;
>> + uint8_t pmu_num;
>> char *priv_spec;
>> char *user_spec;
>> char *bext_spec;
>> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
>> index 0ca05c77883c..b4a8e15f498f 100644
>> --- a/target/riscv/csr.c
>> +++ b/target/riscv/csr.c
>> @@ -73,9 +73,17 @@ static RISCVException ctr(CPURISCVState *env, int
>> csrno)
>> CPUState *cs = env_cpu(env);
>> RISCVCPU *cpu = RISCV_CPU(cs);
>> int ctr_index;
>> + int base_csrno = CSR_HPMCOUNTER3;
>> + bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>> - if (!cpu->cfg.ext_pmu) {
>> - /* The PMU extension is not enabled */
>> + if (rv32 && csrno >= CSR_CYCLEH) {
>> + /* Offset for RV32 hpmcounternh counters */
>> + base_csrno += 0x80;
>> + }
>> + ctr_index = csrno - base_csrno;
>> +
>> + if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
>> + /* No counter is enabled in PMU or the counter is out of
>> range */
>
> I seems unnecessary to add '!cpu->cfg.pmu_num ' here, 'ctr_index >=
> (cpu->cfg.pmu_num)' is true
Typo. I -> It
>
> when cpu->cfg.pmu_num is zero, once the problem with base_csrno is fixed.
>
> Regards,
>
> Weiwei Li
>
>> return RISCV_EXCP_ILLEGAL_INST;
>> }
>> @@ -103,7 +111,7 @@ static RISCVException ctr(CPURISCVState *env,
>> int csrno)
>> }
>> break;
>> }
>> - if (riscv_cpu_mxl(env) == MXL_RV32) {
>> + if (rv32) {
>> switch (csrno) {
>> case CSR_CYCLEH:
>> if (!get_field(env->mcounteren, COUNTEREN_CY)) {
>> @@ -158,7 +166,7 @@ static RISCVException ctr(CPURISCVState *env, int
>> csrno)
>> }
>> break;
>> }
>> - if (riscv_cpu_mxl(env) == MXL_RV32) {
>> + if (rv32) {
>> switch (csrno) {
>> case CSR_CYCLEH:
>> if (!get_field(env->hcounteren, COUNTEREN_CY) &&
>> @@ -202,6 +210,26 @@ static RISCVException ctr32(CPURISCVState *env,
>> int csrno)
>> }
>> #if !defined(CONFIG_USER_ONLY)
>> +static RISCVException mctr(CPURISCVState *env, int csrno)
>> +{
>> + CPUState *cs = env_cpu(env);
>> + RISCVCPU *cpu = RISCV_CPU(cs);
>> + int ctr_index;
>> + int base_csrno = CSR_MHPMCOUNTER3;
>> +
>> + if ((riscv_cpu_mxl(env) == MXL_RV32) && csrno >= CSR_MCYCLEH) {
>> + /* Offset for RV32 mhpmcounternh counters */
>> + base_csrno += 0x80;
>> + }
>> + ctr_index = csrno - base_csrno;
>> + if (!cpu->cfg.pmu_num || ctr_index >= cpu->cfg.pmu_num) {
>> + /* The PMU is not enabled or counter is out of range*/
>> + return RISCV_EXCP_ILLEGAL_INST;
>> + }
>> +
>> + return RISCV_EXCP_NONE;
>> +}
>> +
>> static RISCVException any(CPURISCVState *env, int csrno)
>> {
>> return RISCV_EXCP_NONE;
>> @@ -3687,35 +3715,35 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>> [CSR_HPMCOUNTER30] = { "hpmcounter30", ctr, read_zero },
>> [CSR_HPMCOUNTER31] = { "hpmcounter31", ctr, read_zero },
>> - [CSR_MHPMCOUNTER3] = { "mhpmcounter3", any, read_zero },
>> - [CSR_MHPMCOUNTER4] = { "mhpmcounter4", any, read_zero },
>> - [CSR_MHPMCOUNTER5] = { "mhpmcounter5", any, read_zero },
>> - [CSR_MHPMCOUNTER6] = { "mhpmcounter6", any, read_zero },
>> - [CSR_MHPMCOUNTER7] = { "mhpmcounter7", any, read_zero },
>> - [CSR_MHPMCOUNTER8] = { "mhpmcounter8", any, read_zero },
>> - [CSR_MHPMCOUNTER9] = { "mhpmcounter9", any, read_zero },
>> - [CSR_MHPMCOUNTER10] = { "mhpmcounter10", any, read_zero },
>> - [CSR_MHPMCOUNTER11] = { "mhpmcounter11", any, read_zero },
>> - [CSR_MHPMCOUNTER12] = { "mhpmcounter12", any, read_zero },
>> - [CSR_MHPMCOUNTER13] = { "mhpmcounter13", any, read_zero },
>> - [CSR_MHPMCOUNTER14] = { "mhpmcounter14", any, read_zero },
>> - [CSR_MHPMCOUNTER15] = { "mhpmcounter15", any, read_zero },
>> - [CSR_MHPMCOUNTER16] = { "mhpmcounter16", any, read_zero },
>> - [CSR_MHPMCOUNTER17] = { "mhpmcounter17", any, read_zero },
>> - [CSR_MHPMCOUNTER18] = { "mhpmcounter18", any, read_zero },
>> - [CSR_MHPMCOUNTER19] = { "mhpmcounter19", any, read_zero },
>> - [CSR_MHPMCOUNTER20] = { "mhpmcounter20", any, read_zero },
>> - [CSR_MHPMCOUNTER21] = { "mhpmcounter21", any, read_zero },
>> - [CSR_MHPMCOUNTER22] = { "mhpmcounter22", any, read_zero },
>> - [CSR_MHPMCOUNTER23] = { "mhpmcounter23", any, read_zero },
>> - [CSR_MHPMCOUNTER24] = { "mhpmcounter24", any, read_zero },
>> - [CSR_MHPMCOUNTER25] = { "mhpmcounter25", any, read_zero },
>> - [CSR_MHPMCOUNTER26] = { "mhpmcounter26", any, read_zero },
>> - [CSR_MHPMCOUNTER27] = { "mhpmcounter27", any, read_zero },
>> - [CSR_MHPMCOUNTER28] = { "mhpmcounter28", any, read_zero },
>> - [CSR_MHPMCOUNTER29] = { "mhpmcounter29", any, read_zero },
>> - [CSR_MHPMCOUNTER30] = { "mhpmcounter30", any, read_zero },
>> - [CSR_MHPMCOUNTER31] = { "mhpmcounter31", any, read_zero },
>> + [CSR_MHPMCOUNTER3] = { "mhpmcounter3", mctr, read_zero },
>> + [CSR_MHPMCOUNTER4] = { "mhpmcounter4", mctr, read_zero },
>> + [CSR_MHPMCOUNTER5] = { "mhpmcounter5", mctr, read_zero },
>> + [CSR_MHPMCOUNTER6] = { "mhpmcounter6", mctr, read_zero },
>> + [CSR_MHPMCOUNTER7] = { "mhpmcounter7", mctr, read_zero },
>> + [CSR_MHPMCOUNTER8] = { "mhpmcounter8", mctr, read_zero },
>> + [CSR_MHPMCOUNTER9] = { "mhpmcounter9", mctr, read_zero },
>> + [CSR_MHPMCOUNTER10] = { "mhpmcounter10", mctr, read_zero },
>> + [CSR_MHPMCOUNTER11] = { "mhpmcounter11", mctr, read_zero },
>> + [CSR_MHPMCOUNTER12] = { "mhpmcounter12", mctr, read_zero },
>> + [CSR_MHPMCOUNTER13] = { "mhpmcounter13", mctr, read_zero },
>> + [CSR_MHPMCOUNTER14] = { "mhpmcounter14", mctr, read_zero },
>> + [CSR_MHPMCOUNTER15] = { "mhpmcounter15", mctr, read_zero },
>> + [CSR_MHPMCOUNTER16] = { "mhpmcounter16", mctr, read_zero },
>> + [CSR_MHPMCOUNTER17] = { "mhpmcounter17", mctr, read_zero },
>> + [CSR_MHPMCOUNTER18] = { "mhpmcounter18", mctr, read_zero },
>> + [CSR_MHPMCOUNTER19] = { "mhpmcounter19", mctr, read_zero },
>> + [CSR_MHPMCOUNTER20] = { "mhpmcounter20", mctr, read_zero },
>> + [CSR_MHPMCOUNTER21] = { "mhpmcounter21", mctr, read_zero },
>> + [CSR_MHPMCOUNTER22] = { "mhpmcounter22", mctr, read_zero },
>> + [CSR_MHPMCOUNTER23] = { "mhpmcounter23", mctr, read_zero },
>> + [CSR_MHPMCOUNTER24] = { "mhpmcounter24", mctr, read_zero },
>> + [CSR_MHPMCOUNTER25] = { "mhpmcounter25", mctr, read_zero },
>> + [CSR_MHPMCOUNTER26] = { "mhpmcounter26", mctr, read_zero },
>> + [CSR_MHPMCOUNTER27] = { "mhpmcounter27", mctr, read_zero },
>> + [CSR_MHPMCOUNTER28] = { "mhpmcounter28", mctr, read_zero },
>> + [CSR_MHPMCOUNTER29] = { "mhpmcounter29", mctr, read_zero },
>> + [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
>> + [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
>> [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
>> [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
>
* Re: [PATCH v10 08/12] target/riscv: Add sscofpmf extension support
2022-06-20 23:15 ` [PATCH v10 08/12] target/riscv: Add sscofpmf extension support Atish Patra
2022-07-05 0:31 ` Weiwei Li
@ 2022-07-05 1:30 ` Weiwei Li
2022-07-05 7:36 ` Atish Kumar Patra
2022-07-14 9:53 ` Heiko Stübner
2 siblings, 1 reply; 34+ messages in thread
From: Weiwei Li @ 2022-07-05 1:30 UTC (permalink / raw)
To: Atish Patra, qemu-devel
Cc: Alistair Francis, Bin Meng, Palmer Dabbelt, qemu-riscv, frank.chang
On 2022/6/21 at 7:15 AM, Atish Patra wrote:
> The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> extension allows a perf-like tool to handle overflow interrupts and
> provides filtering support. This patch provides a framework for
> programmable counters to leverage the extension. As the extension
> doesn't have any provision for an overflow bit for the fixed counters,
> the fixed events can also be monitored using programmable counters.
> The underlying counters for the cycle and instruction counters are
> always running. Thus, a separate timer device is programmed to handle
> the overflow.
>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.c | 11 ++
> target/riscv/cpu.h | 25 +++
> target/riscv/cpu_bits.h | 55 +++++++
> target/riscv/csr.c | 165 ++++++++++++++++++-
> target/riscv/machine.c | 1 +
> target/riscv/pmu.c | 357 +++++++++++++++++++++++++++++++++++++++-
> target/riscv/pmu.h | 7 +
> 7 files changed, 610 insertions(+), 11 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index d12c6dc630ca..7d9e2aca12a9 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -22,6 +22,7 @@
> #include "qemu/ctype.h"
> #include "qemu/log.h"
> #include "cpu.h"
> +#include "pmu.h"
> #include "internals.h"
> #include "exec/exec-all.h"
> #include "qapi/error.h"
> @@ -775,6 +776,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> set_misa(env, env->misa_mxl, ext);
> }
>
> +#ifndef CONFIG_USER_ONLY
> + if (cpu->cfg.pmu_num) {
> + if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
> + cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> + riscv_pmu_timer_cb, cpu);
> + }
> + }
> +#endif
> +
> riscv_cpu_register_gdb_regs_for_features(cs);
>
> qemu_init_vcpu(cs);
> @@ -879,6 +889,7 @@ static Property riscv_cpu_extensions[] = {
> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> + DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 5c7acc055ac9..2222db193c3d 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
> /* Snapshort value of a counter in RV32 */
> target_ulong mhpmcounterh_prev;
> bool started;
> + /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
> + target_ulong irq_overflow_left;
> } PMUCTRState;
>
> struct CPUArchState {
> @@ -297,6 +299,9 @@ struct CPUArchState {
> /* PMU event selector configured values. First three are unused*/
> target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
>
> + /* PMU event selector configured values for RV32*/
> + target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> +
> target_ulong sscratch;
> target_ulong mscratch;
>
> @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
> bool ext_zve32f;
> bool ext_zve64f;
> bool ext_zmmul;
> + bool ext_sscofpmf;
> bool rvv_ta_all_1s;
>
> uint32_t mvendorid;
> @@ -479,6 +485,12 @@ struct ArchCPU {
>
> /* Configuration Settings */
> RISCVCPUConfig cfg;
> +
> + QEMUTimer *pmu_timer;
> + /* A bitmask of Available programmable counters */
> + uint32_t pmu_avail_ctrs;
> + /* Mapping of events to counters */
> + GHashTable *pmu_event_ctr_map;
> };
>
> static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> @@ -738,6 +750,19 @@ enum {
> CSR_TABLE_SIZE = 0x1000
> };
>
> +/**
> + * The event id are encoded based on the encoding specified in the
> + * SBI specification v0.3
> + */
> +
> +enum riscv_pmu_event_idx {
> + RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> + RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> + RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> + RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> + RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> +};
> +
> /* CSR function table */
> extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
>
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index b3f7fa713000..d94abefdaa0f 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -400,6 +400,37 @@
> #define CSR_MHPMEVENT29 0x33d
> #define CSR_MHPMEVENT30 0x33e
> #define CSR_MHPMEVENT31 0x33f
> +
> +#define CSR_MHPMEVENT3H 0x723
> +#define CSR_MHPMEVENT4H 0x724
> +#define CSR_MHPMEVENT5H 0x725
> +#define CSR_MHPMEVENT6H 0x726
> +#define CSR_MHPMEVENT7H 0x727
> +#define CSR_MHPMEVENT8H 0x728
> +#define CSR_MHPMEVENT9H 0x729
> +#define CSR_MHPMEVENT10H 0x72a
> +#define CSR_MHPMEVENT11H 0x72b
> +#define CSR_MHPMEVENT12H 0x72c
> +#define CSR_MHPMEVENT13H 0x72d
> +#define CSR_MHPMEVENT14H 0x72e
> +#define CSR_MHPMEVENT15H 0x72f
> +#define CSR_MHPMEVENT16H 0x730
> +#define CSR_MHPMEVENT17H 0x731
> +#define CSR_MHPMEVENT18H 0x732
> +#define CSR_MHPMEVENT19H 0x733
> +#define CSR_MHPMEVENT20H 0x734
> +#define CSR_MHPMEVENT21H 0x735
> +#define CSR_MHPMEVENT22H 0x736
> +#define CSR_MHPMEVENT23H 0x737
> +#define CSR_MHPMEVENT24H 0x738
> +#define CSR_MHPMEVENT25H 0x739
> +#define CSR_MHPMEVENT26H 0x73a
> +#define CSR_MHPMEVENT27H 0x73b
> +#define CSR_MHPMEVENT28H 0x73c
> +#define CSR_MHPMEVENT29H 0x73d
> +#define CSR_MHPMEVENT30H 0x73e
> +#define CSR_MHPMEVENT31H 0x73f
> +
> #define CSR_MHPMCOUNTER3H 0xb83
> #define CSR_MHPMCOUNTER4H 0xb84
> #define CSR_MHPMCOUNTER5H 0xb85
> @@ -461,6 +492,7 @@
> #define CSR_VSMTE 0x2c0
> #define CSR_VSPMMASK 0x2c1
> #define CSR_VSPMBASE 0x2c2
> +#define CSR_SCOUNTOVF 0xda0
>
> /* Crypto Extension */
> #define CSR_SEED 0x015
> @@ -638,6 +670,7 @@ typedef enum RISCVException {
> #define IRQ_VS_EXT 10
> #define IRQ_M_EXT 11
> #define IRQ_S_GEXT 12
> +#define IRQ_PMU_OVF 13
> #define IRQ_LOCAL_MAX 16
> #define IRQ_LOCAL_GUEST_MAX (TARGET_LONG_BITS - 1)
>
> @@ -655,11 +688,13 @@ typedef enum RISCVException {
> #define MIP_VSEIP (1 << IRQ_VS_EXT)
> #define MIP_MEIP (1 << IRQ_M_EXT)
> #define MIP_SGEIP (1 << IRQ_S_GEXT)
> +#define MIP_LCOFIP (1 << IRQ_PMU_OVF)
>
> /* sip masks */
> #define SIP_SSIP MIP_SSIP
> #define SIP_STIP MIP_STIP
> #define SIP_SEIP MIP_SEIP
> +#define SIP_LCOFIP MIP_LCOFIP
>
> /* MIE masks */
> #define MIE_SEIE (1 << IRQ_S_EXT)
> @@ -813,4 +848,24 @@ typedef enum RISCVException {
> #define SEED_OPST_WAIT (0b01 << 30)
> #define SEED_OPST_ES16 (0b10 << 30)
> #define SEED_OPST_DEAD (0b11 << 30)
> +/* PMU related bits */
> +#define MIE_LCOFIE (1 << IRQ_PMU_OVF)
> +
> +#define MHPMEVENT_BIT_OF BIT_ULL(63)
> +#define MHPMEVENTH_BIT_OF BIT(31)
> +#define MHPMEVENT_BIT_MINH BIT_ULL(62)
> +#define MHPMEVENTH_BIT_MINH BIT(30)
> +#define MHPMEVENT_BIT_SINH BIT_ULL(61)
> +#define MHPMEVENTH_BIT_SINH BIT(29)
> +#define MHPMEVENT_BIT_UINH BIT_ULL(60)
> +#define MHPMEVENTH_BIT_UINH BIT(28)
> +#define MHPMEVENT_BIT_VSINH BIT_ULL(59)
> +#define MHPMEVENTH_BIT_VSINH BIT(27)
> +#define MHPMEVENT_BIT_VUINH BIT_ULL(58)
> +#define MHPMEVENTH_BIT_VUINH BIT(26)
> +
> +#define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
> +#define MHPMEVENT_IDX_MASK 0xFFFFF
> +#define MHPMEVENT_SSCOF_RESVD 16
> +
> #endif
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index d65318dcc62d..2664ce265784 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> CPUState *cs = env_cpu(env);
> RISCVCPU *cpu = RISCV_CPU(cs);
> int ctr_index;
> - int base_csrno = CSR_HPMCOUNTER3;
> + int base_csrno = CSR_CYCLE;
> bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>
> if (rv32 && csrno >= CSR_CYCLEH) {
> @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> ctr_index = csrno - base_csrno;
>
> - if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> + if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> + (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> + goto skip_ext_pmu_check;
> + }
> +
> + if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
> /* No counter is enabled in PMU or the counter is out of range */
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> +skip_ext_pmu_check:
> +
> if (env->priv == PRV_S) {
> switch (csrno) {
> case CSR_CYCLE:
> @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - ctr_index = csrno - CSR_CYCLE;
> if (!get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
> @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - ctr_index = csrno - CSR_CYCLEH;
> if (!get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
> @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - ctr_index = csrno - CSR_CYCLE;
> if (!get_field(env->hcounteren, 1 << ctr_index) &&
> get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - ctr_index = csrno - CSR_CYCLEH;
> if (!get_field(env->hcounteren, 1 << ctr_index) &&
> get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
> return mctr(env, csrno);
> }
>
> +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> +{
> + CPUState *cs = env_cpu(env);
> + RISCVCPU *cpu = RISCV_CPU(cs);
> +
> + if (!cpu->cfg.ext_sscofpmf) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException any(CPURISCVState *env, int csrno)
> {
> return RISCV_EXCP_NONE;
> @@ -663,9 +678,38 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
> static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
> {
> int evt_index = csrno - CSR_MCOUNTINHIBIT;
> + uint64_t mhpmevt_val = val;
>
> env->mhpmevent_val[evt_index] = val;
>
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmevt_val = mhpmevt_val | ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> + }
> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> + int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +
> + *val = env->mhpmeventh_val[evt_index];
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
> +{
> + int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> + uint64_t mhpmevth_val = val;
> + uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> +
> + mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> + env->mhpmeventh_val[evt_index] = val;
> +
> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> return RISCV_EXCP_NONE;
> }
>
> @@ -673,12 +717,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
> {
> int ctr_idx = csrno - CSR_MCYCLE;
> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t mhpmctr_val = val;
>
> counter->mhpmcounter_val = val;
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> counter->mhpmcounter_prev = get_ticks(false);
> - } else {
> + if (ctr_idx > 2) {
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmctr_val = mhpmctr_val |
> + ((uint64_t)counter->mhpmcounterh_val << 32);
> + }
> + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> + }
> + } else {
> /* Other counters can keep incrementing from the given value */
> counter->mhpmcounter_prev = val;
> }
> @@ -690,11 +742,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
> {
> int ctr_idx = csrno - CSR_MCYCLEH;
> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t mhpmctr_val = counter->mhpmcounter_val;
> + uint64_t mhpmctrh_val = val;
>
> counter->mhpmcounterh_val = val;
> + mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> counter->mhpmcounterh_prev = get_ticks(true);
> + if (ctr_idx > 2) {
> + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> + }
> } else {
> counter->mhpmcounterh_prev = val;
> }
> @@ -770,6 +828,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
> return riscv_pmu_read_ctr(env, val, true, ctr_index);
> }
>
> +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> + int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
> + int i;
> + *val = 0;
> + target_ulong *mhpm_evt_val;
> + uint64_t of_bit_mask;
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpm_evt_val = env->mhpmeventh_val;
> + of_bit_mask = MHPMEVENTH_BIT_OF;
> + } else {
> + mhpm_evt_val = env->mhpmevent_val;
> + of_bit_mask = MHPMEVENT_BIT_OF;
> + }
> +
> + for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
> + if ((get_field(env->mcounteren, BIT(i))) &&
> + (mhpm_evt_val[i] & of_bit_mask)) {
> + *val |= BIT(i);
> + }
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException read_time(CPURISCVState *env, int csrno,
> target_ulong *val)
> {
> @@ -799,7 +883,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
> /* Machine constants */
>
> #define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> -#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> +#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
> + MIP_LCOFIP))
There seems to be a problem here. S_MODE_INTERRUPTS is used in
delegable_ints, which is then used not only in rmw_mip64 but also in
rmw_mideleg64. So if we add MIP_LCOFIP here, this bit will also be
added to mideleg, which is not stated in the spec for Sscofpmf.
And if MIP_LCOFIP is not a bit in mideleg, the following modification
to 'sip_writable_mask' will not work.
Regards,
Weiwei Li
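The concern can be demonstrated with the mask values themselves; the separate delegable mask below is one possible direction, not what the patch does, and the bit positions mirror cpu_bits.h:

```c
/*
 * Because S_MODE_INTERRUPTS feeds delegable_ints (used by
 * rmw_mideleg64), adding MIP_LCOFIP to it also makes the bit
 * delegable. Keeping a separate delegable mask, as sketched here,
 * would avoid exposing LCOFIP in mideleg. Illustrative only.
 */
#define IRQ_PMU_OVF 13
#define MIP_SSIP   (1u << 1)
#define MIP_STIP   (1u << 5)
#define MIP_SEIP   (1u << 9)
#define MIP_LCOFIP (1u << IRQ_PMU_OVF)

#define S_MODE_INTERRUPTS (MIP_SSIP | MIP_STIP | MIP_SEIP | MIP_LCOFIP)
/* A hypothetical delegable mask that deliberately leaves LCOFIP out: */
#define DELEGABLE_S_INTS  (MIP_SSIP | MIP_STIP | MIP_SEIP)
```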
> #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
> #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
>
> @@ -840,7 +925,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
> static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
> SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
> SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
> +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
> + SIP_LCOFIP;
> static const target_ulong hip_writable_mask = MIP_VSSIP;
> static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
> static const target_ulong vsip_writable_mask = MIP_VSSIP;
> @@ -4005,6 +4091,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
> write_mhpmevent },
>
> + [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> +
> [CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_hpmcounterh },
> [CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_hpmcounterh },
> [CSR_HPMCOUNTER5H] = { "hpmcounter5h", ctr32, read_hpmcounterh },
> @@ -4093,5 +4238,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> write_mhpmcounterh },
> [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
> write_mhpmcounterh },
> + [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf },
> +
> #endif /* !CONFIG_USER_ONLY */
> };
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index dc182ca81119..33ef9b8e9908 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
> VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
> vmstate_pmu_ctr_state, PMUCTRState),
> VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> + VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> VMSTATE_UINTTL(env.sscratch, RISCVCPU),
> VMSTATE_UINTTL(env.mscratch, RISCVCPU),
> VMSTATE_UINT64(env.mfromhost, RISCVCPU),
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 000fe8da45ef..34096941c0ce 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -19,14 +19,367 @@
> #include "qemu/osdep.h"
> #include "cpu.h"
> #include "pmu.h"
> +#include "sysemu/cpu-timers.h"
> +
> +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> +#define MAKE_32BIT_MASK(shift, length) \
> + (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> +
> +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> + !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
> + return false;
> + } else {
> + return true;
> + }
> +}
> +
> +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> +
> + if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
> + !get_field(env->mcountinhibit, BIT(ctr_idx))) {
> + return true;
> + } else {
> + return false;
> + }
> +}
> +
> +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> + target_ulong max_val = UINT32_MAX;
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + bool virt_on = riscv_cpu_virt_enabled(env);
> +
> + /* Privilege mode filtering */
> + if ((env->priv == PRV_M &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
> + (env->priv == PRV_S && virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
> + (env->priv == PRV_U && virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
> + (env->priv == PRV_S && !virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
> + (env->priv == PRV_U && !virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
> + return 0;
> + }
> +
> + /* Handle the overflow scenario */
> + if (counter->mhpmcounter_val == max_val) {
> + if (counter->mhpmcounterh_val == max_val) {
> + counter->mhpmcounter_val = 0;
> + counter->mhpmcounterh_val = 0;
> + /* Generate interrupt only if OF bit is clear */
> + if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
> + env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + } else {
> + counter->mhpmcounterh_val++;
> + }
> + } else {
> + counter->mhpmcounter_val++;
> + }
> +
> + return 0;
> +}
> +
> +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t max_val = UINT64_MAX;
> + bool virt_on = riscv_cpu_virt_enabled(env);
> +
> + /* Privilege mode filtering */
> + if ((env->priv == PRV_M &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
> + (env->priv == PRV_S && virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
> + (env->priv == PRV_U && virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
> + (env->priv == PRV_S && !virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
> + (env->priv == PRV_U && !virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
> + return 0;
> + }
> +
> + /* Handle the overflow scenario */
> + if (counter->mhpmcounter_val == max_val) {
> + counter->mhpmcounter_val = 0;
> + /* Generate interrupt only if OF bit is clear */
> + if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
> + env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + } else {
> + counter->mhpmcounter_val++;
> + }
> + return 0;
> +}
> +
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
> +{
> + uint32_t ctr_idx;
> + int ret;
> + CPURISCVState *env = &cpu->env;
> + gpointer value;
> +
> + value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx));
> + if (!value) {
> + return -1;
> + }
> +
> + ctr_idx = GPOINTER_TO_UINT(value);
> + if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
> + get_field(env->mcountinhibit, BIT(ctr_idx))) {
> + return -1;
> + }
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
> + } else {
> + ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
> + }
> +
> + return ret;
> +}
>
> bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> uint32_t target_ctr)
> {
> - return (target_ctr == 0) ? true : false;
> + RISCVCPU *cpu;
> + uint32_t event_idx;
> + uint32_t ctr_idx;
> +
> + /* Fixed instret counter */
> + if (target_ctr == 2) {
> + return true;
> + }
> +
> + cpu = RISCV_CPU(env_cpu(env));
> + event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx)));
> + if (!ctr_idx) {
> + return false;
> + }
> +
> + return target_ctr == ctr_idx ? true : false;
> }
>
> bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
> {
> - return (target_ctr == 2) ? true : false;
> + RISCVCPU *cpu;
> + uint32_t event_idx;
> + uint32_t ctr_idx;
> +
> + /* Fixed mcycle counter */
> + if (target_ctr == 0) {
> + return true;
> + }
> +
> + cpu = RISCV_CPU(env_cpu(env));
> + event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx)));
> +
> + /* Counter zero is not used for event_ctr_map */
> + if (!ctr_idx) {
> + return false;
> + }
> +
> + return (target_ctr == ctr_idx) ? true : false;
> +}
> +
> +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> + gpointer udata)
> +{
> + return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
> +}
> +
> +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> +{
> + int64_t ret = 0;
> +
> + if (icount_enabled()) {
> + ret = icount_to_ns(value);
> + } else {
> + ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> + }
> +
> + return ret;
> +}
> +
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx)
> +{
> + uint32_t event_idx;
> + RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +
> + if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> + return -1;
> + }
> +
> + /**
> + * Expected mhpmevent value is zero for reset case. Remove the current
> + * mapping.
> + */
> + if (!value) {
> + g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> + pmu_remove_event_map,
> + GUINT_TO_POINTER(ctr_idx));
> + return 0;
> + }
> +
> + event_idx = value & MHPMEVENT_IDX_MASK;
> + if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx))) {
> + return 0;
> + }
> +
> + switch (event_idx) {
> + case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> + case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> + case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> + case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> + case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> + break;
> + default:
> + /* We don't support any raw events right now */
> + return -1;
> + }
> + g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
> + GUINT_TO_POINTER(ctr_idx));
> +
> + return 0;
> +}
> +
> +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> + enum riscv_pmu_event_idx evt_idx)
> +{
> + uint32_t ctr_idx;
> + CPURISCVState *env = &cpu->env;
> + PMUCTRState *counter;
> + target_ulong *mhpmevent_val;
> + uint64_t of_bit_mask;
> + int64_t irq_trigger_at;
> +
> + if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> + evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> + return;
> + }
> +
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(evt_idx)));
> + if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> + return;
> + }
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> + of_bit_mask = MHPMEVENTH_BIT_OF;
> + } else {
> + mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> + of_bit_mask = MHPMEVENT_BIT_OF;
> + }
> +
> + counter = &env->pmu_ctrs[ctr_idx];
> + if (counter->irq_overflow_left > 0) {
> + irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> + counter->irq_overflow_left;
> + timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> + counter->irq_overflow_left = 0;
> + return;
> + }
> +
> + if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> + /* Generate interrupt only if OF bit is clear */
> + if (!(*mhpmevent_val & of_bit_mask)) {
> + *mhpmevent_val |= of_bit_mask;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + }
> +}
> +
> +/* Timer callback for instret and cycle counter overflow */
> +void riscv_pmu_timer_cb(void *priv)
> +{
> + RISCVCPU *cpu = priv;
> +
> + /* Timer event was triggered only for these events */
> + pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> + pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> +}
> +
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
> +{
> + uint64_t overflow_delta, overflow_at;
> + int64_t overflow_ns, overflow_left = 0;
> + RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +
> + if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
> + return -1;
> + }
> +
> + if (value) {
> + overflow_delta = UINT64_MAX - value + 1;
> + } else {
> + overflow_delta = UINT64_MAX;
> + }
> +
> + /**
> + * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
> + * Compute the leftover and save it so that it can be reprogrammed again
> + * when timer expires.
> + */
> + if (overflow_delta > INT64_MAX) {
> + overflow_left = overflow_delta - INT64_MAX;
> + }
> +
> + if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> + riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> + overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> + overflow_left = pmu_icount_ticks_to_ns(overflow_left);
> + } else {
> + return -1;
> + }
> + overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
> +
> + if (overflow_at > INT64_MAX) {
> + overflow_left += overflow_at - INT64_MAX;
> + counter->irq_overflow_left = overflow_left;
> + overflow_at = INT64_MAX;
> + }
> + timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> +
> + return 0;
> +}
> +
> +
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> +{
> + if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> + return -1;
> + }
> +
> + cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
> + if (!cpu->pmu_event_ctr_map) {
> + /* PMU support can not be enabled */
> + qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> + cpu->cfg.pmu_num = 0;
> + return -1;
> + }
> +
> + /* Create a bitmask of available programmable counters */
> + cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> +
> + return 0;
> }
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 58a5bc3a4089..036653627f78 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> uint32_t target_ctr);
> bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
> uint32_t target_ctr);
> +void riscv_pmu_timer_cb(void *priv);
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx);
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx);
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 08/12] target/riscv: Add sscofpmf extension support
2022-07-05 1:30 ` Weiwei Li
@ 2022-07-05 7:36 ` Atish Kumar Patra
2022-07-05 7:48 ` Weiwei Li
0 siblings, 1 reply; 34+ messages in thread
From: Atish Kumar Patra @ 2022-07-05 7:36 UTC (permalink / raw)
To: Weiwei Li
Cc: qemu-devel@nongnu.org Developers, Alistair Francis, Bin Meng,
Palmer Dabbelt, open list:RISC-V, Frank Chang
On Mon, Jul 4, 2022 at 6:31 PM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>
>
> 在 2022/6/21 上午7:15, Atish Patra 写道:
>
> The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> extension allows perf-like tools to handle overflow interrupts and
> filtering. This patch provides a framework for programmable
> counters to leverage the extension. As the extension doesn't have any
> provision for an overflow bit on the fixed counters, the fixed events
> can also be monitored using the programmable counters. The underlying
> counters for the cycle and instruction counters are always running. Thus,
> a separate timer device is programmed to handle the overflow.
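The timer scheme the commit message describes can be illustrated with a small sketch (under the patch's assumptions — a 1 GHz timebase and 64-bit counters; these helpers are illustrative, not the QEMU implementation):

```c
#include <stdint.h>
#include <assert.h>

#define NANOSECONDS_PER_SECOND 1000000000LL
#define RISCV_TIMEBASE_FREQ    1000000000LL /* 1 GHz, matching the patch */

/* Increments remaining until a 64-bit counter wraps back to zero. */
static uint64_t overflow_delta(uint64_t ctr_val)
{
    return ctr_val ? UINT64_MAX - ctr_val + 1 : UINT64_MAX;
}

/* Convert that many counter ticks into a nanosecond deadline, which is
 * what a QEMU timer can be armed with. */
static int64_t ticks_to_ns(int64_t ticks)
{
    return (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * ticks;
}
```

When the guest writes a counter value, the emulator computes the distance to overflow, converts it to wall-clock nanoseconds, and arms a one-shot timer whose callback raises LCOFIP.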
>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.c | 11 ++
> target/riscv/cpu.h | 25 +++
> target/riscv/cpu_bits.h | 55 +++++++
> target/riscv/csr.c | 165 ++++++++++++++++++-
> target/riscv/machine.c | 1 +
> target/riscv/pmu.c | 357 +++++++++++++++++++++++++++++++++++++++-
> target/riscv/pmu.h | 7 +
> 7 files changed, 610 insertions(+), 11 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index d12c6dc630ca..7d9e2aca12a9 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -22,6 +22,7 @@
> #include "qemu/ctype.h"
> #include "qemu/log.h"
> #include "cpu.h"
> +#include "pmu.h"
> #include "internals.h"
> #include "exec/exec-all.h"
> #include "qapi/error.h"
> @@ -775,6 +776,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> set_misa(env, env->misa_mxl, ext);
> }
>
> +#ifndef CONFIG_USER_ONLY
> + if (cpu->cfg.pmu_num) {
> + if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
> + cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> + riscv_pmu_timer_cb, cpu);
> + }
> + }
> +#endif
> +
> riscv_cpu_register_gdb_regs_for_features(cs);
>
> qemu_init_vcpu(cs);
> @@ -879,6 +889,7 @@ static Property riscv_cpu_extensions[] = {
> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> + DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 5c7acc055ac9..2222db193c3d 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
> /* Snapshort value of a counter in RV32 */
> target_ulong mhpmcounterh_prev;
> bool started;
> + /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
> + target_ulong irq_overflow_left;
> } PMUCTRState;
>
> struct CPUArchState {
> @@ -297,6 +299,9 @@ struct CPUArchState {
> /* PMU event selector configured values. First three are unused*/
> target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
>
> + /* PMU event selector configured values for RV32 */
> + target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> +
> target_ulong sscratch;
> target_ulong mscratch;
>
> @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
> bool ext_zve32f;
> bool ext_zve64f;
> bool ext_zmmul;
> + bool ext_sscofpmf;
> bool rvv_ta_all_1s;
>
> uint32_t mvendorid;
> @@ -479,6 +485,12 @@ struct ArchCPU {
>
> /* Configuration Settings */
> RISCVCPUConfig cfg;
> +
> + QEMUTimer *pmu_timer;
> + /* A bitmask of available programmable counters */
> + uint32_t pmu_avail_ctrs;
> + /* Mapping of events to counters */
> + GHashTable *pmu_event_ctr_map;
> };
>
> static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> @@ -738,6 +750,19 @@ enum {
> CSR_TABLE_SIZE = 0x1000
> };
>
> +/**
> + * The event ids are encoded as specified in the
> + * SBI specification v0.3.
> + */
> +
> +enum riscv_pmu_event_idx {
> + RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> + RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> + RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> + RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> + RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> +};
> +
> /* CSR function table */
> extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
>
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index b3f7fa713000..d94abefdaa0f 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -400,6 +400,37 @@
> #define CSR_MHPMEVENT29 0x33d
> #define CSR_MHPMEVENT30 0x33e
> #define CSR_MHPMEVENT31 0x33f
> +
> +#define CSR_MHPMEVENT3H 0x723
> +#define CSR_MHPMEVENT4H 0x724
> +#define CSR_MHPMEVENT5H 0x725
> +#define CSR_MHPMEVENT6H 0x726
> +#define CSR_MHPMEVENT7H 0x727
> +#define CSR_MHPMEVENT8H 0x728
> +#define CSR_MHPMEVENT9H 0x729
> +#define CSR_MHPMEVENT10H 0x72a
> +#define CSR_MHPMEVENT11H 0x72b
> +#define CSR_MHPMEVENT12H 0x72c
> +#define CSR_MHPMEVENT13H 0x72d
> +#define CSR_MHPMEVENT14H 0x72e
> +#define CSR_MHPMEVENT15H 0x72f
> +#define CSR_MHPMEVENT16H 0x730
> +#define CSR_MHPMEVENT17H 0x731
> +#define CSR_MHPMEVENT18H 0x732
> +#define CSR_MHPMEVENT19H 0x733
> +#define CSR_MHPMEVENT20H 0x734
> +#define CSR_MHPMEVENT21H 0x735
> +#define CSR_MHPMEVENT22H 0x736
> +#define CSR_MHPMEVENT23H 0x737
> +#define CSR_MHPMEVENT24H 0x738
> +#define CSR_MHPMEVENT25H 0x739
> +#define CSR_MHPMEVENT26H 0x73a
> +#define CSR_MHPMEVENT27H 0x73b
> +#define CSR_MHPMEVENT28H 0x73c
> +#define CSR_MHPMEVENT29H 0x73d
> +#define CSR_MHPMEVENT30H 0x73e
> +#define CSR_MHPMEVENT31H 0x73f
> +
> #define CSR_MHPMCOUNTER3H 0xb83
> #define CSR_MHPMCOUNTER4H 0xb84
> #define CSR_MHPMCOUNTER5H 0xb85
> @@ -461,6 +492,7 @@
> #define CSR_VSMTE 0x2c0
> #define CSR_VSPMMASK 0x2c1
> #define CSR_VSPMBASE 0x2c2
> +#define CSR_SCOUNTOVF 0xda0
>
> /* Crypto Extension */
> #define CSR_SEED 0x015
> @@ -638,6 +670,7 @@ typedef enum RISCVException {
> #define IRQ_VS_EXT 10
> #define IRQ_M_EXT 11
> #define IRQ_S_GEXT 12
> +#define IRQ_PMU_OVF 13
> #define IRQ_LOCAL_MAX 16
> #define IRQ_LOCAL_GUEST_MAX (TARGET_LONG_BITS - 1)
>
> @@ -655,11 +688,13 @@ typedef enum RISCVException {
> #define MIP_VSEIP (1 << IRQ_VS_EXT)
> #define MIP_MEIP (1 << IRQ_M_EXT)
> #define MIP_SGEIP (1 << IRQ_S_GEXT)
> +#define MIP_LCOFIP (1 << IRQ_PMU_OVF)
>
> /* sip masks */
> #define SIP_SSIP MIP_SSIP
> #define SIP_STIP MIP_STIP
> #define SIP_SEIP MIP_SEIP
> +#define SIP_LCOFIP MIP_LCOFIP
>
> /* MIE masks */
> #define MIE_SEIE (1 << IRQ_S_EXT)
> @@ -813,4 +848,24 @@ typedef enum RISCVException {
> #define SEED_OPST_WAIT (0b01 << 30)
> #define SEED_OPST_ES16 (0b10 << 30)
> #define SEED_OPST_DEAD (0b11 << 30)
> +/* PMU related bits */
> +#define MIE_LCOFIE (1 << IRQ_PMU_OVF)
> +
> +#define MHPMEVENT_BIT_OF BIT_ULL(63)
> +#define MHPMEVENTH_BIT_OF BIT(31)
> +#define MHPMEVENT_BIT_MINH BIT_ULL(62)
> +#define MHPMEVENTH_BIT_MINH BIT(30)
> +#define MHPMEVENT_BIT_SINH BIT_ULL(61)
> +#define MHPMEVENTH_BIT_SINH BIT(29)
> +#define MHPMEVENT_BIT_UINH BIT_ULL(60)
> +#define MHPMEVENTH_BIT_UINH BIT(28)
> +#define MHPMEVENT_BIT_VSINH BIT_ULL(59)
> +#define MHPMEVENTH_BIT_VSINH BIT(27)
> +#define MHPMEVENT_BIT_VUINH BIT_ULL(58)
> +#define MHPMEVENTH_BIT_VUINH BIT(26)
> +
> +#define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
> +#define MHPMEVENT_IDX_MASK 0xFFFFF
> +#define MHPMEVENT_SSCOF_RESVD 16
> +
> #endif
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index d65318dcc62d..2664ce265784 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> CPUState *cs = env_cpu(env);
> RISCVCPU *cpu = RISCV_CPU(cs);
> int ctr_index;
> - int base_csrno = CSR_HPMCOUNTER3;
> + int base_csrno = CSR_CYCLE;
> bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>
> if (rv32 && csrno >= CSR_CYCLEH) {
> @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> ctr_index = csrno - base_csrno;
>
> - if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> + if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> + (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> + goto skip_ext_pmu_check;
> + }
> +
> + if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
> /* No counter is enabled in PMU or the counter is out of range */
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> +skip_ext_pmu_check:
> +
> if (env->priv == PRV_S) {
> switch (csrno) {
> case CSR_CYCLE:
> @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - ctr_index = csrno - CSR_CYCLE;
> if (!get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
> @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - ctr_index = csrno - CSR_CYCLEH;
> if (!get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
> @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - ctr_index = csrno - CSR_CYCLE;
> if (!get_field(env->hcounteren, 1 << ctr_index) &&
> get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> }
> break;
> case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - ctr_index = csrno - CSR_CYCLEH;
> if (!get_field(env->hcounteren, 1 << ctr_index) &&
> get_field(env->mcounteren, 1 << ctr_index)) {
> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
> return mctr(env, csrno);
> }
>
> +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> +{
> + CPUState *cs = env_cpu(env);
> + RISCVCPU *cpu = RISCV_CPU(cs);
> +
> + if (!cpu->cfg.ext_sscofpmf) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException any(CPURISCVState *env, int csrno)
> {
> return RISCV_EXCP_NONE;
> @@ -663,9 +678,38 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
> static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
> {
> int evt_index = csrno - CSR_MCOUNTINHIBIT;
> + uint64_t mhpmevt_val = val;
>
> env->mhpmevent_val[evt_index] = val;
>
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmevt_val = mhpmevt_val | ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> + }
> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> + int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +
> + *val = env->mhpmeventh_val[evt_index];
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
> +{
> + int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> + uint64_t mhpmevth_val = val;
> + uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> +
> + mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> + env->mhpmeventh_val[evt_index] = val;
> +
> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> return RISCV_EXCP_NONE;
> }
>
> @@ -673,12 +717,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
> {
> int ctr_idx = csrno - CSR_MCYCLE;
> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t mhpmctr_val = val;
>
> counter->mhpmcounter_val = val;
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> counter->mhpmcounter_prev = get_ticks(false);
> - } else {
> + if (ctr_idx > 2) {
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmctr_val = mhpmctr_val |
> + ((uint64_t)counter->mhpmcounterh_val << 32);
> + }
> + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> + }
> + } else {
> /* Other counters can keep incrementing from the given value */
> counter->mhpmcounter_prev = val;
> }
> @@ -690,11 +742,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
> {
> int ctr_idx = csrno - CSR_MCYCLEH;
> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t mhpmctr_val = counter->mhpmcounter_val;
> + uint64_t mhpmctrh_val = val;
>
> counter->mhpmcounterh_val = val;
> + mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> counter->mhpmcounterh_prev = get_ticks(true);
> + if (ctr_idx > 2) {
> + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> + }
> } else {
> counter->mhpmcounterh_prev = val;
> }
> @@ -770,6 +828,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
> return riscv_pmu_read_ctr(env, val, true, ctr_index);
> }
>
> +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> + int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
> + int i;
> + *val = 0;
> + target_ulong *mhpm_evt_val;
> + uint64_t of_bit_mask;
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpm_evt_val = env->mhpmeventh_val;
> + of_bit_mask = MHPMEVENTH_BIT_OF;
> + } else {
> + mhpm_evt_val = env->mhpmevent_val;
> + of_bit_mask = MHPMEVENT_BIT_OF;
> + }
> +
> + for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
> + if ((get_field(env->mcounteren, BIT(i))) &&
> + (mhpm_evt_val[i] & of_bit_mask)) {
> + *val |= BIT(i);
> + }
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException read_time(CPURISCVState *env, int csrno,
> target_ulong *val)
> {
> @@ -799,7 +883,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
> /* Machine constants */
>
> #define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> -#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> +#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
> + MIP_LCOFIP))
>
> There seems to be a problem here. S_MODE_INTERRUPTS is used to build delegable_ints, which is then
>
> used not only in rmw_mip64 but also in rmw_mideleg64. So if we add MIP_LCOFIP here, this bit will
>
> also become writable in mideleg, which is not stated in the spec for sscofpmf.
>
Here is the snippet from the sscofpmf spec which says the counter
overflow interrupt can be delegated to S-mode:
"Generation of a "count overflow interrupt request" by an hpmcounter
sets the LCOFIP bit in the
mip/sip registers and sets the associated OF bit. The mideleg register
controls the delegation of
this interrupt to S-mode versus M-mode."
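In other words, whether a pending count-overflow interrupt is taken in S-mode or M-mode is decided by the mideleg bit, which can be modeled as (a hypothetical helper to illustrate the spec text, not QEMU code):

```c
#include <stdint.h>
#include <assert.h>

#define MIP_LCOFIP (1ULL << 13) /* bit position from the patch */

/* Returns 1 when a pending count-overflow interrupt would be taken in
 * S-mode, i.e. both mip.LCOFIP and mideleg.LCOFIP are set. */
static int lcofip_taken_in_smode(uint64_t mip, uint64_t mideleg)
{
    return (mip & MIP_LCOFIP) && (mideleg & MIP_LCOFIP);
}
```

So making LCOFIP delegable is what lets an S-mode kernel (e.g. Linux perf) field the overflow interrupt directly.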
> And if MIP_LCOFIP is not a delegable bit in mideleg, the following modification to 'sip_writable_mask' will not work.
>
> Regards,
>
> Weiwei Li
>
> #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
> #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
>
> @@ -840,7 +925,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
> static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
> SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
> SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
> +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
> + SIP_LCOFIP;
> static const target_ulong hip_writable_mask = MIP_VSSIP;
> static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
> static const target_ulong vsip_writable_mask = MIP_VSSIP;
> @@ -4005,6 +4091,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
> write_mhpmevent },
>
> + [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> + [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
> + write_mhpmeventh},
> +
> [CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_hpmcounterh },
> [CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_hpmcounterh },
> [CSR_HPMCOUNTER5H] = { "hpmcounter5h", ctr32, read_hpmcounterh },
> @@ -4093,5 +4238,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> write_mhpmcounterh },
> [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
> write_mhpmcounterh },
> + [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf },
> +
> #endif /* !CONFIG_USER_ONLY */
> };
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index dc182ca81119..33ef9b8e9908 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
> VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
> vmstate_pmu_ctr_state, PMUCTRState),
> VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> + VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> VMSTATE_UINTTL(env.sscratch, RISCVCPU),
> VMSTATE_UINTTL(env.mscratch, RISCVCPU),
> VMSTATE_UINT64(env.mfromhost, RISCVCPU),
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 000fe8da45ef..34096941c0ce 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -19,14 +19,367 @@
> #include "qemu/osdep.h"
> #include "cpu.h"
> #include "pmu.h"
> +#include "sysemu/cpu-timers.h"
> +
> +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> +#define MAKE_32BIT_MASK(shift, length) \
> + (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> +
> +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> + !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
> + return false;
> + } else {
> + return true;
> + }
> +}
> +
> +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> +
> + if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
> + !get_field(env->mcountinhibit, BIT(ctr_idx))) {
> + return true;
> + } else {
> + return false;
> + }
> +}
> +
> +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> + target_ulong max_val = UINT32_MAX;
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + bool virt_on = riscv_cpu_virt_enabled(env);
> +
> + /* Privilege mode filtering */
> + if ((env->priv == PRV_M &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
> + (env->priv == PRV_S && virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
> + (env->priv == PRV_U && virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
> + (env->priv == PRV_S && !virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
> + (env->priv == PRV_U && !virt_on &&
> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
> + return 0;
> + }
> +
> + /* Handle the overflow scenario */
> + if (counter->mhpmcounter_val == max_val) {
> + if (counter->mhpmcounterh_val == max_val) {
> + counter->mhpmcounter_val = 0;
> + counter->mhpmcounterh_val = 0;
> + /* Generate interrupt only if OF bit is clear */
> + if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
> + env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + } else {
> + counter->mhpmcounterh_val++;
> + }
> + } else {
> + counter->mhpmcounter_val++;
> + }
> +
> + return 0;
> +}
> +
> +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> + CPURISCVState *env = &cpu->env;
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> + uint64_t max_val = UINT64_MAX;
> + bool virt_on = riscv_cpu_virt_enabled(env);
> +
> + /* Privilege mode filtering */
> + if ((env->priv == PRV_M &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
> + (env->priv == PRV_S && virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
> + (env->priv == PRV_U && virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
> + (env->priv == PRV_S && !virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
> + (env->priv == PRV_U && !virt_on &&
> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
> + return 0;
> + }
> +
> + /* Handle the overflow scenario */
> + if (counter->mhpmcounter_val == max_val) {
> + counter->mhpmcounter_val = 0;
> + /* Generate interrupt only if OF bit is clear */
> + if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
> + env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + } else {
> + counter->mhpmcounter_val++;
> + }
> + return 0;
> +}
> +
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
> +{
> + uint32_t ctr_idx;
> + int ret;
> + CPURISCVState *env = &cpu->env;
> + gpointer value;
> +
> + value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx));
> + if (!value) {
> + return -1;
> + }
> +
> + ctr_idx = GPOINTER_TO_UINT(value);
> + if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
> + get_field(env->mcountinhibit, BIT(ctr_idx))) {
> + return -1;
> + }
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
> + } else {
> + ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
> + }
> +
> + return ret;
> +}
>
> bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> uint32_t target_ctr)
> {
> - return (target_ctr == 0) ? true : false;
> + RISCVCPU *cpu;
> + uint32_t event_idx;
> + uint32_t ctr_idx;
> +
> + /* Fixed instret counter */
> + if (target_ctr == 2) {
> + return true;
> + }
> +
> + cpu = RISCV_CPU(env_cpu(env));
> + event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx)));
> + if (!ctr_idx) {
> + return false;
> + }
> +
> + return target_ctr == ctr_idx ? true : false;
> }
>
> bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
> {
> - return (target_ctr == 2) ? true : false;
> + RISCVCPU *cpu;
> + uint32_t event_idx;
> + uint32_t ctr_idx;
> +
> + /* Fixed mcycle counter */
> + if (target_ctr == 0) {
> + return true;
> + }
> +
> + cpu = RISCV_CPU(env_cpu(env));
> + event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx)));
> +
> + /* Counter zero is not used for event_ctr_map */
> + if (!ctr_idx) {
> + return false;
> + }
> +
> + return (target_ctr == ctr_idx) ? true : false;
> +}
> +
> +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> + gpointer udata)
> +{
> + return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
> +}
> +
> +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> +{
> + int64_t ret = 0;
> +
> + if (icount_enabled()) {
> + ret = icount_to_ns(value);
> + } else {
> + ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> + }
> +
> + return ret;
> +}
> +
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx)
> +{
> + uint32_t event_idx;
> + RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +
> + if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> + return -1;
> + }
> +
> + /**
> + * Expected mhpmevent value is zero for reset case. Remove the current
> + * mapping.
> + */
> + if (!value) {
> + g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> + pmu_remove_event_map,
> + GUINT_TO_POINTER(ctr_idx));
> + return 0;
> + }
> +
> + event_idx = value & MHPMEVENT_IDX_MASK;
> + if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(event_idx))) {
> + return 0;
> + }
> +
> + switch (event_idx) {
> + case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> + case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> + case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> + case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> + case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> + break;
> + default:
> + /* We don't support any raw events right now */
> + return -1;
> + }
> + g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
> + GUINT_TO_POINTER(ctr_idx));
> +
> + return 0;
> +}
> +
> +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> + enum riscv_pmu_event_idx evt_idx)
> +{
> + uint32_t ctr_idx;
> + CPURISCVState *env = &cpu->env;
> + PMUCTRState *counter;
> + target_ulong *mhpmevent_val;
> + uint64_t of_bit_mask;
> + int64_t irq_trigger_at;
> +
> + if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> + evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> + return;
> + }
> +
> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> + GUINT_TO_POINTER(evt_idx)));
> + if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> + return;
> + }
> +
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> + of_bit_mask = MHPMEVENTH_BIT_OF;
> + } else {
> + mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> + of_bit_mask = MHPMEVENT_BIT_OF;
> + }
> +
> + counter = &env->pmu_ctrs[ctr_idx];
> + if (counter->irq_overflow_left > 0) {
> + irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> + counter->irq_overflow_left;
> + timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> + counter->irq_overflow_left = 0;
> + return;
> + }
> +
> + if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> + /* Generate interrupt only if OF bit is clear */
> + if (!(*mhpmevent_val & of_bit_mask)) {
> + *mhpmevent_val |= of_bit_mask;
> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> + }
> + }
> +}
> +
> +/* Timer callback for instret and cycle counter overflow */
> +void riscv_pmu_timer_cb(void *priv)
> +{
> + RISCVCPU *cpu = priv;
> +
> + /* Timer event was triggered only for these events */
> + pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> + pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> +}
> +
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
> +{
> + uint64_t overflow_delta, overflow_at;
> + int64_t overflow_ns, overflow_left = 0;
> + RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +
> + if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
> + return -1;
> + }
> +
> + if (value) {
> + overflow_delta = UINT64_MAX - value + 1;
> + } else {
> + overflow_delta = UINT64_MAX;
> + }
> +
> + /**
> + * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
> + * Compute the leftover and save it so that it can be reprogrammed again
> + * when timer expires.
> + */
> + if (overflow_delta > INT64_MAX) {
> + overflow_left = overflow_delta - INT64_MAX;
> + }
> +
> + if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> + riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> + overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> + overflow_left = pmu_icount_ticks_to_ns(overflow_left);
> + } else {
> + return -1;
> + }
> + overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
> +
> + if (overflow_at > INT64_MAX) {
> + overflow_left += overflow_at - INT64_MAX;
> + counter->irq_overflow_left = overflow_left;
> + overflow_at = INT64_MAX;
> + }
> + timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> +
> + return 0;
> +}
> +
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> +{
> + if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> + return -1;
> + }
> +
> + cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
> + if (!cpu->pmu_event_ctr_map) {
> + /* PMU support can not be enabled */
> + qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> + cpu->cfg.pmu_num = 0;
> + return -1;
> + }
> +
> + /* Create a bitmask of available programmable counters */
> + cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> +
> + return 0;
> }
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 58a5bc3a4089..036653627f78 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> uint32_t target_ctr);
> bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
> uint32_t target_ctr);
> +void riscv_pmu_timer_cb(void *priv);
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx);
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> + uint32_t ctr_idx);
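As an aside, the deadline math at the end of riscv_pmu_setup_timer above
is easy to get wrong, so here is a standalone model of just the clamping
step. This is a sketch for discussion (it assumes 1 counter tick == 1 ns
and uses illustrative names), not the QEMU code itself:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone model of the timer-deadline math in riscv_pmu_setup_timer:
 * QEMU timers take an int64_t deadline, while the counter's distance to
 * overflow is a uint64_t, so anything past INT64_MAX is carried over in
 * irq_overflow_left and reprogrammed when the timer fires. */
typedef struct {
    int64_t deadline;           /* what timer_mod_anticipate_ns() gets */
    uint64_t irq_overflow_left; /* remainder to reprogram later */
} timer_plan;

static timer_plan plan_overflow_timer(uint64_t ctr_val, int64_t now_ns)
{
    timer_plan p = { 0, 0 };
    /* Distance from the written counter value to the wraparound. */
    uint64_t delta = ctr_val ? UINT64_MAX - ctr_val + 1 : UINT64_MAX;
    /* Simplification: 1 tick == 1 ns (no icount scaling here). */
    uint64_t at = (uint64_t)now_ns + delta;

    if (at > INT64_MAX) {
        p.irq_overflow_left = at - INT64_MAX;
        at = INT64_MAX;
    }
    p.deadline = (int64_t)at;
    return p;
}
```

When the timer fires, pmu_timer_trigger_irq sees the non-zero
irq_overflow_left and re-arms the timer for the remainder before raising
LCOFIP, matching the carry-over handling in the patch.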
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 05/12] target/riscv: Implement mcountinhibit CSR
2022-07-04 15:31 ` Weiwei Li
@ 2022-07-05 7:47 ` Atish Kumar Patra
0 siblings, 0 replies; 34+ messages in thread
From: Atish Kumar Patra @ 2022-07-05 7:47 UTC (permalink / raw)
To: Weiwei Li
Cc: qemu-devel@nongnu.org Developers, Bin Meng, Alistair Francis,
Bin Meng, Palmer Dabbelt, open list:RISC-V, Frank Chang
On Mon, Jul 4, 2022 at 8:31 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>
>
> On 2022/6/21 7:15 AM, Atish Patra wrote:
> > From: Atish Patra <atish.patra@wdc.com>
> >
> > As per the privilege specification v1.11, mcountinhibit allows software to
> > start/stop a PMU counter selectively.
> >
> > Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> > target/riscv/cpu.h | 2 ++
> > target/riscv/cpu_bits.h | 4 ++++
> > target/riscv/csr.c | 25 +++++++++++++++++++++++++
> > target/riscv/machine.c | 1 +
> > 4 files changed, 32 insertions(+)
> >
> > diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> > index ffee54ea5c27..0a916db9f614 100644
> > --- a/target/riscv/cpu.h
> > +++ b/target/riscv/cpu.h
> > @@ -275,6 +275,8 @@ struct CPUArchState {
> > target_ulong scounteren;
> > target_ulong mcounteren;
> >
> > + target_ulong mcountinhibit;
> > +
> > target_ulong sscratch;
> > target_ulong mscratch;
> >
> > diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> > index 4d04b20d064e..b3f7fa713000 100644
> > --- a/target/riscv/cpu_bits.h
> > +++ b/target/riscv/cpu_bits.h
> > @@ -367,6 +367,10 @@
> > #define CSR_MHPMCOUNTER29 0xb1d
> > #define CSR_MHPMCOUNTER30 0xb1e
> > #define CSR_MHPMCOUNTER31 0xb1f
> > +
> > +/* Machine counter-inhibit register */
> > +#define CSR_MCOUNTINHIBIT 0x320
> > +
> > #define CSR_MHPMEVENT3 0x323
> > #define CSR_MHPMEVENT4 0x324
> > #define CSR_MHPMEVENT5 0x325
> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> > index b4a8e15f498f..94d39a4ce1c5 100644
> > --- a/target/riscv/csr.c
> > +++ b/target/riscv/csr.c
> > @@ -1475,6 +1475,28 @@ static RISCVException write_mtvec(CPURISCVState *env, int csrno,
> > return RISCV_EXCP_NONE;
> > }
> >
> > +static RISCVException read_mcountinhibit(CPURISCVState *env, int csrno,
> > + target_ulong *val)
> > +{
> > + if (env->priv_ver < PRIV_VERSION_1_11_0) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
>
> It seems this can be done by adding .min_priv_ver = PRIV_VERSION_1_11_0 to
> the csr_ops table.
>
Yes. This can be dropped from both read/write_mcountinhibit with min_priv_ver.
Thanks.
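To illustrate, roughly (the struct, field, and helper names here are a
sketch of the idea, not necessarily the exact QEMU definitions):

```c
#include <assert.h>

#define PRIV_VERSION_1_10_0 0
#define PRIV_VERSION_1_11_0 1

/* Model of the suggestion: each csr_ops entry carries the minimum
 * privileged-spec version, and one generic check replaces the
 * per-handler "if (env->priv_ver < PRIV_VERSION_1_11_0)" tests. */
typedef struct {
    const char *name;
    int min_priv_ver;
} csr_op;

static int csr_accessible(const csr_op *op, int cpu_priv_ver)
{
    /* Returning 0 models raising RISCV_EXCP_ILLEGAL_INST. */
    return cpu_priv_ver >= op->min_priv_ver;
}

static const csr_op mcountinhibit_op = {
    .name = "mcountinhibit",
    .min_priv_ver = PRIV_VERSION_1_11_0,
};
```

With that in place, read_mcountinhibit and write_mcountinhibit no longer
need their own priv_ver checks.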
> Regards,
>
> Weiwei Li
>
> > + *val = env->mcountinhibit;
> > + return RISCV_EXCP_NONE;
> > +}
> > +
> > +static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
> > + target_ulong val)
> > +{
> > + if (env->priv_ver < PRIV_VERSION_1_11_0) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
> > + env->mcountinhibit = val;
> > + return RISCV_EXCP_NONE;
> > +}
> > +
> > static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
> > target_ulong *val)
> > {
> > @@ -3745,6 +3767,9 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> > [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
> > [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
> >
> > + [CSR_MCOUNTINHIBIT] = { "mcountinhibit", any, read_mcountinhibit,
> > + write_mcountinhibit },
> > +
> > [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
> > [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
> > [CSR_MHPMEVENT5] = { "mhpmevent5", any, read_zero },
> > diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> > index 2a437b29a1ce..87cd55bfd3a7 100644
> > --- a/target/riscv/machine.c
> > +++ b/target/riscv/machine.c
> > @@ -330,6 +330,7 @@ const VMStateDescription vmstate_riscv_cpu = {
> > VMSTATE_UINTTL(env.siselect, RISCVCPU),
> > VMSTATE_UINTTL(env.scounteren, RISCVCPU),
> > VMSTATE_UINTTL(env.mcounteren, RISCVCPU),
> > + VMSTATE_UINTTL(env.mcountinhibit, RISCVCPU),
> > VMSTATE_UINTTL(env.sscratch, RISCVCPU),
> > VMSTATE_UINTTL(env.mscratch, RISCVCPU),
> > VMSTATE_UINT64(env.mfromhost, RISCVCPU),
>
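For completeness, the semantics the patch implements are simple
per-counter gating: bit i of mcountinhibit set means counter i does not
increment. A minimal standalone sketch (the helper name is illustrative;
QEMU expresses this via get_field(env->mcountinhibit, BIT(ctr_idx))):

```c
#include <assert.h>
#include <stdint.h>

/* Returns non-zero if counter ctr_idx is counting, i.e. its
 * mcountinhibit bit is clear. */
static int counter_counting(uint32_t mcountinhibit, uint32_t ctr_idx)
{
    return !(mcountinhibit & (1u << ctr_idx));
}
```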
* Re: [PATCH v10 08/12] target/riscv: Add sscofpmf extension support
2022-07-05 7:36 ` Atish Kumar Patra
@ 2022-07-05 7:48 ` Weiwei Li
0 siblings, 0 replies; 34+ messages in thread
From: Weiwei Li @ 2022-07-05 7:48 UTC (permalink / raw)
To: Atish Kumar Patra
Cc: qemu-devel@nongnu.org Developers, Alistair Francis, Bin Meng,
Palmer Dabbelt, open list:RISC-V, Frank Chang
On 2022/7/5 3:36 PM, Atish Kumar Patra wrote:
> On Mon, Jul 4, 2022 at 6:31 PM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>>
>> On 2022/6/21 7:15 AM, Atish Patra wrote:
>>
>> The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
>> and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
>> extension allows perf to handle overflow interrupts and filtering
>> support. This patch provides a framework for programmable
>> counters to leverage the extension. As the extension doesn't have any
>> provision for the overflow bit for fixed counters, the fixed events
>> can also be monitored using programmable counters. The underlying
>> counters for cycle and instruction counters are always running. Thus,
>> a separate timer device is programmed to handle the overflow.
>>
>> Signed-off-by: Atish Patra <atish.patra@wdc.com>
>> Signed-off-by: Atish Patra <atishp@rivosinc.com>
>> ---
>> target/riscv/cpu.c | 11 ++
>> target/riscv/cpu.h | 25 +++
>> target/riscv/cpu_bits.h | 55 +++++++
>> target/riscv/csr.c | 165 ++++++++++++++++++-
>> target/riscv/machine.c | 1 +
>> target/riscv/pmu.c | 357 +++++++++++++++++++++++++++++++++++++++-
>> target/riscv/pmu.h | 7 +
>> 7 files changed, 610 insertions(+), 11 deletions(-)
>>
>> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
>> index d12c6dc630ca..7d9e2aca12a9 100644
>> --- a/target/riscv/cpu.c
>> +++ b/target/riscv/cpu.c
>> @@ -22,6 +22,7 @@
>> #include "qemu/ctype.h"
>> #include "qemu/log.h"
>> #include "cpu.h"
>> +#include "pmu.h"
>> #include "internals.h"
>> #include "exec/exec-all.h"
>> #include "qapi/error.h"
>> @@ -775,6 +776,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
>> set_misa(env, env->misa_mxl, ext);
>> }
>>
>> +#ifndef CONFIG_USER_ONLY
>> + if (cpu->cfg.pmu_num) {
>> + if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
>> + cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
>> + riscv_pmu_timer_cb, cpu);
>> + }
>> + }
>> +#endif
>> +
>> riscv_cpu_register_gdb_regs_for_features(cs);
>>
>> qemu_init_vcpu(cs);
>> @@ -879,6 +889,7 @@ static Property riscv_cpu_extensions[] = {
>> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
>> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
>> DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
>> + DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
>> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
>> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
>> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
>> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
>> index 5c7acc055ac9..2222db193c3d 100644
>> --- a/target/riscv/cpu.h
>> +++ b/target/riscv/cpu.h
>> @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
>> /* Snapshot value of a counter in RV32 */
>> target_ulong mhpmcounterh_prev;
>> bool started;
>> + /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
>> + target_ulong irq_overflow_left;
>> } PMUCTRState;
>>
>> struct CPUArchState {
>> @@ -297,6 +299,9 @@ struct CPUArchState {
>> /* PMU event selector configured values. First three are unused */
>> target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
>>
>> + /* PMU event selector configured values for RV32 */
>> + target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
>> +
>> target_ulong sscratch;
>> target_ulong mscratch;
>>
>> @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
>> bool ext_zve32f;
>> bool ext_zve64f;
>> bool ext_zmmul;
>> + bool ext_sscofpmf;
>> bool rvv_ta_all_1s;
>>
>> uint32_t mvendorid;
>> @@ -479,6 +485,12 @@ struct ArchCPU {
>>
>> /* Configuration Settings */
>> RISCVCPUConfig cfg;
>> +
>> + QEMUTimer *pmu_timer;
>> + /* A bitmask of Available programmable counters */
>> + uint32_t pmu_avail_ctrs;
>> + /* Mapping of events to counters */
>> + GHashTable *pmu_event_ctr_map;
>> };
>>
>> static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
>> @@ -738,6 +750,19 @@ enum {
>> CSR_TABLE_SIZE = 0x1000
>> };
>>
>> +/**
>> + * The event IDs are encoded based on the encoding specified in the
>> + * SBI specification v0.3
>> + */
>> +
>> +enum riscv_pmu_event_idx {
>> + RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
>> + RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
>> + RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
>> + RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
>> + RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
>> +};
>> +
>> /* CSR function table */
>> extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
>>
>> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
>> index b3f7fa713000..d94abefdaa0f 100644
>> --- a/target/riscv/cpu_bits.h
>> +++ b/target/riscv/cpu_bits.h
>> @@ -400,6 +400,37 @@
>> #define CSR_MHPMEVENT29 0x33d
>> #define CSR_MHPMEVENT30 0x33e
>> #define CSR_MHPMEVENT31 0x33f
>> +
>> +#define CSR_MHPMEVENT3H 0x723
>> +#define CSR_MHPMEVENT4H 0x724
>> +#define CSR_MHPMEVENT5H 0x725
>> +#define CSR_MHPMEVENT6H 0x726
>> +#define CSR_MHPMEVENT7H 0x727
>> +#define CSR_MHPMEVENT8H 0x728
>> +#define CSR_MHPMEVENT9H 0x729
>> +#define CSR_MHPMEVENT10H 0x72a
>> +#define CSR_MHPMEVENT11H 0x72b
>> +#define CSR_MHPMEVENT12H 0x72c
>> +#define CSR_MHPMEVENT13H 0x72d
>> +#define CSR_MHPMEVENT14H 0x72e
>> +#define CSR_MHPMEVENT15H 0x72f
>> +#define CSR_MHPMEVENT16H 0x730
>> +#define CSR_MHPMEVENT17H 0x731
>> +#define CSR_MHPMEVENT18H 0x732
>> +#define CSR_MHPMEVENT19H 0x733
>> +#define CSR_MHPMEVENT20H 0x734
>> +#define CSR_MHPMEVENT21H 0x735
>> +#define CSR_MHPMEVENT22H 0x736
>> +#define CSR_MHPMEVENT23H 0x737
>> +#define CSR_MHPMEVENT24H 0x738
>> +#define CSR_MHPMEVENT25H 0x739
>> +#define CSR_MHPMEVENT26H 0x73a
>> +#define CSR_MHPMEVENT27H 0x73b
>> +#define CSR_MHPMEVENT28H 0x73c
>> +#define CSR_MHPMEVENT29H 0x73d
>> +#define CSR_MHPMEVENT30H 0x73e
>> +#define CSR_MHPMEVENT31H 0x73f
>> +
>> #define CSR_MHPMCOUNTER3H 0xb83
>> #define CSR_MHPMCOUNTER4H 0xb84
>> #define CSR_MHPMCOUNTER5H 0xb85
>> @@ -461,6 +492,7 @@
>> #define CSR_VSMTE 0x2c0
>> #define CSR_VSPMMASK 0x2c1
>> #define CSR_VSPMBASE 0x2c2
>> +#define CSR_SCOUNTOVF 0xda0
>>
>> /* Crypto Extension */
>> #define CSR_SEED 0x015
>> @@ -638,6 +670,7 @@ typedef enum RISCVException {
>> #define IRQ_VS_EXT 10
>> #define IRQ_M_EXT 11
>> #define IRQ_S_GEXT 12
>> +#define IRQ_PMU_OVF 13
>> #define IRQ_LOCAL_MAX 16
>> #define IRQ_LOCAL_GUEST_MAX (TARGET_LONG_BITS - 1)
>>
>> @@ -655,11 +688,13 @@ typedef enum RISCVException {
>> #define MIP_VSEIP (1 << IRQ_VS_EXT)
>> #define MIP_MEIP (1 << IRQ_M_EXT)
>> #define MIP_SGEIP (1 << IRQ_S_GEXT)
>> +#define MIP_LCOFIP (1 << IRQ_PMU_OVF)
>>
>> /* sip masks */
>> #define SIP_SSIP MIP_SSIP
>> #define SIP_STIP MIP_STIP
>> #define SIP_SEIP MIP_SEIP
>> +#define SIP_LCOFIP MIP_LCOFIP
>>
>> /* MIE masks */
>> #define MIE_SEIE (1 << IRQ_S_EXT)
>> @@ -813,4 +848,24 @@ typedef enum RISCVException {
>> #define SEED_OPST_WAIT (0b01 << 30)
>> #define SEED_OPST_ES16 (0b10 << 30)
>> #define SEED_OPST_DEAD (0b11 << 30)
>> +/* PMU related bits */
>> +#define MIE_LCOFIE (1 << IRQ_PMU_OVF)
>> +
>> +#define MHPMEVENT_BIT_OF BIT_ULL(63)
>> +#define MHPMEVENTH_BIT_OF BIT(31)
>> +#define MHPMEVENT_BIT_MINH BIT_ULL(62)
>> +#define MHPMEVENTH_BIT_MINH BIT(30)
>> +#define MHPMEVENT_BIT_SINH BIT_ULL(61)
>> +#define MHPMEVENTH_BIT_SINH BIT(29)
>> +#define MHPMEVENT_BIT_UINH BIT_ULL(60)
>> +#define MHPMEVENTH_BIT_UINH BIT(28)
>> +#define MHPMEVENT_BIT_VSINH BIT_ULL(59)
>> +#define MHPMEVENTH_BIT_VSINH BIT(27)
>> +#define MHPMEVENT_BIT_VUINH BIT_ULL(58)
>> +#define MHPMEVENTH_BIT_VUINH BIT(26)
>> +
>> +#define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
>> +#define MHPMEVENT_IDX_MASK 0xFFFFF
>> +#define MHPMEVENT_SSCOF_RESVD 16
>> +
>> #endif
>> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
>> index d65318dcc62d..2664ce265784 100644
>> --- a/target/riscv/csr.c
>> +++ b/target/riscv/csr.c
>> @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> CPUState *cs = env_cpu(env);
>> RISCVCPU *cpu = RISCV_CPU(cs);
>> int ctr_index;
>> - int base_csrno = CSR_HPMCOUNTER3;
>> + int base_csrno = CSR_CYCLE;
>> bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>>
>> if (rv32 && csrno >= CSR_CYCLEH) {
>> @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> }
>> ctr_index = csrno - base_csrno;
>>
>> - if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
>> + if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
>> + (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
>> + goto skip_ext_pmu_check;
>> + }
>> +
>> + if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
>> /* No counter is enabled in PMU or the counter is out of range */
>> return RISCV_EXCP_ILLEGAL_INST;
>> }
>>
>> +skip_ext_pmu_check:
>> +
>> if (env->priv == PRV_S) {
>> switch (csrno) {
>> case CSR_CYCLE:
>> @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> }
>> break;
>> case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
>> - ctr_index = csrno - CSR_CYCLE;
>> if (!get_field(env->mcounteren, 1 << ctr_index)) {
>> return RISCV_EXCP_ILLEGAL_INST;
>> }
>> @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> }
>> break;
>> case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
>> - ctr_index = csrno - CSR_CYCLEH;
>> if (!get_field(env->mcounteren, 1 << ctr_index)) {
>> return RISCV_EXCP_ILLEGAL_INST;
>> }
>> @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> }
>> break;
>> case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
>> - ctr_index = csrno - CSR_CYCLE;
>> if (!get_field(env->hcounteren, 1 << ctr_index) &&
>> get_field(env->mcounteren, 1 << ctr_index)) {
>> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> }
>> break;
>> case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
>> - ctr_index = csrno - CSR_CYCLEH;
>> if (!get_field(env->hcounteren, 1 << ctr_index) &&
>> get_field(env->mcounteren, 1 << ctr_index)) {
>> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
>> return mctr(env, csrno);
>> }
>>
>> +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
>> +{
>> + CPUState *cs = env_cpu(env);
>> + RISCVCPU *cpu = RISCV_CPU(cs);
>> +
>> + if (!cpu->cfg.ext_sscofpmf) {
>> + return RISCV_EXCP_ILLEGAL_INST;
>> + }
>> +
>> + return RISCV_EXCP_NONE;
>> +}
>> +
>> static RISCVException any(CPURISCVState *env, int csrno)
>> {
>> return RISCV_EXCP_NONE;
>> @@ -663,9 +678,38 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
>> static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
>> {
>> int evt_index = csrno - CSR_MCOUNTINHIBIT;
>> + uint64_t mhpmevt_val = val;
>>
>> env->mhpmevent_val[evt_index] = val;
>>
>> + if (riscv_cpu_mxl(env) == MXL_RV32) {
>> + mhpmevt_val = mhpmevt_val | ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
>> + }
>> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
>> +
>> + return RISCV_EXCP_NONE;
>> +}
>> +
>> +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
>> +{
>> + int evt_index = csrno - CSR_MHPMEVENT3H + 3;
>> +
>> + *val = env->mhpmeventh_val[evt_index];
>> +
>> + return RISCV_EXCP_NONE;
>> +}
>> +
>> +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
>> +{
>> + int evt_index = csrno - CSR_MHPMEVENT3H + 3;
>> + uint64_t mhpmevth_val = val;
>> + uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
>> +
>> + mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
>> + env->mhpmeventh_val[evt_index] = val;
>> +
>> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
>> +
>> return RISCV_EXCP_NONE;
>> }
>>
>> @@ -673,12 +717,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
>> {
>> int ctr_idx = csrno - CSR_MCYCLE;
>> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
>> + uint64_t mhpmctr_val = val;
>>
>> counter->mhpmcounter_val = val;
>> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>> counter->mhpmcounter_prev = get_ticks(false);
>> - } else {
>> + if (ctr_idx > 2) {
>> + if (riscv_cpu_mxl(env) == MXL_RV32) {
>> + mhpmctr_val = mhpmctr_val |
>> + ((uint64_t)counter->mhpmcounterh_val << 32);
>> + }
>> + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
>> + }
>> + } else {
>> /* Other counters can keep incrementing from the given value */
>> counter->mhpmcounter_prev = val;
>> }
>> @@ -690,11 +742,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
>> {
>> int ctr_idx = csrno - CSR_MCYCLEH;
>> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
>> + uint64_t mhpmctr_val = counter->mhpmcounter_val;
>> + uint64_t mhpmctrh_val = val;
>>
>> counter->mhpmcounterh_val = val;
>> + mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
>> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>> counter->mhpmcounterh_prev = get_ticks(true);
>> + if (ctr_idx > 2) {
>> + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
>> + }
>> } else {
>> counter->mhpmcounterh_prev = val;
>> }
>> @@ -770,6 +828,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
>> return riscv_pmu_read_ctr(env, val, true, ctr_index);
>> }
>>
>> +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
>> +{
>> + int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
>> + int i;
>> + *val = 0;
>> + target_ulong *mhpm_evt_val;
>> + uint64_t of_bit_mask;
>> +
>> + if (riscv_cpu_mxl(env) == MXL_RV32) {
>> + mhpm_evt_val = env->mhpmeventh_val;
>> + of_bit_mask = MHPMEVENTH_BIT_OF;
>> + } else {
>> + mhpm_evt_val = env->mhpmevent_val;
>> + of_bit_mask = MHPMEVENT_BIT_OF;
>> + }
>> +
>> + for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
>> + if ((get_field(env->mcounteren, BIT(i))) &&
>> + (mhpm_evt_val[i] & of_bit_mask)) {
>> + *val |= BIT(i);
>> + }
>> + }
>> +
>> + return RISCV_EXCP_NONE;
>> +}
>> +
>> static RISCVException read_time(CPURISCVState *env, int csrno,
>> target_ulong *val)
>> {
>> @@ -799,7 +883,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
>> /* Machine constants */
>>
>> #define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
>> -#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
>> +#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
>> + MIP_LCOFIP))
>>
>> There seems to be a problem here. S_MODE_INTERRUPTS will be used in delegable_ints, which is then
>>
>> used not only in rmw_mip64 but also in rmw_mideleg64. So if we add MIP_LCOFIP here, this bit will
>>
>> also be added into mideleg, which is not stated in the sscofpmf spec.
>>
> Here is the snippet from the sscofpmf spec which says counter overflow
> interrupt can be delegated to S-mode.
>
> "Generation of a "count overflow interrupt request" by an hpmcounter
> sets the LCOFIP bit in the
> mip/sip registers and sets the associated OF bit. The mideleg register
> controls the delegation of
> this interrupt to S-mode versus M-mode."
OK. Thanks.
Regards,
Weiwei Li
>> And if MIP_LCOFIP is not a delegable bit in mideleg, the following modification to 'sip_writable_mask' will not work.
>>
>> Regards,
>>
>> Weiwei Li
>>
>> #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
>> #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
>>
>> @@ -840,7 +925,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
>> static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
>> SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
>> SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
>> -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
>> +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
>> + SIP_LCOFIP;
>> static const target_ulong hip_writable_mask = MIP_VSSIP;
>> static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
>> static const target_ulong vsip_writable_mask = MIP_VSSIP;
>> @@ -4005,6 +4091,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>> [CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
>> write_mhpmevent },
>>
>> + [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> + [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
>> + write_mhpmeventh},
>> +
>> [CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_hpmcounterh },
>> [CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_hpmcounterh },
>> [CSR_HPMCOUNTER5H] = { "hpmcounter5h", ctr32, read_hpmcounterh },
>> @@ -4093,5 +4238,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>> write_mhpmcounterh },
>> [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
>> write_mhpmcounterh },
>> + [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf },
>> +
>> #endif /* !CONFIG_USER_ONLY */
>> };
>> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
>> index dc182ca81119..33ef9b8e9908 100644
>> --- a/target/riscv/machine.c
>> +++ b/target/riscv/machine.c
>> @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
>> VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
>> vmstate_pmu_ctr_state, PMUCTRState),
>> VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
>> + VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
>> VMSTATE_UINTTL(env.sscratch, RISCVCPU),
>> VMSTATE_UINTTL(env.mscratch, RISCVCPU),
>> VMSTATE_UINT64(env.mfromhost, RISCVCPU),
>> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
>> index 000fe8da45ef..34096941c0ce 100644
>> --- a/target/riscv/pmu.c
>> +++ b/target/riscv/pmu.c
>> @@ -19,14 +19,367 @@
>> #include "qemu/osdep.h"
>> #include "cpu.h"
>> #include "pmu.h"
>> +#include "sysemu/cpu-timers.h"
>> +
>> +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
>> +#define MAKE_32BIT_MASK(shift, length) \
>> + (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
>> +
>> +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
>> +{
>> + if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
>> + !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
>> + return false;
>> + } else {
>> + return true;
>> + }
>> +}
>> +
>> +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
>> +{
>> + CPURISCVState *env = &cpu->env;
>> +
>> + if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
>> + !get_field(env->mcountinhibit, BIT(ctr_idx))) {
>> + return true;
>> + } else {
>> + return false;
>> + }
>> +}
>> +
>> +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
>> +{
>> + CPURISCVState *env = &cpu->env;
>> + target_ulong max_val = UINT32_MAX;
>> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
>> + bool virt_on = riscv_cpu_virt_enabled(env);
>> +
>> + /* Privilege mode filtering */
>> + if ((env->priv == PRV_M &&
>> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
>> + (env->priv == PRV_S && virt_on &&
>> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
>> + (env->priv == PRV_U && virt_on &&
>> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
>> + (env->priv == PRV_S && !virt_on &&
>> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
>> + (env->priv == PRV_U && !virt_on &&
>> + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
>> + return 0;
>> + }
>> +
>> + /* Handle the overflow scenario */
>> + if (counter->mhpmcounter_val == max_val) {
>> + if (counter->mhpmcounterh_val == max_val) {
>> + counter->mhpmcounter_val = 0;
>> + counter->mhpmcounterh_val = 0;
>> + /* Generate interrupt only if OF bit is clear */
>> + if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
>> + env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
>> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
>> + }
>> + } else {
>> + counter->mhpmcounterh_val++;
>> + }
>> + } else {
>> + counter->mhpmcounter_val++;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
>> +{
>> + CPURISCVState *env = &cpu->env;
>> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
>> + uint64_t max_val = UINT64_MAX;
>> + bool virt_on = riscv_cpu_virt_enabled(env);
>> +
>> + /* Privilege mode filtering */
>> + if ((env->priv == PRV_M &&
>> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
>> + (env->priv == PRV_S && virt_on &&
>> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
>> + (env->priv == PRV_U && virt_on &&
>> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
>> + (env->priv == PRV_S && !virt_on &&
>> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
>> + (env->priv == PRV_U && !virt_on &&
>> + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
>> + return 0;
>> + }
>> +
>> + /* Handle the overflow scenario */
>> + if (counter->mhpmcounter_val == max_val) {
>> + counter->mhpmcounter_val = 0;
>> + /* Generate interrupt only if OF bit is clear */
>> + if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
>> + env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
>> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
>> + }
>> + } else {
>> + counter->mhpmcounter_val++;
>> + }
>> + return 0;
>> +}
>> +
>> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
>> +{
>> + uint32_t ctr_idx;
>> + int ret;
>> + CPURISCVState *env = &cpu->env;
>> + gpointer value;
>> +
>> + value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
>> + GUINT_TO_POINTER(event_idx));
>> + if (!value) {
>> + return -1;
>> + }
>> +
>> + ctr_idx = GPOINTER_TO_UINT(value);
>> + if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
>> + get_field(env->mcountinhibit, BIT(ctr_idx))) {
>> + return -1;
>> + }
>> +
>> + if (riscv_cpu_mxl(env) == MXL_RV32) {
>> + ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
>> + } else {
>> + ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
>> + }
>> +
>> + return ret;
>> +}
>>
>> bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
>> uint32_t target_ctr)
>> {
>> - return (target_ctr == 0) ? true : false;
>> + RISCVCPU *cpu;
>> + uint32_t event_idx;
>> + uint32_t ctr_idx;
>> +
>> + /* Fixed instret counter */
>> + if (target_ctr == 2) {
>> + return true;
>> + }
>> +
>> + cpu = RISCV_CPU(env_cpu(env));
>> + event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
>> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
>> + GUINT_TO_POINTER(event_idx)));
>> + if (!ctr_idx) {
>> + return false;
>> + }
>> +
>> + return target_ctr == ctr_idx ? true : false;
>> }
>>
>> bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
>> {
>> - return (target_ctr == 2) ? true : false;
>> + RISCVCPU *cpu;
>> + uint32_t event_idx;
>> + uint32_t ctr_idx;
>> +
>> + /* Fixed mcycle counter */
>> + if (target_ctr == 0) {
>> + return true;
>> + }
>> +
>> + cpu = RISCV_CPU(env_cpu(env));
>> + event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
>> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
>> + GUINT_TO_POINTER(event_idx)));
>> +
>> + /* Counter zero is not used for event_ctr_map */
>> + if (!ctr_idx) {
>> + return false;
>> + }
>> +
>> + return (target_ctr == ctr_idx) ? true : false;
>> +}
>> +
>> +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
>> + gpointer udata)
>> +{
>> + return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
>> +}
>> +
>> +static int64_t pmu_icount_ticks_to_ns(int64_t value)
>> +{
>> + int64_t ret = 0;
>> +
>> + if (icount_enabled()) {
>> + ret = icount_to_ns(value);
>> + } else {
>> + ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
>> + uint32_t ctr_idx)
>> +{
>> + uint32_t event_idx;
>> + RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
>> +
>> + if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
>> + return -1;
>> + }
>> +
>> + /**
>> + * Expected mhpmevent value is zero for reset case. Remove the current
>> + * mapping.
>> + */
>> + if (!value) {
>> + g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
>> + pmu_remove_event_map,
>> + GUINT_TO_POINTER(ctr_idx));
>> + return 0;
>> + }
>> +
>> + event_idx = value & MHPMEVENT_IDX_MASK;
>> + if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
>> + GUINT_TO_POINTER(event_idx))) {
>> + return 0;
>> + }
>> +
>> + switch (event_idx) {
>> + case RISCV_PMU_EVENT_HW_CPU_CYCLES:
>> + case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
>> + case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
>> + case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
>> + case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
>> + break;
>> + default:
>> + /* We don't support any raw events right now */
>> + return -1;
>> + }
>> + g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
>> + GUINT_TO_POINTER(ctr_idx));
>> +
>> + return 0;
>> +}
>> +
>> +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
>> + enum riscv_pmu_event_idx evt_idx)
>> +{
>> + uint32_t ctr_idx;
>> + CPURISCVState *env = &cpu->env;
>> + PMUCTRState *counter;
>> + target_ulong *mhpmevent_val;
>> + uint64_t of_bit_mask;
>> + int64_t irq_trigger_at;
>> +
>> + if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
>> + evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
>> + return;
>> + }
>> +
>> + ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
>> + GUINT_TO_POINTER(evt_idx)));
>> + if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
>> + return;
>> + }
>> +
>> + if (riscv_cpu_mxl(env) == MXL_RV32) {
>> + mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
>> + of_bit_mask = MHPMEVENTH_BIT_OF;
>> + } else {
>> + mhpmevent_val = &env->mhpmevent_val[ctr_idx];
>> + of_bit_mask = MHPMEVENT_BIT_OF;
>> + }
>> +
>> + counter = &env->pmu_ctrs[ctr_idx];
>> + if (counter->irq_overflow_left > 0) {
>> + irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
>> + counter->irq_overflow_left;
>> + timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
>> + counter->irq_overflow_left = 0;
>> + return;
>> + }
>> +
>> + if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
>> + /* Generate interrupt only if OF bit is clear */
>> + if (!(*mhpmevent_val & of_bit_mask)) {
>> + *mhpmevent_val |= of_bit_mask;
>> + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
>> + }
>> + }
>> +}
>> +
>> +/* Timer callback for instret and cycle counter overflow */
>> +void riscv_pmu_timer_cb(void *priv)
>> +{
>> + RISCVCPU *cpu = priv;
>> +
>> + /* Timer event was triggered only for these events */
>> + pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
>> + pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
>> +}
>> +
>> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
>> +{
>> + uint64_t overflow_delta, overflow_at;
>> + int64_t overflow_ns, overflow_left = 0;
>> + RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
>> + PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
>> +
>> + if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
>> + return -1;
>> + }
>> +
>> + if (value) {
>> + overflow_delta = UINT64_MAX - value + 1;
>> + } else {
>> + overflow_delta = UINT64_MAX;
>> + }
>> +
>> + /**
>> + * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
>> + * Compute the leftover and save it so that it can be reprogrammed again
>> + * when timer expires.
>> + */
>> + if (overflow_delta > INT64_MAX) {
>> + overflow_left = overflow_delta - INT64_MAX;
>> + }
>> +
>> + if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>> + riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>> + overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
>> + overflow_left = pmu_icount_ticks_to_ns(overflow_left);
>> + } else {
>> + return -1;
>> + }
>> + overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
>> +
>> + if (overflow_at > INT64_MAX) {
>> + overflow_left += overflow_at - INT64_MAX;
>> + counter->irq_overflow_left = overflow_left;
>> + overflow_at = INT64_MAX;
>> + }
>> + timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
>> +
>> + return 0;
>> +}
>> +
>> +
>> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
>> +{
>> + if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
>> + return -1;
>> + }
>> +
>> + cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
>> + if (!cpu->pmu_event_ctr_map) {
>> + /* PMU support can not be enabled */
>> + qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
>> + cpu->cfg.pmu_num = 0;
>> + return -1;
>> + }
>> +
>> + /* Create a bitmask of available programmable counters */
>> + cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
>> +
>> + return 0;
>> }
>> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
>> index 58a5bc3a4089..036653627f78 100644
>> --- a/target/riscv/pmu.h
>> +++ b/target/riscv/pmu.h
>> @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
>> uint32_t target_ctr);
>> bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
>> uint32_t target_ctr);
>> +void riscv_pmu_timer_cb(void *priv);
>> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
>> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
>> + uint32_t ctr_idx);
>> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
>> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
>> + uint32_t ctr_idx);
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable
2022-07-05 0:38 ` Weiwei Li
@ 2022-07-05 7:51 ` Atish Kumar Patra
2022-07-05 8:16 ` Weiwei Li
0 siblings, 1 reply; 34+ messages in thread
From: Atish Kumar Patra @ 2022-07-05 7:51 UTC (permalink / raw)
To: Weiwei Li
Cc: qemu-devel@nongnu.org Developers, Bin Meng, Alistair Francis,
Bin Meng, Palmer Dabbelt, open list:RISC-V, Frank Chang
On Mon, Jul 4, 2022 at 5:38 PM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>
>
> 在 2022/7/4 下午11:26, Weiwei Li 写道:
> >
> > 在 2022/6/21 上午7:15, Atish Patra 写道:
> >> The RISC-V privilege specification provides the flexibility to implement
> >> any number of the 29 programmable counters. However, QEMU implements
> >> all of them.
> >>
> >> Make it configurable through the pmu config parameter, which now
> >> indicates how many programmable counters should be implemented by the CPU.
> >>
> >> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> >> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> >> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> >> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> >> ---
> >> target/riscv/cpu.c | 3 +-
> >> target/riscv/cpu.h | 2 +-
> >> target/riscv/csr.c | 94 ++++++++++++++++++++++++++++++----------------
> >> 3 files changed, 63 insertions(+), 36 deletions(-)
> >>
> >> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> >> index 1b57b3c43980..d12c6dc630ca 100644
> >> --- a/target/riscv/cpu.c
> >> +++ b/target/riscv/cpu.c
> >> @@ -851,7 +851,6 @@ static void riscv_cpu_init(Object *obj)
> >> {
> >> RISCVCPU *cpu = RISCV_CPU(obj);
> >> - cpu->cfg.ext_pmu = true;
> >> cpu->cfg.ext_ifencei = true;
> >> cpu->cfg.ext_icsr = true;
> >> cpu->cfg.mmu = true;
> >> @@ -879,7 +878,7 @@ static Property riscv_cpu_extensions[] = {
> >> DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
> >> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> >> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> >> - DEFINE_PROP_BOOL("pmu", RISCVCPU, cfg.ext_pmu, true),
> >> + DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> >
> > I think it's better to add a check that cfg.pmu_num <= 29.
> >
> OK, I found this check in the following patch.
> >> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> >> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> >> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> >> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> >> index 252c30a55d78..ffee54ea5c27 100644
> >> --- a/target/riscv/cpu.h
> >> +++ b/target/riscv/cpu.h
> >> @@ -397,7 +397,6 @@ struct RISCVCPUConfig {
> >> bool ext_zksed;
> >> bool ext_zksh;
> >> bool ext_zkt;
> >> - bool ext_pmu;
> >> bool ext_ifencei;
> >> bool ext_icsr;
> >> bool ext_svinval;
> >> @@ -421,6 +420,7 @@ struct RISCVCPUConfig {
> >> /* Vendor-specific custom extensions */
> >> bool ext_XVentanaCondOps;
> >> + uint8_t pmu_num;
> >> char *priv_spec;
> >> char *user_spec;
> >> char *bext_spec;
> >> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> >> index 0ca05c77883c..b4a8e15f498f 100644
> >> --- a/target/riscv/csr.c
> >> +++ b/target/riscv/csr.c
> >> @@ -73,9 +73,17 @@ static RISCVException ctr(CPURISCVState *env, int
> >> csrno)
> >> CPUState *cs = env_cpu(env);
> >> RISCVCPU *cpu = RISCV_CPU(cs);
> >> int ctr_index;
> >> + int base_csrno = CSR_HPMCOUNTER3;
> >> + bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
> >> - if (!cpu->cfg.ext_pmu) {
> >> - /* The PMU extension is not enabled */
> >> + if (rv32 && csrno >= CSR_CYCLEH) {
> >> + /* Offset for RV32 hpmcounternh counters */
> >> + base_csrno += 0x80;
> >> + }
> >> + ctr_index = csrno - base_csrno;
> >> +
> >> + if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> >> + /* No counter is enabled in PMU or the counter is out of
> >> range */
> >
> > I seems unnecessary to add '!cpu->cfg.pmu_num ' here, 'ctr_index >=
> > (cpu->cfg.pmu_num)' is true
The check is improved in the following patches as well.
> Typo. I -> It
> >
> > when cpu->cfg.pmu_num is zero if the problem for base_csrno is fixed.
> >
> > Regards,
> >
> > Weiwei Li
> >
> >> return RISCV_EXCP_ILLEGAL_INST;
> >> }
> >> @@ -103,7 +111,7 @@ static RISCVException ctr(CPURISCVState *env,
> >> int csrno)
> >> }
> >> break;
> >> }
> >> - if (riscv_cpu_mxl(env) == MXL_RV32) {
> >> + if (rv32) {
> >> switch (csrno) {
> >> case CSR_CYCLEH:
> >> if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> >> @@ -158,7 +166,7 @@ static RISCVException ctr(CPURISCVState *env, int
> >> csrno)
> >> }
> >> break;
> >> }
> >> - if (riscv_cpu_mxl(env) == MXL_RV32) {
> >> + if (rv32) {
> >> switch (csrno) {
> >> case CSR_CYCLEH:
> >> if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> >> @@ -202,6 +210,26 @@ static RISCVException ctr32(CPURISCVState *env,
> >> int csrno)
> >> }
> >> #if !defined(CONFIG_USER_ONLY)
> >> +static RISCVException mctr(CPURISCVState *env, int csrno)
> >> +{
> >> + CPUState *cs = env_cpu(env);
> >> + RISCVCPU *cpu = RISCV_CPU(cs);
> >> + int ctr_index;
> >> + int base_csrno = CSR_MHPMCOUNTER3;
> >> +
> >> + if ((riscv_cpu_mxl(env) == MXL_RV32) && csrno >= CSR_MCYCLEH) {
> >> + /* Offset for RV32 mhpmcounternh counters */
> >> + base_csrno += 0x80;
> >> + }
> >> + ctr_index = csrno - base_csrno;
> >> + if (!cpu->cfg.pmu_num || ctr_index >= cpu->cfg.pmu_num) {
> >> + /* The PMU is not enabled or counter is out of range*/
> >> + return RISCV_EXCP_ILLEGAL_INST;
> >> + }
> >> +
> >> + return RISCV_EXCP_NONE;
> >> +}
> >> +
> >> static RISCVException any(CPURISCVState *env, int csrno)
> >> {
> >> return RISCV_EXCP_NONE;
> >> @@ -3687,35 +3715,35 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> >> [CSR_HPMCOUNTER30] = { "hpmcounter30", ctr, read_zero },
> >> [CSR_HPMCOUNTER31] = { "hpmcounter31", ctr, read_zero },
> >> - [CSR_MHPMCOUNTER3] = { "mhpmcounter3", any, read_zero },
> >> - [CSR_MHPMCOUNTER4] = { "mhpmcounter4", any, read_zero },
> >> - [CSR_MHPMCOUNTER5] = { "mhpmcounter5", any, read_zero },
> >> - [CSR_MHPMCOUNTER6] = { "mhpmcounter6", any, read_zero },
> >> - [CSR_MHPMCOUNTER7] = { "mhpmcounter7", any, read_zero },
> >> - [CSR_MHPMCOUNTER8] = { "mhpmcounter8", any, read_zero },
> >> - [CSR_MHPMCOUNTER9] = { "mhpmcounter9", any, read_zero },
> >> - [CSR_MHPMCOUNTER10] = { "mhpmcounter10", any, read_zero },
> >> - [CSR_MHPMCOUNTER11] = { "mhpmcounter11", any, read_zero },
> >> - [CSR_MHPMCOUNTER12] = { "mhpmcounter12", any, read_zero },
> >> - [CSR_MHPMCOUNTER13] = { "mhpmcounter13", any, read_zero },
> >> - [CSR_MHPMCOUNTER14] = { "mhpmcounter14", any, read_zero },
> >> - [CSR_MHPMCOUNTER15] = { "mhpmcounter15", any, read_zero },
> >> - [CSR_MHPMCOUNTER16] = { "mhpmcounter16", any, read_zero },
> >> - [CSR_MHPMCOUNTER17] = { "mhpmcounter17", any, read_zero },
> >> - [CSR_MHPMCOUNTER18] = { "mhpmcounter18", any, read_zero },
> >> - [CSR_MHPMCOUNTER19] = { "mhpmcounter19", any, read_zero },
> >> - [CSR_MHPMCOUNTER20] = { "mhpmcounter20", any, read_zero },
> >> - [CSR_MHPMCOUNTER21] = { "mhpmcounter21", any, read_zero },
> >> - [CSR_MHPMCOUNTER22] = { "mhpmcounter22", any, read_zero },
> >> - [CSR_MHPMCOUNTER23] = { "mhpmcounter23", any, read_zero },
> >> - [CSR_MHPMCOUNTER24] = { "mhpmcounter24", any, read_zero },
> >> - [CSR_MHPMCOUNTER25] = { "mhpmcounter25", any, read_zero },
> >> - [CSR_MHPMCOUNTER26] = { "mhpmcounter26", any, read_zero },
> >> - [CSR_MHPMCOUNTER27] = { "mhpmcounter27", any, read_zero },
> >> - [CSR_MHPMCOUNTER28] = { "mhpmcounter28", any, read_zero },
> >> - [CSR_MHPMCOUNTER29] = { "mhpmcounter29", any, read_zero },
> >> - [CSR_MHPMCOUNTER30] = { "mhpmcounter30", any, read_zero },
> >> - [CSR_MHPMCOUNTER31] = { "mhpmcounter31", any, read_zero },
> >> + [CSR_MHPMCOUNTER3] = { "mhpmcounter3", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER4] = { "mhpmcounter4", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER5] = { "mhpmcounter5", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER6] = { "mhpmcounter6", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER7] = { "mhpmcounter7", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER8] = { "mhpmcounter8", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER9] = { "mhpmcounter9", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER10] = { "mhpmcounter10", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER11] = { "mhpmcounter11", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER12] = { "mhpmcounter12", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER13] = { "mhpmcounter13", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER14] = { "mhpmcounter14", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER15] = { "mhpmcounter15", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER16] = { "mhpmcounter16", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER17] = { "mhpmcounter17", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER18] = { "mhpmcounter18", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER19] = { "mhpmcounter19", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER20] = { "mhpmcounter20", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER21] = { "mhpmcounter21", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER22] = { "mhpmcounter22", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER23] = { "mhpmcounter23", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER24] = { "mhpmcounter24", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER25] = { "mhpmcounter25", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER26] = { "mhpmcounter26", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER27] = { "mhpmcounter27", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER28] = { "mhpmcounter28", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER29] = { "mhpmcounter29", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
> >> + [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
> >> [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
> >> [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
> >
>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 09/12] target/riscv: Simplify counter predicate function
2022-07-04 15:19 ` Weiwei Li
@ 2022-07-05 8:00 ` Atish Kumar Patra
2022-07-05 8:41 ` Weiwei Li
0 siblings, 1 reply; 34+ messages in thread
From: Atish Kumar Patra @ 2022-07-05 8:00 UTC (permalink / raw)
To: Weiwei Li
Cc: qemu-devel@nongnu.org Developers, Bin Meng, Alistair Francis,
Bin Meng, Palmer Dabbelt, open list:RISC-V, Frank Chang
On Mon, Jul 4, 2022 at 8:19 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>
>
> 在 2022/6/21 上午7:15, Atish Patra 写道:
>
> All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
> as a unified counter. Thus, the predicate function doesn't need handle each
> case separately.
>
> Simplify the predicate function so that we just handle things differently
> between RV32/RV64 and S/HS mode.
>
> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> Acked-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/csr.c | 112 +++++----------------------------------------
> 1 file changed, 11 insertions(+), 101 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 2664ce265784..9367e2af9b90 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> CPUState *cs = env_cpu(env);
> RISCVCPU *cpu = RISCV_CPU(cs);
> int ctr_index;
> + target_ulong ctr_mask;
> int base_csrno = CSR_CYCLE;
> bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>
> @@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
> base_csrno += 0x80;
> }
> ctr_index = csrno - base_csrno;
> + ctr_mask = BIT(ctr_index);
>
> if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> goto skip_ext_pmu_check;
> }
>
> - if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
> + if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & ctr_mask))) {
> /* No counter is enabled in PMU or the counter is out of range */
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> skip_ext_pmu_check:
>
> - if (env->priv == PRV_S) {
> - switch (csrno) {
> - case CSR_CYCLE:
> - if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_TIME:
> - if (!get_field(env->mcounteren, COUNTEREN_TM)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_INSTRET:
> - if (!get_field(env->mcounteren, COUNTEREN_IR)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - if (!get_field(env->mcounteren, 1 << ctr_index)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - }
> - if (rv32) {
> - switch (csrno) {
> - case CSR_CYCLEH:
> - if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_TIMEH:
> - if (!get_field(env->mcounteren, COUNTEREN_TM)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_INSTRETH:
> - if (!get_field(env->mcounteren, COUNTEREN_IR)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - if (!get_field(env->mcounteren, 1 << ctr_index)) {
> - return RISCV_EXCP_ILLEGAL_INST;
> - }
> - break;
> - }
> - }
> + if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
> + ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
> + return RISCV_EXCP_ILLEGAL_INST;
> }
>
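[Editor's note: the combined S/U-mode check above can be exercised in isolation. The sketch below is a simplified stand-in — BIT(), the privilege constants, and the single-bit field test re-implement just enough of the QEMU helpers to show the behavior — and is not the actual QEMU code:]

```c
#include <stdbool.h>

#define BIT(n) (1UL << (n))

enum { PRV_U = 0, PRV_S = 1 };

/* For a single-bit mask, QEMU's get_field() reduces to testing that bit. */
static bool field_set(unsigned long reg, unsigned long mask)
{
    return (reg & mask) != 0;
}

/* Returns true when the counter access must raise an illegal-instruction
 * exception, mirroring the combined S-mode/U-mode check in the patch. */
static bool ctr_access_illegal(int priv, unsigned long mcounteren,
                               unsigned long scounteren, int ctr_index)
{
    unsigned long ctr_mask = BIT(ctr_index);

    return (priv == PRV_S && !field_set(mcounteren, ctr_mask)) ||
           (priv == PRV_U && !field_set(scounteren, ctr_mask));
}
```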
> Sorry. I didn't realize this simplification existed and sent a similar patch to fix the problems in the
> Xcounteren-related checks, which I found when I studied the patchset for the state enable extension
> two days ago.
>
> I think there are several differences between our understandings; the following are my modifications:
>
> + if (csrno <= CSR_HPMCOUNTER31 && csrno >= CSR_CYCLE) {
> + field = 1 << (csrno - CSR_CYCLE);
> + } else if (riscv_cpu_mxl(env) == MXL_RV32 && csrno <= CSR_HPMCOUNTER31H &&
> + csrno >= CSR_CYCLEH) {
> + field = 1 << (csrno - CSR_CYCLEH);
> + }
> +
> + if (env->priv < PRV_M && !get_field(env->mcounteren, field)) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + if (riscv_cpu_virt_enabled(env) && !get_field(env->hcounteren, field)) {
> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> + }
> +
> + if (riscv_has_ext(env, RVS) && env->priv == PRV_U &&
> + !get_field(env->scounteren, field)) {
> + if (riscv_cpu_virt_enabled(env)) {
> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> + } else {
> + return RISCV_EXCP_ILLEGAL_INST;
> }
> }
>
>
> 1) For any less-privileged mode under M, illegal exception is raised if matching
> bit in mcounteren is zero.
>
As per the priv spec, in the section 3.1.11
"When one of these bits is set, access to the corresponding register
is permitted in the next implemented privilege mode (S-mode if
implemented, otherwise U-mode)."
mcounteren controls the access for U-mode only if the next implemented
mode is U (riscv_has_ext(env, RVS) must be false).
I did not add the additional check as the ctr is defined only for
!CONFIG_USER_ONLY.
> 2) For VS/VU mode('H' extension is supported implicitly), virtual instruction
> exception is raised if matching bit in hcounteren is zero.
>
> 3) scounteren csr only works in U/VU mode when 'S' extension is supported:
Yes. But we don't need an additional check for the 'S' extension, as it
will be done by the predicate function "smode".
> For U mode, an illegal instruction exception is raised if the matching bit in scounteren is zero.
> For VU mode, a virtual instruction exception is raised if the matching bit
> in scounteren is zero.
>
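[Editor's note: the three rules listed above can be collapsed into one decision function. This is only an illustrative sketch of the rules as stated; the struct, field names, and exception tags are made up for the example and are not QEMU definitions:]

```c
#include <stdbool.h>

/* Illustrative exception tags and hart state; not QEMU definitions. */
enum excp { EXCP_NONE, EXCP_ILLEGAL, EXCP_VIRT };

struct hart {
    bool m_mode;   /* executing in M-mode */
    bool virt;     /* V = 1, i.e. VS- or VU-mode */
    bool u_mode;   /* executing in (V)U-mode */
    bool has_s;    /* 'S' extension implemented */
    bool mcen;     /* matching bit set in mcounteren */
    bool hcen;     /* matching bit set in hcounteren */
    bool scen;     /* matching bit set in scounteren */
};

static enum excp ctr_check(const struct hart *h)
{
    /* Rule 1: below M-mode, mcounteren gates everything (illegal). */
    if (!h->m_mode && !h->mcen) {
        return EXCP_ILLEGAL;
    }
    /* Rule 2: in VS/VU-mode, hcounteren gates access (virtual fault). */
    if (h->virt && !h->hcen) {
        return EXCP_VIRT;
    }
    /* Rule 3: with 'S' implemented, scounteren gates U/VU-mode. */
    if (h->has_s && h->u_mode && !h->scen) {
        return h->virt ? EXCP_VIRT : EXCP_ILLEGAL;
    }
    return EXCP_NONE;
}
```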
> Regards,
> Weiwei Li
>
>
> if (riscv_cpu_virt_enabled(env)) {
> - switch (csrno) {
> - case CSR_CYCLE:
> - if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> - get_field(env->mcounteren, COUNTEREN_CY)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_TIME:
> - if (!get_field(env->hcounteren, COUNTEREN_TM) &&
> - get_field(env->mcounteren, COUNTEREN_TM)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_INSTRET:
> - if (!get_field(env->hcounteren, COUNTEREN_IR) &&
> - get_field(env->mcounteren, COUNTEREN_IR)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> - if (!get_field(env->hcounteren, 1 << ctr_index) &&
> - get_field(env->mcounteren, 1 << ctr_index)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - }
> - if (rv32) {
> - switch (csrno) {
> - case CSR_CYCLEH:
> - if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> - get_field(env->mcounteren, COUNTEREN_CY)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_TIMEH:
> - if (!get_field(env->hcounteren, COUNTEREN_TM) &&
> - get_field(env->mcounteren, COUNTEREN_TM)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_INSTRETH:
> - if (!get_field(env->hcounteren, COUNTEREN_IR) &&
> - get_field(env->mcounteren, COUNTEREN_IR)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> - if (!get_field(env->hcounteren, 1 << ctr_index) &&
> - get_field(env->mcounteren, 1 << ctr_index)) {
> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> - }
> - break;
> - }
> + if (!get_field(env->mcounteren, ctr_mask)) {
> + /* The bit must be set in mcounteren for HS-mode access */
> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> + } else if (!get_field(env->hcounteren, ctr_mask)) {
> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> }
> }
> #endif
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable
2022-07-05 7:51 ` Atish Kumar Patra
@ 2022-07-05 8:16 ` Weiwei Li
2022-07-26 22:19 ` Atish Patra
0 siblings, 1 reply; 34+ messages in thread
From: Weiwei Li @ 2022-07-05 8:16 UTC (permalink / raw)
To: Atish Kumar Patra, Weiwei Li
Cc: qemu-devel@nongnu.org Developers, Bin Meng, Alistair Francis,
Bin Meng, Palmer Dabbelt, open list:RISC-V, Frank Chang
On 2022/7/5 3:51 PM, Atish Kumar Patra wrote:
> On Mon, Jul 4, 2022 at 5:38 PM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>>
>> On 2022/7/4 11:26 PM, Weiwei Li wrote:
>>> On 2022/6/21 7:15 AM, Atish Patra wrote:
>>>> The RISC-V privilege specification provides flexibility to implement
>>>> any number of the 29 programmable counters. However, QEMU
>>>> implements all of them.
>>>>
>>>> Make it configurable through the pmu config parameter, which now
>>>> indicates how many programmable counters the CPU should implement.
>>>>
>>>> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
>>>> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>>>> Signed-off-by: Atish Patra <atish.patra@wdc.com>
>>>> Signed-off-by: Atish Patra <atishp@rivosinc.com>
>>>> ---
>>>> target/riscv/cpu.c | 3 +-
>>>> target/riscv/cpu.h | 2 +-
>>>> target/riscv/csr.c | 94 ++++++++++++++++++++++++++++++----------------
>>>> 3 files changed, 63 insertions(+), 36 deletions(-)
>>>>
>>>> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
>>>> index 1b57b3c43980..d12c6dc630ca 100644
>>>> --- a/target/riscv/cpu.c
>>>> +++ b/target/riscv/cpu.c
>>>> @@ -851,7 +851,6 @@ static void riscv_cpu_init(Object *obj)
>>>> {
>>>> RISCVCPU *cpu = RISCV_CPU(obj);
>>>> - cpu->cfg.ext_pmu = true;
>>>> cpu->cfg.ext_ifencei = true;
>>>> cpu->cfg.ext_icsr = true;
>>>> cpu->cfg.mmu = true;
>>>> @@ -879,7 +878,7 @@ static Property riscv_cpu_extensions[] = {
>>>> DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
>>>> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
>>>> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
>>>> - DEFINE_PROP_BOOL("pmu", RISCVCPU, cfg.ext_pmu, true),
>>>> + DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
>>> I think it's better to add a check that cfg.pmu_num is <= 29.
>>>
>> OK, I found this check in the following patch.
>>>> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
>>>> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
>>>> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
>>>> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
>>>> index 252c30a55d78..ffee54ea5c27 100644
>>>> --- a/target/riscv/cpu.h
>>>> +++ b/target/riscv/cpu.h
>>>> @@ -397,7 +397,6 @@ struct RISCVCPUConfig {
>>>> bool ext_zksed;
>>>> bool ext_zksh;
>>>> bool ext_zkt;
>>>> - bool ext_pmu;
>>>> bool ext_ifencei;
>>>> bool ext_icsr;
>>>> bool ext_svinval;
>>>> @@ -421,6 +420,7 @@ struct RISCVCPUConfig {
>>>> /* Vendor-specific custom extensions */
>>>> bool ext_XVentanaCondOps;
>>>> + uint8_t pmu_num;
>>>> char *priv_spec;
>>>> char *user_spec;
>>>> char *bext_spec;
>>>> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
>>>> index 0ca05c77883c..b4a8e15f498f 100644
>>>> --- a/target/riscv/csr.c
>>>> +++ b/target/riscv/csr.c
>>>> @@ -73,9 +73,17 @@ static RISCVException ctr(CPURISCVState *env, int
>>>> csrno)
>>>> CPUState *cs = env_cpu(env);
>>>> RISCVCPU *cpu = RISCV_CPU(cs);
>>>> int ctr_index;
>>>> + int base_csrno = CSR_HPMCOUNTER3;
>>>> + bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>>>> - if (!cpu->cfg.ext_pmu) {
>>>> - /* The PMU extension is not enabled */
>>>> + if (rv32 && csrno >= CSR_CYCLEH) {
>>>> + /* Offset for RV32 hpmcounternh counters */
>>>> + base_csrno += 0x80;
>>>> + }
>>>> + ctr_index = csrno - base_csrno;
>>>> +
>>>> + if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
>>>> + /* No counter is enabled in PMU or the counter is out of
>>>> range */
>>> I seems unnecessary to add '!cpu->cfg.pmu_num ' here, 'ctr_index >=
>>> (cpu->cfg.pmu_num)' is true
> The check is improved in the following patches as well.
>
Do you mean 'if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs &
ctr_mask)))' in patch 9?
In this condition, '!cpu->cfg.pmu_num' seems unnecessary too.
Regards,
Weiwei Li
>> Typo. I -> It
>>> when cpu->cfg.pmu_num is zero if the problem for base_csrno is fixed.
>>>
>>> Ragards,
>>>
>>> Weiwei Li
>>>
>>>> return RISCV_EXCP_ILLEGAL_INST;
>>>> }
>>>> @@ -103,7 +111,7 @@ static RISCVException ctr(CPURISCVState *env,
>>>> int csrno)
>>>> }
>>>> break;
>>>> }
>>>> - if (riscv_cpu_mxl(env) == MXL_RV32) {
>>>> + if (rv32) {
>>>> switch (csrno) {
>>>> case CSR_CYCLEH:
>>>> if (!get_field(env->mcounteren, COUNTEREN_CY)) {
>>>> @@ -158,7 +166,7 @@ static RISCVException ctr(CPURISCVState *env, int
>>>> csrno)
>>>> }
>>>> break;
>>>> }
>>>> - if (riscv_cpu_mxl(env) == MXL_RV32) {
>>>> + if (rv32) {
>>>> switch (csrno) {
>>>> case CSR_CYCLEH:
>>>> if (!get_field(env->hcounteren, COUNTEREN_CY) &&
>>>> @@ -202,6 +210,26 @@ static RISCVException ctr32(CPURISCVState *env,
>>>> int csrno)
>>>> }
>>>> #if !defined(CONFIG_USER_ONLY)
>>>> +static RISCVException mctr(CPURISCVState *env, int csrno)
>>>> +{
>>>> + CPUState *cs = env_cpu(env);
>>>> + RISCVCPU *cpu = RISCV_CPU(cs);
>>>> + int ctr_index;
>>>> + int base_csrno = CSR_MHPMCOUNTER3;
>>>> +
>>>> + if ((riscv_cpu_mxl(env) == MXL_RV32) && csrno >= CSR_MCYCLEH) {
>>>> + /* Offset for RV32 mhpmcounternh counters */
>>>> + base_csrno += 0x80;
>>>> + }
>>>> + ctr_index = csrno - base_csrno;
>>>> + if (!cpu->cfg.pmu_num || ctr_index >= cpu->cfg.pmu_num) {
>>>> + /* The PMU is not enabled or the counter is out of range */
>>>> + return RISCV_EXCP_ILLEGAL_INST;
>>>> + }
>>>> +
>>>> + return RISCV_EXCP_NONE;
>>>> +}
>>>> +
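[Editor's note: the ctr_index computation in mctr() above can be checked standalone against the architectural CSR numbers (mcycle = 0xB00, mhpmcounter3 = 0xB03, and the RV32 high halves at +0x80). A minimal sketch, with the CSR constants written out as assumptions:]

```c
#include <stdbool.h>

/* Architectural CSR numbers, written out as assumptions for the sketch. */
#define CSR_MCYCLEH      0xB80
#define CSR_MHPMCOUNTER3 0xB03

/* Mirrors the index computation in mctr(): mhpmcounter3 maps to index 0. */
static int mctr_index(int csrno, bool rv32)
{
    int base_csrno = CSR_MHPMCOUNTER3;

    if (rv32 && csrno >= CSR_MCYCLEH) {
        base_csrno += 0x80;  /* offset for the RV32 mhpmcounternh aliases */
    }
    return csrno - base_csrno;
}
```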
>>>> static RISCVException any(CPURISCVState *env, int csrno)
>>>> {
>>>> return RISCV_EXCP_NONE;
>>>> @@ -3687,35 +3715,35 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>>>> [CSR_HPMCOUNTER30] = { "hpmcounter30", ctr, read_zero },
>>>> [CSR_HPMCOUNTER31] = { "hpmcounter31", ctr, read_zero },
>>>> - [CSR_MHPMCOUNTER3] = { "mhpmcounter3", any, read_zero },
>>>> - [CSR_MHPMCOUNTER4] = { "mhpmcounter4", any, read_zero },
>>>> - [CSR_MHPMCOUNTER5] = { "mhpmcounter5", any, read_zero },
>>>> - [CSR_MHPMCOUNTER6] = { "mhpmcounter6", any, read_zero },
>>>> - [CSR_MHPMCOUNTER7] = { "mhpmcounter7", any, read_zero },
>>>> - [CSR_MHPMCOUNTER8] = { "mhpmcounter8", any, read_zero },
>>>> - [CSR_MHPMCOUNTER9] = { "mhpmcounter9", any, read_zero },
>>>> - [CSR_MHPMCOUNTER10] = { "mhpmcounter10", any, read_zero },
>>>> - [CSR_MHPMCOUNTER11] = { "mhpmcounter11", any, read_zero },
>>>> - [CSR_MHPMCOUNTER12] = { "mhpmcounter12", any, read_zero },
>>>> - [CSR_MHPMCOUNTER13] = { "mhpmcounter13", any, read_zero },
>>>> - [CSR_MHPMCOUNTER14] = { "mhpmcounter14", any, read_zero },
>>>> - [CSR_MHPMCOUNTER15] = { "mhpmcounter15", any, read_zero },
>>>> - [CSR_MHPMCOUNTER16] = { "mhpmcounter16", any, read_zero },
>>>> - [CSR_MHPMCOUNTER17] = { "mhpmcounter17", any, read_zero },
>>>> - [CSR_MHPMCOUNTER18] = { "mhpmcounter18", any, read_zero },
>>>> - [CSR_MHPMCOUNTER19] = { "mhpmcounter19", any, read_zero },
>>>> - [CSR_MHPMCOUNTER20] = { "mhpmcounter20", any, read_zero },
>>>> - [CSR_MHPMCOUNTER21] = { "mhpmcounter21", any, read_zero },
>>>> - [CSR_MHPMCOUNTER22] = { "mhpmcounter22", any, read_zero },
>>>> - [CSR_MHPMCOUNTER23] = { "mhpmcounter23", any, read_zero },
>>>> - [CSR_MHPMCOUNTER24] = { "mhpmcounter24", any, read_zero },
>>>> - [CSR_MHPMCOUNTER25] = { "mhpmcounter25", any, read_zero },
>>>> - [CSR_MHPMCOUNTER26] = { "mhpmcounter26", any, read_zero },
>>>> - [CSR_MHPMCOUNTER27] = { "mhpmcounter27", any, read_zero },
>>>> - [CSR_MHPMCOUNTER28] = { "mhpmcounter28", any, read_zero },
>>>> - [CSR_MHPMCOUNTER29] = { "mhpmcounter29", any, read_zero },
>>>> - [CSR_MHPMCOUNTER30] = { "mhpmcounter30", any, read_zero },
>>>> - [CSR_MHPMCOUNTER31] = { "mhpmcounter31", any, read_zero },
>>>> + [CSR_MHPMCOUNTER3] = { "mhpmcounter3", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER4] = { "mhpmcounter4", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER5] = { "mhpmcounter5", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER6] = { "mhpmcounter6", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER7] = { "mhpmcounter7", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER8] = { "mhpmcounter8", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER9] = { "mhpmcounter9", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER10] = { "mhpmcounter10", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER11] = { "mhpmcounter11", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER12] = { "mhpmcounter12", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER13] = { "mhpmcounter13", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER14] = { "mhpmcounter14", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER15] = { "mhpmcounter15", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER16] = { "mhpmcounter16", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER17] = { "mhpmcounter17", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER18] = { "mhpmcounter18", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER19] = { "mhpmcounter19", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER20] = { "mhpmcounter20", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER21] = { "mhpmcounter21", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER22] = { "mhpmcounter22", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER23] = { "mhpmcounter23", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER24] = { "mhpmcounter24", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER25] = { "mhpmcounter25", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER26] = { "mhpmcounter26", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER27] = { "mhpmcounter27", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER28] = { "mhpmcounter28", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER29] = { "mhpmcounter29", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
>>>> + [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
>>>> [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
>>>> [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 09/12] target/riscv: Simplify counter predicate function
2022-07-05 8:00 ` Atish Kumar Patra
@ 2022-07-05 8:41 ` Weiwei Li
0 siblings, 0 replies; 34+ messages in thread
From: Weiwei Li @ 2022-07-05 8:41 UTC (permalink / raw)
To: Atish Kumar Patra
Cc: qemu-devel@nongnu.org Developers, Bin Meng, Alistair Francis,
Bin Meng, Palmer Dabbelt, open list:RISC-V, Frank Chang
On 2022/7/5 4:00 PM, Atish Kumar Patra wrote:
> On Mon, Jul 4, 2022 at 8:19 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>>
>> On 2022/6/21 7:15 AM, Atish Patra wrote:
>>
>> All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
>> as a unified counter. Thus, the predicate function doesn't need to handle each
>> case separately.
>>
>> Simplify the predicate function so that we just handle things differently
>> between RV32/RV64 and S/HS mode.
>>
>> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
>> Acked-by: Alistair Francis <alistair.francis@wdc.com>
>> Signed-off-by: Atish Patra <atishp@rivosinc.com>
>> ---
>> target/riscv/csr.c | 112 +++++----------------------------------------
>> 1 file changed, 11 insertions(+), 101 deletions(-)
>>
>> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
>> index 2664ce265784..9367e2af9b90 100644
>> --- a/target/riscv/csr.c
>> +++ b/target/riscv/csr.c
>> @@ -74,6 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> CPUState *cs = env_cpu(env);
>> RISCVCPU *cpu = RISCV_CPU(cs);
>> int ctr_index;
>> + target_ulong ctr_mask;
>> int base_csrno = CSR_CYCLE;
>> bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>>
>> @@ -82,122 +83,31 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>> base_csrno += 0x80;
>> }
>> ctr_index = csrno - base_csrno;
>> + ctr_mask = BIT(ctr_index);
>>
>> if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
>> (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
>> goto skip_ext_pmu_check;
>> }
>>
>> - if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
>> + if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & ctr_mask))) {
>> /* No counter is enabled in PMU or the counter is out of range */
>> return RISCV_EXCP_ILLEGAL_INST;
>> }
>>
>> skip_ext_pmu_check:
>>
>> - if (env->priv == PRV_S) {
>> - switch (csrno) {
>> - case CSR_CYCLE:
>> - if (!get_field(env->mcounteren, COUNTEREN_CY)) {
>> - return RISCV_EXCP_ILLEGAL_INST;
>> - }
>> - break;
>> - case CSR_TIME:
>> - if (!get_field(env->mcounteren, COUNTEREN_TM)) {
>> - return RISCV_EXCP_ILLEGAL_INST;
>> - }
>> - break;
>> - case CSR_INSTRET:
>> - if (!get_field(env->mcounteren, COUNTEREN_IR)) {
>> - return RISCV_EXCP_ILLEGAL_INST;
>> - }
>> - break;
>> - case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
>> - if (!get_field(env->mcounteren, 1 << ctr_index)) {
>> - return RISCV_EXCP_ILLEGAL_INST;
>> - }
>> - break;
>> - }
>> - if (rv32) {
>> - switch (csrno) {
>> - case CSR_CYCLEH:
>> - if (!get_field(env->mcounteren, COUNTEREN_CY)) {
>> - return RISCV_EXCP_ILLEGAL_INST;
>> - }
>> - break;
>> - case CSR_TIMEH:
>> - if (!get_field(env->mcounteren, COUNTEREN_TM)) {
>> - return RISCV_EXCP_ILLEGAL_INST;
>> - }
>> - break;
>> - case CSR_INSTRETH:
>> - if (!get_field(env->mcounteren, COUNTEREN_IR)) {
>> - return RISCV_EXCP_ILLEGAL_INST;
>> - }
>> - break;
>> - case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
>> - if (!get_field(env->mcounteren, 1 << ctr_index)) {
>> - return RISCV_EXCP_ILLEGAL_INST;
>> - }
>> - break;
>> - }
>> - }
>> + if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
>> + ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
>> + return RISCV_EXCP_ILLEGAL_INST;
>> }
>>
>> Sorry. I didn't realize this simplification existed and sent a similar patch to fix the problems in the
>> Xcounteren-related checks, which I found when I studied the patchset for the state enable extension
>> two days ago.
>>
>> I think there are several differences between our understandings; the following are my modifications:
>>
>> + if (csrno <= CSR_HPMCOUNTER31 && csrno >= CSR_CYCLE) {
>> + field = 1 << (csrno - CSR_CYCLE);
>> + } else if (riscv_cpu_mxl(env) == MXL_RV32 && csrno <= CSR_HPMCOUNTER31H &&
>> + csrno >= CSR_CYCLEH) {
>> + field = 1 << (csrno - CSR_CYCLEH);
>> + }
>> +
>> + if (env->priv < PRV_M && !get_field(env->mcounteren, field)) {
>> + return RISCV_EXCP_ILLEGAL_INST;
>> + }
>> +
>> + if (riscv_cpu_virt_enabled(env) && !get_field(env->hcounteren, field)) {
>> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> + }
>> +
>> + if (riscv_has_ext(env, RVS) && env->priv == PRV_U &&
>> + !get_field(env->scounteren, field)) {
>> + if (riscv_cpu_virt_enabled(env)) {
>> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> + } else {
>> + return RISCV_EXCP_ILLEGAL_INST;
>> }
>> }
>>
>>
>> 1) For any less-privileged mode under M, illegal exception is raised if matching
>> bit in mcounteren is zero.
>>
> As per the priv spec, in the section 3.1.11
> "When one of these bits is set, access to the corresponding register
> is permitted in the next implemented privilege mode (S-mode if
> implemented, otherwise U-mode)."
>
> mcounteren controls the access for U-mode only if the next implemented
> mode is U (riscv_has_ext(env, RVS) must be false).
> I did not add the additional check as the ctr is defined only for
> !CONFIG_USER_ONLY.
Section 3.1.11 also has this description:
"In systems with U-mode, the mcounteren must be implemented, but all
fields are WARL and may be read-only zero, indicating reads to the
corresponding counter will cause an illegal instruction exception when
executing in a less-privileged mode."
And !CONFIG_USER_ONLY is defined for QEMU system emulation; it doesn't
mean the current privilege level cannot be PRV_U mode.
>
>> 2) For VS/VU mode('H' extension is supported implicitly), virtual instruction
>> exception is raised if matching bit in hcounteren is zero.
>>
>> 3) scounteren csr only works in U/VU mode when 'S' extension is supported:
> Yes. But we don't need an additional check for the 'S' extension, as it
> will be done by the predicate function "smode".
That is the question: "smode" can only guard reads/writes of scounteren itself.
If the 'S' extension is not implemented, scounteren will be zero, and if the
check is done as follows:
+ if (((env->priv == PRV_S) && (!get_field(env->mcounteren, ctr_mask))) ||
+ ((env->priv == PRV_U) && (!get_field(env->scounteren, ctr_mask)))) {
+ return RISCV_EXCP_ILLEGAL_INST;
}
then any access from PRV_U will trigger an illegal instruction exception. Per the
spec quoted above, such an access is controlled by mcounteren and should be legal
if the matching bit in mcounteren is 1.
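[Editor's note: to make the disagreement concrete — on a hart without the 'S' extension (so scounteren reads as zero), the two formulations diverge exactly when U-mode access is permitted by mcounteren. A toy sketch; all names are illustrative, not QEMU code:]

```c
#include <stdbool.h>

/* v10 formulation: in U-mode, only the scounteren bit is consulted.
 * (mcen_bit is unused here; that is exactly the point being made.) */
static bool v10_u_mode_illegal(bool mcen_bit, bool scen_bit)
{
    (void)mcen_bit;
    return !scen_bit;
}

/* Alternative reading of the spec: without 'S', mcounteren alone gates
 * U-mode; scounteren only matters when 'S' is implemented. */
static bool alt_u_mode_illegal(bool has_s, bool mcen_bit, bool scen_bit)
{
    if (!mcen_bit) {
        return true;
    }
    return has_s && !scen_bit;
}
```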
Regards,
Weiwei Li
>
>> For U mode, an illegal instruction exception is raised if the matching bit in scounteren is zero.
>> For VU mode, a virtual instruction exception is raised if the matching bit
>> in scounteren is zero.
>>
>> Regards,
>> Weiwei Li
>>
>>
>> if (riscv_cpu_virt_enabled(env)) {
>> - switch (csrno) {
>> - case CSR_CYCLE:
>> - if (!get_field(env->hcounteren, COUNTEREN_CY) &&
>> - get_field(env->mcounteren, COUNTEREN_CY)) {
>> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> - }
>> - break;
>> - case CSR_TIME:
>> - if (!get_field(env->hcounteren, COUNTEREN_TM) &&
>> - get_field(env->mcounteren, COUNTEREN_TM)) {
>> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> - }
>> - break;
>> - case CSR_INSTRET:
>> - if (!get_field(env->hcounteren, COUNTEREN_IR) &&
>> - get_field(env->mcounteren, COUNTEREN_IR)) {
>> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> - }
>> - break;
>> - case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
>> - if (!get_field(env->hcounteren, 1 << ctr_index) &&
>> - get_field(env->mcounteren, 1 << ctr_index)) {
>> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> - }
>> - break;
>> - }
>> - if (rv32) {
>> - switch (csrno) {
>> - case CSR_CYCLEH:
>> - if (!get_field(env->hcounteren, COUNTEREN_CY) &&
>> - get_field(env->mcounteren, COUNTEREN_CY)) {
>> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> - }
>> - break;
>> - case CSR_TIMEH:
>> - if (!get_field(env->hcounteren, COUNTEREN_TM) &&
>> - get_field(env->mcounteren, COUNTEREN_TM)) {
>> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> - }
>> - break;
>> - case CSR_INSTRETH:
>> - if (!get_field(env->hcounteren, COUNTEREN_IR) &&
>> - get_field(env->mcounteren, COUNTEREN_IR)) {
>> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> - }
>> - break;
>> - case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
>> - if (!get_field(env->hcounteren, 1 << ctr_index) &&
>> - get_field(env->mcounteren, 1 << ctr_index)) {
>> - return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> - }
>> - break;
>> - }
>> + if (!get_field(env->mcounteren, ctr_mask)) {
>> + /* The bit must be set in mcounteren for HS-mode access */
>> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> + } else if (!get_field(env->hcounteren, ctr_mask)) {
>> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
>> }
>> }
>> #endif
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 08/12] target/riscv: Add sscofpmf extension support
2022-06-20 23:15 ` [PATCH v10 08/12] target/riscv: Add sscofpmf extension support Atish Patra
2022-07-05 0:31 ` Weiwei Li
2022-07-05 1:30 ` Weiwei Li
@ 2022-07-14 9:53 ` Heiko Stübner
2022-07-18 1:23 ` Alistair Francis
2 siblings, 1 reply; 34+ messages in thread
From: Heiko Stübner @ 2022-07-14 9:53 UTC (permalink / raw)
To: qemu-devel
Cc: Atish Patra, Alistair Francis, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang, Atish Patra
On Tuesday, 21 June 2022, 01:15:58 CEST, Atish Patra wrote:
> The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> extension allows perf-like tools to handle overflow interrupts and
> filtering. This patch provides a framework for programmable
> counters to leverage the extension. As the extension doesn't have any
> provision for an overflow bit for the fixed counters, the fixed events
> can also be monitored using programmable counters. The underlying
> counters for cycle and instruction counters are always running. Thus,
> a separate timer device is programmed to handle the overflow.
>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
it looks like patches 1-7 already landed in QEMU, though I didn't
see any "applied" messages, so it took me a bit to realize that :-).
In any case, I ran Atish's sample from the cover letter with a matching
kernel and got similar results as shown in the cover letter.
Tested-by: Heiko Stuebner <heiko@sntech.de>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 09/12] target/riscv: Simplify counter predicate function
2022-06-20 23:15 ` [PATCH v10 09/12] target/riscv: Simplify counter predicate function Atish Patra
2022-07-04 15:19 ` Weiwei Li
@ 2022-07-14 9:54 ` Heiko Stübner
1 sibling, 0 replies; 34+ messages in thread
From: Heiko Stübner @ 2022-07-14 9:54 UTC (permalink / raw)
To: qemu-devel
Cc: Bin Meng, Alistair Francis, Atish Patra, Bin Meng,
Palmer Dabbelt, qemu-riscv, frank.chang, Atish Patra
On Tuesday, 21 June 2022, 01:15:59 CEST, Atish Patra wrote:
> All the hpmcounters and the fixed counters (CY, IR, TM) can be represented
> as a unified counter. Thus, the predicate function doesn't need to handle each
> case separately.
>
> Simplify the predicate function so that we just handle things differently
> between RV32/RV64 and S/HS mode.
>
> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> Acked-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 10/12] target/riscv: Add few cache related PMU events
2022-06-20 23:16 ` [PATCH v10 10/12] target/riscv: Add few cache related PMU events Atish Patra
@ 2022-07-14 9:55 ` Heiko Stübner
0 siblings, 0 replies; 34+ messages in thread
From: Heiko Stübner @ 2022-07-14 9:55 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Atish Patra, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang, Atish Patra
On Tuesday, 21 June 2022, 01:16:00 CEST, Atish Patra wrote:
> From: Atish Patra <atish.patra@wdc.com>
>
> QEMU can monitor the following cache-related PMU events through the
> tlb_fill functions.
>
> 1. DTLB load/store miss
> 2. ITLB prefetch miss
>
> Increment the PMU counter in tlb_fill function.
>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 11/12] hw/riscv: virt: Add PMU DT node to the device tree
2022-06-20 23:16 ` [PATCH v10 11/12] hw/riscv: virt: Add PMU DT node to the device tree Atish Patra
@ 2022-07-14 10:27 ` Heiko Stübner
2022-07-26 21:51 ` Atish Patra
0 siblings, 1 reply; 34+ messages in thread
From: Heiko Stübner @ 2022-07-14 10:27 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Atish Patra, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang, Atish Patra
Hi Atish,
On Tuesday, 21 June 2022, 01:16:01 CEST, Atish Patra wrote:
> The QEMU virt machine can support a few cache events and cycle/instret counters.
> It also supports counter overflow for these events.
>
> Add a DT node so that OpenSBI/Linux kernel is aware of the virt machine
> capabilities. There are some dummy nodes added for testing as well.
>
> Acked-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> +static void create_fdt_socket_pmu(RISCVVirtState *s,
> + int socket, uint32_t *phandle,
> + uint32_t *intc_phandles)
> +{
> + int cpu;
> + char *pmu_name;
> + uint32_t *pmu_cells;
> + MachineState *mc = MACHINE(s);
> + RISCVCPU hart = s->soc[socket].harts[0];
> +
> + pmu_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
> +
> + for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
> + pmu_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
> + pmu_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_PMU_OVF);
> + }
> +
> + pmu_name = g_strdup_printf("/soc/pmu");
> + qemu_fdt_add_subnode(mc->fdt, pmu_name);
> + qemu_fdt_setprop_string(mc->fdt, pmu_name, "compatible", "riscv,pmu");
Where is the binding document for this?
As the comment below states, the "riscv,event-to-mhpmcounters" property
is OpenSBI-specific and gets removed in the OpenSBI stage, but that still
leaves the pmu node itself, and from the version I found, Rob wasn't overly
happy with the compatible [0]. Did this get addressed?
Thanks
Heiko
[0] https://lore.kernel.org/all/YXhPqfpXh1VZN07T@robh.at.kernel.org/
> + riscv_pmu_generate_fdt_node(mc->fdt, hart.cfg.pmu_num, pmu_name);
> +
> + g_free(pmu_name);
> + g_free(pmu_cells);
> +}
> +
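[Editor's note: for orientation, the pmu_cells array built above is a flat list of (interrupt-parent phandle, IRQ number) pairs in big-endian byte order. A stand-alone sketch — htonl() stands in for QEMU's cpu_to_be32(), and the IRQ_PMU_OVF value of 13 is an assumption based on the Sscofpmf LCOFI interrupt number:]

```c
#include <stdint.h>
#include <arpa/inet.h>  /* htonl() stands in for QEMU's cpu_to_be32() */

#define IRQ_PMU_OVF 13  /* assumed: LCOFI, the Sscofpmf overflow interrupt */

/* Fill one (phandle, irq) pair per hart, as create_fdt_socket_pmu() does. */
static void fill_pmu_cells(uint32_t *cells, const uint32_t *intc_phandles,
                           int num_harts)
{
    for (int cpu = 0; cpu < num_harts; cpu++) {
        cells[cpu * 2 + 0] = htonl(intc_phandles[cpu]);
        cells[cpu * 2 + 1] = htonl(IRQ_PMU_OVF);
    }
}
```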
> static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
> bool is_32_bit, uint32_t *phandle,
> uint32_t *irq_mmio_phandle,
> @@ -759,6 +786,7 @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
> &intc_phandles[phandle_pos]);
> }
> }
> + create_fdt_socket_pmu(s, socket, phandle, intc_phandles);
> }
>
> if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 7d9e2aca12a9..69bbd9fff4e1 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -1110,6 +1110,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
> ISA_EDATA_ENTRY(zve64f, ext_zve64f),
> ISA_EDATA_ENTRY(zhinx, ext_zhinx),
> ISA_EDATA_ENTRY(zhinxmin, ext_zhinxmin),
> + ISA_EDATA_ENTRY(sscofpmf, ext_sscofpmf),
> ISA_EDATA_ENTRY(svinval, ext_svinval),
> ISA_EDATA_ENTRY(svnapot, ext_svnapot),
> ISA_EDATA_ENTRY(svpbmt, ext_svpbmt),
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 34096941c0ce..59feb3c243dd 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -20,11 +20,68 @@
> #include "cpu.h"
> #include "pmu.h"
> #include "sysemu/cpu-timers.h"
> +#include "sysemu/device_tree.h"
>
> #define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> #define MAKE_32BIT_MASK(shift, length) \
> (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
>
> +/**
> + * To keep it simple, any event can be mapped to any programmable counter in
> + * QEMU. The generic cycle & instruction count events can also be monitored
> + * using programmable counters. In that case, mcycle & minstret must continue
> + * to provide the correct value as well. A heterogeneous PMU per hart is not
> + * supported yet. Thus, the number of counters is the same across all harts.
> + */
> +void riscv_pmu_generate_fdt_node(void *fdt, int num_ctrs, char *pmu_name)
> +{
> + uint32_t fdt_event_ctr_map[20] = {};
> + uint32_t cmask;
> +
> + /* All the programmable counters can map to any event */
> + cmask = MAKE_32BIT_MASK(3, num_ctrs);
> +
> + /**
> + * The event encoding is specified in the SBI specification
> + * Event idx is a 20bits wide number encoded as follows:
> + * event_idx[19:16] = type
> + * event_idx[15:0] = code
> + * The code field in cache events are encoded as follows:
> + * event_idx.code[15:3] = cache_id
> + * event_idx.code[2:1] = op_id
> + * event_idx.code[0:0] = result_id
> + */
> +
> + /* SBI_PMU_HW_CPU_CYCLES: 0x01 : type(0x00) */
> + fdt_event_ctr_map[0] = cpu_to_be32(0x00000001);
> + fdt_event_ctr_map[1] = cpu_to_be32(0x00000001);
> + fdt_event_ctr_map[2] = cpu_to_be32(cmask | 1 << 0);
> +
> + /* SBI_PMU_HW_INSTRUCTIONS: 0x02 : type(0x00) */
> + fdt_event_ctr_map[3] = cpu_to_be32(0x00000002);
> + fdt_event_ctr_map[4] = cpu_to_be32(0x00000002);
> + fdt_event_ctr_map[5] = cpu_to_be32(cmask | 1 << 2);
> +
> + /* SBI_PMU_HW_CACHE_DTLB : 0x03 READ : 0x00 MISS : 0x00 type(0x01) */
> + fdt_event_ctr_map[6] = cpu_to_be32(0x00010019);
> + fdt_event_ctr_map[7] = cpu_to_be32(0x00010019);
> + fdt_event_ctr_map[8] = cpu_to_be32(cmask);
> +
> + /* SBI_PMU_HW_CACHE_DTLB : 0x03 WRITE : 0x01 MISS : 0x00 type(0x01) */
> + fdt_event_ctr_map[9] = cpu_to_be32(0x0001001B);
> + fdt_event_ctr_map[10] = cpu_to_be32(0x0001001B);
> + fdt_event_ctr_map[11] = cpu_to_be32(cmask);
> +
> + /* SBI_PMU_HW_CACHE_ITLB : 0x04 READ : 0x00 MISS : 0x00 type(0x01) */
> + fdt_event_ctr_map[12] = cpu_to_be32(0x00010021);
> + fdt_event_ctr_map[13] = cpu_to_be32(0x00010021);
> + fdt_event_ctr_map[14] = cpu_to_be32(cmask);
> +
> + /* This is an OpenSBI-specific DT property documented in the OpenSBI docs */
> + qemu_fdt_setprop(fdt, pmu_name, "riscv,event-to-mhpmcounters",
> + fdt_event_ctr_map, sizeof(fdt_event_ctr_map));
> +}
> +
> static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> {
> if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 036653627f78..3004ce37b636 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -31,5 +31,6 @@ int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> uint32_t ctr_idx);
> int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> +void riscv_pmu_generate_fdt_node(void *fdt, int num_counters, char *pmu_name);
> int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> uint32_t ctr_idx);
>
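The event-index encoding described in the comment above can be sanity-checked with a short sketch. This is purely illustrative (the function name is ours, not QEMU's); the field layout follows the SBI PMU extension. Note that decoding 0x19 yields result_id = 1, the SBI "miss" result code.

```python
def decode_sbi_pmu_event(event_idx):
    """Split a 20-bit SBI PMU event index into its fields."""
    etype = (event_idx >> 16) & 0xF       # event_idx[19:16] = type
    code = event_idx & 0xFFFF             # event_idx[15:0]  = code
    if etype == 0x01:                     # hardware cache event
        cache_id = code >> 3              # code[15:3]
        op_id = (code >> 1) & 0x3         # code[2:1]
        result_id = code & 0x1            # code[0]
        return (etype, cache_id, op_id, result_id)
    return (etype, code)

# 0x00010019 is the DTLB read-miss entry from the table above:
# type=1 (cache), cache_id=3 (DTLB), op_id=0 (read), result_id=1 (miss)
print(decode_sbi_pmu_event(0x00010019))  # → (1, 3, 0, 1)
```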
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 12/12] target/riscv: Update the privilege field for sscofpmf CSRs
2022-06-20 23:16 ` [PATCH v10 12/12] target/riscv: Update the privilege field for sscofpmf CSRs Atish Patra
@ 2022-07-14 10:29 ` Heiko Stübner
0 siblings, 0 replies; 34+ messages in thread
From: Heiko Stübner @ 2022-07-14 10:29 UTC (permalink / raw)
To: qemu-devel
Cc: Alistair Francis, Atish Patra, Bin Meng, Palmer Dabbelt,
qemu-riscv, frank.chang, Atish Patra
Am Dienstag, 21. Juni 2022, 01:16:02 CEST schrieb Atish Patra:
> The sscofpmf extension was ratified as a part of priv spec v1.12.
> Mark the csr_ops accordingly.
>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Heiko Stuebner <heiko@sntech.de>
> ---
> target/riscv/csr.c | 90 ++++++++++++++++++++++++++++++----------------
> 1 file changed, 60 insertions(+), 30 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 9367e2af9b90..dabd531e0355 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -4002,63 +4002,92 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> write_mhpmevent },
>
> [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
> - write_mhpmeventh},
> + write_mhpmeventh,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
>
> [CSR_HPMCOUNTER3H] = { "hpmcounter3h", ctr32, read_hpmcounterh },
> [CSR_HPMCOUNTER4H] = { "hpmcounter4h", ctr32, read_hpmcounterh },
> @@ -4148,7 +4177,8 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> write_mhpmcounterh },
> [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32, read_hpmcounterh,
> write_mhpmcounterh },
> - [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf },
> + [CSR_SCOUNTOVF] = { "scountovf", sscofpmf, read_scountovf,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
>
> #endif /* !CONFIG_USER_ONLY */
> };
>
* Re: [PATCH v10 08/12] target/riscv: Add sscofpmf extension support
2022-07-14 9:53 ` Heiko Stübner
@ 2022-07-18 1:23 ` Alistair Francis
0 siblings, 0 replies; 34+ messages in thread
From: Alistair Francis @ 2022-07-18 1:23 UTC (permalink / raw)
To: Heiko Stübner
Cc: qemu-devel@nongnu.org Developers, Atish Patra, Alistair Francis,
Bin Meng, Palmer Dabbelt, open list:RISC-V, Frank Chang
On Thu, Jul 14, 2022 at 7:54 PM Heiko Stübner <heiko@sntech.de> wrote:
>
> Am Dienstag, 21. Juni 2022, 01:15:58 CEST schrieb Atish Patra:
> > The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> > and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> > extension allows perf to handle overflow interrupts and adds filtering
> > support. This patch provides a framework for programmable
> > counters to leverage the extension. As the extension doesn't have any
> > provision for the overflow bit for fixed counters, the fixed events
> > can also be monitored using programmable counters. The underlying
> > counters for cycle and instruction counters are always running. Thus,
> > a separate timer device is programmed to handle the overflow.
> >
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
>
> it looks like patches 1-7 already landed in Qemu though I didn't
> see any "applied" messages, so it took me a bit to realize that :-) .
Argh, sorry! I must have forgotten to mention that.
Alistair
>
>
> In any case, I ran Atish's sample from the cover-letter with a matching
> kernel and got similar results as shown in the cover-letter.
>
> Tested-by: Heiko Stuebner <heiko@sntech.de>
>
>
>
* Re: [PATCH v10 11/12] hw/riscv: virt: Add PMU DT node to the device tree
2022-07-14 10:27 ` Heiko Stübner
@ 2022-07-26 21:51 ` Atish Patra
0 siblings, 0 replies; 34+ messages in thread
From: Atish Patra @ 2022-07-26 21:51 UTC (permalink / raw)
To: Heiko Stübner
Cc: qemu-devel@nongnu.org Developers, Alistair Francis, Atish Patra,
Bin Meng, Palmer Dabbelt, open list:RISC-V, Frank Chang
On Thu, Jul 14, 2022 at 3:27 AM Heiko Stübner <heiko@sntech.de> wrote:
>
> Hi Atish,
>
> Am Dienstag, 21. Juni 2022, 01:16:01 CEST schrieb Atish Patra:
> > The QEMU virt machine can support a few cache events and cycle/instret counters.
> > It also supports counter overflow for these events.
> >
> > Add a DT node so that OpenSBI and the Linux kernel are aware of the virt machine
> > capabilities. There are some dummy nodes added for testing as well.
> >
> > Acked-by: Alistair Francis <alistair.francis@wdc.com>
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
>
> > +static void create_fdt_socket_pmu(RISCVVirtState *s,
> > + int socket, uint32_t *phandle,
> > + uint32_t *intc_phandles)
> > +{
> > + int cpu;
> > + char *pmu_name;
> > + uint32_t *pmu_cells;
> > + MachineState *mc = MACHINE(s);
> > + RISCVCPU hart = s->soc[socket].harts[0];
> > +
> > + pmu_cells = g_new0(uint32_t, s->soc[socket].num_harts * 2);
> > +
> > + for (cpu = 0; cpu < s->soc[socket].num_harts; cpu++) {
> > + pmu_cells[cpu * 2 + 0] = cpu_to_be32(intc_phandles[cpu]);
> > + pmu_cells[cpu * 2 + 1] = cpu_to_be32(IRQ_PMU_OVF);
> > + }
> > +
> > + pmu_name = g_strdup_printf("/soc/pmu");
> > + qemu_fdt_add_subnode(mc->fdt, pmu_name);
> > + qemu_fdt_setprop_string(mc->fdt, pmu_name, "compatible", "riscv,pmu");
>
> Where is the binding document for this?
>
> As the comment below states the "riscv,event-to-mhpmcounters" property
> is opensbi-specific and gets removed in the opensbi stage, but that still
> leaves the pmu node in it and from the version I found, Rob wasn't overly
> happy with the compatible [0]. Did this get addressed?
>
This is an OpenSBI-specific binding:
https://github.com/riscv-software-src/opensbi/blob/master/docs/pmu_support.md
The Linux kernel doesn't use the binding anymore. The earlier versions of the
patches relied on the DT binding; however, based on the feedback, it was
removed. OpenSBI should delete the node, and deleting the interrupts-extended
property is necessary at this point.
>
> Thanks
> Heiko
>
>
> [0] https://lore.kernel.org/all/YXhPqfpXh1VZN07T@robh.at.kernel.org/
>
>
>
> > + riscv_pmu_generate_fdt_node(mc->fdt, hart.cfg.pmu_num, pmu_name);
> > +
> > + g_free(pmu_name);
> > + g_free(pmu_cells);
> > +}
> > +
> > static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
> > bool is_32_bit, uint32_t *phandle,
> > uint32_t *irq_mmio_phandle,
> > @@ -759,6 +786,7 @@ static void create_fdt_sockets(RISCVVirtState *s, const MemMapEntry *memmap,
> > &intc_phandles[phandle_pos]);
> > }
> > }
> > + create_fdt_socket_pmu(s, socket, phandle, intc_phandles);
> > }
> >
> > if (s->aia_type == VIRT_AIA_TYPE_APLIC_IMSIC) {
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index 7d9e2aca12a9..69bbd9fff4e1 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -1110,6 +1110,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
> > ISA_EDATA_ENTRY(zve64f, ext_zve64f),
> > ISA_EDATA_ENTRY(zhinx, ext_zhinx),
> > ISA_EDATA_ENTRY(zhinxmin, ext_zhinxmin),
> > + ISA_EDATA_ENTRY(sscofpmf, ext_sscofpmf),
> > ISA_EDATA_ENTRY(svinval, ext_svinval),
> > ISA_EDATA_ENTRY(svnapot, ext_svnapot),
> > ISA_EDATA_ENTRY(svpbmt, ext_svpbmt),
> > diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> > index 34096941c0ce..59feb3c243dd 100644
> > --- a/target/riscv/pmu.c
> > +++ b/target/riscv/pmu.c
> > @@ -20,11 +20,68 @@
> > #include "cpu.h"
> > #include "pmu.h"
> > #include "sysemu/cpu-timers.h"
> > +#include "sysemu/device_tree.h"
> >
> > #define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> > #define MAKE_32BIT_MASK(shift, length) \
> > (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> >
> > +/**
> > + * To keep it simple, any event can be mapped to any programmable counter in
> > + * QEMU. The generic cycle & instruction count events can also be monitored
> > + * using programmable counters. In that case, mcycle & minstret must continue
> > + * to provide the correct value as well. Heterogeneous PMUs per hart are not
> > + * supported yet. Thus, the number of counters is the same across all harts.
> > + */
> > +void riscv_pmu_generate_fdt_node(void *fdt, int num_ctrs, char *pmu_name)
> > +{
> > + uint32_t fdt_event_ctr_map[20] = {};
> > + uint32_t cmask;
> > +
> > + /* All the programmable counters can map to any event */
> > + cmask = MAKE_32BIT_MASK(3, num_ctrs);
> > +
> > + /**
> > + * The event encoding is specified in the SBI specification.
> > + * An event idx is a 20-bit wide number encoded as follows:
> > + * event_idx[19:16] = type
> > + * event_idx[15:0] = code
> > + * The code field of cache events is encoded as follows:
> > + * event_idx.code[15:3] = cache_id
> > + * event_idx.code[2:1] = op_id
> > + * event_idx.code[0:0] = result_id
> > + */
> > +
> > + /* SBI_PMU_HW_CPU_CYCLES: 0x01 : type(0x00) */
> > + fdt_event_ctr_map[0] = cpu_to_be32(0x00000001);
> > + fdt_event_ctr_map[1] = cpu_to_be32(0x00000001);
> > + fdt_event_ctr_map[2] = cpu_to_be32(cmask | 1 << 0);
> > +
> > + /* SBI_PMU_HW_INSTRUCTIONS: 0x02 : type(0x00) */
> > + fdt_event_ctr_map[3] = cpu_to_be32(0x00000002);
> > + fdt_event_ctr_map[4] = cpu_to_be32(0x00000002);
> > + fdt_event_ctr_map[5] = cpu_to_be32(cmask | 1 << 2);
> > +
> > + /* SBI_PMU_HW_CACHE_DTLB : 0x03 READ : 0x00 MISS : 0x00 type(0x01) */
> > + fdt_event_ctr_map[6] = cpu_to_be32(0x00010019);
> > + fdt_event_ctr_map[7] = cpu_to_be32(0x00010019);
> > + fdt_event_ctr_map[8] = cpu_to_be32(cmask);
> > +
> > + /* SBI_PMU_HW_CACHE_DTLB : 0x03 WRITE : 0x01 MISS : 0x00 type(0x01) */
> > + fdt_event_ctr_map[9] = cpu_to_be32(0x0001001B);
> > + fdt_event_ctr_map[10] = cpu_to_be32(0x0001001B);
> > + fdt_event_ctr_map[11] = cpu_to_be32(cmask);
> > +
> > + /* SBI_PMU_HW_CACHE_ITLB : 0x04 READ : 0x00 MISS : 0x00 type(0x01) */
> > + fdt_event_ctr_map[12] = cpu_to_be32(0x00010021);
> > + fdt_event_ctr_map[13] = cpu_to_be32(0x00010021);
> > + fdt_event_ctr_map[14] = cpu_to_be32(cmask);
> > +
> > + /* This is an OpenSBI-specific DT property documented in the OpenSBI docs */
> > + qemu_fdt_setprop(fdt, pmu_name, "riscv,event-to-mhpmcounters",
> > + fdt_event_ctr_map, sizeof(fdt_event_ctr_map));
> > +}
> > +
> > static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> > {
> > if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> > diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> > index 036653627f78..3004ce37b636 100644
> > --- a/target/riscv/pmu.h
> > +++ b/target/riscv/pmu.h
> > @@ -31,5 +31,6 @@ int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> > int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> > uint32_t ctr_idx);
> > int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> > +void riscv_pmu_generate_fdt_node(void *fdt, int num_counters, char *pmu_name);
> > int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> > uint32_t ctr_idx);
> >
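The counter mask written into each event triple of the riscv,event-to-mhpmcounters property can be checked with a Python transliteration of the MAKE_32BIT_MASK macro quoted above (a sketch, not QEMU code; each property entry is a triple of event_start, event_end, and this counter mask):

```python
def make_32bit_mask(shift, length):
    """Python equivalent of QEMU's MAKE_32BIT_MASK(shift, length) macro."""
    return ((0xFFFFFFFF >> (32 - length)) << shift) & 0xFFFFFFFF

# With the default of 16 programmable counters, the mask covers
# mhpmcounter3..mhpmcounter18, i.e. bits 3..18:
cmask = make_32bit_mask(3, 16)
print(hex(cmask))  # → 0x7fff8
```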
>
>
>
>
>
--
Regards,
Atish
* Re: [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable
2022-07-05 8:16 ` Weiwei Li
@ 2022-07-26 22:19 ` Atish Patra
0 siblings, 0 replies; 34+ messages in thread
From: Atish Patra @ 2022-07-26 22:19 UTC (permalink / raw)
To: Weiwei Li
Cc: Atish Kumar Patra, qemu-devel@nongnu.org Developers, Bin Meng,
Alistair Francis, Bin Meng, Palmer Dabbelt, open list:RISC-V,
Frank Chang
On Tue, Jul 5, 2022 at 1:20 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
>
>
> 在 2022/7/5 下午3:51, Atish Kumar Patra 写道:
> > On Mon, Jul 4, 2022 at 5:38 PM Weiwei Li <liweiwei@iscas.ac.cn> wrote:
> >>
> >> 在 2022/7/4 下午11:26, Weiwei Li 写道:
> >>> 在 2022/6/21 上午7:15, Atish Patra 写道:
> >>>> The RISC-V privilege specification provides the flexibility to implement
> >>>> any number of the 29 programmable counters. However, QEMU
> >>>> implements all the counters.
> >>>>
> >>>> Make it configurable through the pmu config parameter, which will now
> >>>> indicate how many programmable counters should be implemented by the CPU.
> >>>>
> >>>> Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
> >>>> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> >>>> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> >>>> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> >>>> ---
> >>>> target/riscv/cpu.c | 3 +-
> >>>> target/riscv/cpu.h | 2 +-
> >>>> target/riscv/csr.c | 94 ++++++++++++++++++++++++++++++----------------
> >>>> 3 files changed, 63 insertions(+), 36 deletions(-)
> >>>>
> >>>> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> >>>> index 1b57b3c43980..d12c6dc630ca 100644
> >>>> --- a/target/riscv/cpu.c
> >>>> +++ b/target/riscv/cpu.c
> >>>> @@ -851,7 +851,6 @@ static void riscv_cpu_init(Object *obj)
> >>>> {
> >>>> RISCVCPU *cpu = RISCV_CPU(obj);
> >>>> - cpu->cfg.ext_pmu = true;
> >>>> cpu->cfg.ext_ifencei = true;
> >>>> cpu->cfg.ext_icsr = true;
> >>>> cpu->cfg.mmu = true;
> >>>> @@ -879,7 +878,7 @@ static Property riscv_cpu_extensions[] = {
> >>>> DEFINE_PROP_BOOL("u", RISCVCPU, cfg.ext_u, true),
> >>>> DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> >>>> DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> >>>> - DEFINE_PROP_BOOL("pmu", RISCVCPU, cfg.ext_pmu, true),
> >>>> + DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> >>> I think it's better to add a check that cfg.pmu_num is <= 29.
> >>>
> >> OK, I find this check in the following patch.
> >>>> DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> >>>> DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> >>>> DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> >>>> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> >>>> index 252c30a55d78..ffee54ea5c27 100644
> >>>> --- a/target/riscv/cpu.h
> >>>> +++ b/target/riscv/cpu.h
> >>>> @@ -397,7 +397,6 @@ struct RISCVCPUConfig {
> >>>> bool ext_zksed;
> >>>> bool ext_zksh;
> >>>> bool ext_zkt;
> >>>> - bool ext_pmu;
> >>>> bool ext_ifencei;
> >>>> bool ext_icsr;
> >>>> bool ext_svinval;
> >>>> @@ -421,6 +420,7 @@ struct RISCVCPUConfig {
> >>>> /* Vendor-specific custom extensions */
> >>>> bool ext_XVentanaCondOps;
> >>>> + uint8_t pmu_num;
> >>>> char *priv_spec;
> >>>> char *user_spec;
> >>>> char *bext_spec;
> >>>> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> >>>> index 0ca05c77883c..b4a8e15f498f 100644
> >>>> --- a/target/riscv/csr.c
> >>>> +++ b/target/riscv/csr.c
> >>>> @@ -73,9 +73,17 @@ static RISCVException ctr(CPURISCVState *env, int
> >>>> csrno)
> >>>> CPUState *cs = env_cpu(env);
> >>>> RISCVCPU *cpu = RISCV_CPU(cs);
> >>>> int ctr_index;
> >>>> + int base_csrno = CSR_HPMCOUNTER3;
> >>>> + bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
> >>>> - if (!cpu->cfg.ext_pmu) {
> >>>> - /* The PMU extension is not enabled */
> >>>> + if (rv32 && csrno >= CSR_CYCLEH) {
> >>>> + /* Offset for RV32 hpmcounternh counters */
> >>>> + base_csrno += 0x80;
> >>>> + }
> >>>> + ctr_index = csrno - base_csrno;
> >>>> +
> >>>> + if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> >>>> + /* No counter is enabled in PMU or the counter is out of
> >>>> range */
> >>> I seems unnecessary to add '!cpu->cfg.pmu_num ' here, 'ctr_index >=
> >>> (cpu->cfg.pmu_num)' is true
> > The check is improved in the following patches as well.
> >
> Do you mean 'if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs &
> ctr_mask)))' in patch 9 ?
>
> In this condition, '!cpu->cfg.pmu_num' seems unnecessary too.
>
Yes. I will remove it.
> Regards,
>
> Weiwei Li
>
> >> Typo. I -> It
> >>> when cpu->cfg.pmu_num is zero if the problem for base_csrno is fixed.
> >>>
> >>> Regards,
> >>>
> >>> Weiwei Li
> >>>
> >>>> return RISCV_EXCP_ILLEGAL_INST;
> >>>> }
> >>>> @@ -103,7 +111,7 @@ static RISCVException ctr(CPURISCVState *env,
> >>>> int csrno)
> >>>> }
> >>>> break;
> >>>> }
> >>>> - if (riscv_cpu_mxl(env) == MXL_RV32) {
> >>>> + if (rv32) {
> >>>> switch (csrno) {
> >>>> case CSR_CYCLEH:
> >>>> if (!get_field(env->mcounteren, COUNTEREN_CY)) {
> >>>> @@ -158,7 +166,7 @@ static RISCVException ctr(CPURISCVState *env, int
> >>>> csrno)
> >>>> }
> >>>> break;
> >>>> }
> >>>> - if (riscv_cpu_mxl(env) == MXL_RV32) {
> >>>> + if (rv32) {
> >>>> switch (csrno) {
> >>>> case CSR_CYCLEH:
> >>>> if (!get_field(env->hcounteren, COUNTEREN_CY) &&
> >>>> @@ -202,6 +210,26 @@ static RISCVException ctr32(CPURISCVState *env,
> >>>> int csrno)
> >>>> }
> >>>> #if !defined(CONFIG_USER_ONLY)
> >>>> +static RISCVException mctr(CPURISCVState *env, int csrno)
> >>>> +{
> >>>> + CPUState *cs = env_cpu(env);
> >>>> + RISCVCPU *cpu = RISCV_CPU(cs);
> >>>> + int ctr_index;
> >>>> + int base_csrno = CSR_MHPMCOUNTER3;
> >>>> +
> >>>> + if ((riscv_cpu_mxl(env) == MXL_RV32) && csrno >= CSR_MCYCLEH) {
> >>>> + /* Offset for RV32 mhpmcounternh counters */
> >>>> + base_csrno += 0x80;
> >>>> + }
> >>>> + ctr_index = csrno - base_csrno;
> >>>> + if (!cpu->cfg.pmu_num || ctr_index >= cpu->cfg.pmu_num) {
> >>>> + /* The PMU is not enabled or the counter is out of range */
> >>>> + return RISCV_EXCP_ILLEGAL_INST;
> >>>> + }
> >>>> +
> >>>> + return RISCV_EXCP_NONE;
> >>>> +}
> >>>> +
> >>>> static RISCVException any(CPURISCVState *env, int csrno)
> >>>> {
> >>>> return RISCV_EXCP_NONE;
> >>>> @@ -3687,35 +3715,35 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> >>>> [CSR_HPMCOUNTER30] = { "hpmcounter30", ctr, read_zero },
> >>>> [CSR_HPMCOUNTER31] = { "hpmcounter31", ctr, read_zero },
> >>>> - [CSR_MHPMCOUNTER3] = { "mhpmcounter3", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER4] = { "mhpmcounter4", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER5] = { "mhpmcounter5", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER6] = { "mhpmcounter6", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER7] = { "mhpmcounter7", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER8] = { "mhpmcounter8", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER9] = { "mhpmcounter9", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER10] = { "mhpmcounter10", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER11] = { "mhpmcounter11", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER12] = { "mhpmcounter12", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER13] = { "mhpmcounter13", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER14] = { "mhpmcounter14", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER15] = { "mhpmcounter15", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER16] = { "mhpmcounter16", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER17] = { "mhpmcounter17", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER18] = { "mhpmcounter18", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER19] = { "mhpmcounter19", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER20] = { "mhpmcounter20", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER21] = { "mhpmcounter21", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER22] = { "mhpmcounter22", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER23] = { "mhpmcounter23", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER24] = { "mhpmcounter24", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER25] = { "mhpmcounter25", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER26] = { "mhpmcounter26", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER27] = { "mhpmcounter27", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER28] = { "mhpmcounter28", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER29] = { "mhpmcounter29", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER30] = { "mhpmcounter30", any, read_zero },
> >>>> - [CSR_MHPMCOUNTER31] = { "mhpmcounter31", any, read_zero },
> >>>> + [CSR_MHPMCOUNTER3] = { "mhpmcounter3", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER4] = { "mhpmcounter4", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER5] = { "mhpmcounter5", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER6] = { "mhpmcounter6", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER7] = { "mhpmcounter7", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER8] = { "mhpmcounter8", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER9] = { "mhpmcounter9", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER10] = { "mhpmcounter10", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER11] = { "mhpmcounter11", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER12] = { "mhpmcounter12", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER13] = { "mhpmcounter13", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER14] = { "mhpmcounter14", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER15] = { "mhpmcounter15", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER16] = { "mhpmcounter16", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER17] = { "mhpmcounter17", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER18] = { "mhpmcounter18", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER19] = { "mhpmcounter19", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER20] = { "mhpmcounter20", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER21] = { "mhpmcounter21", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER22] = { "mhpmcounter22", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER23] = { "mhpmcounter23", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER24] = { "mhpmcounter24", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER25] = { "mhpmcounter25", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER26] = { "mhpmcounter26", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER27] = { "mhpmcounter27", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER28] = { "mhpmcounter28", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER29] = { "mhpmcounter29", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER30] = { "mhpmcounter30", mctr, read_zero },
> >>>> + [CSR_MHPMCOUNTER31] = { "mhpmcounter31", mctr, read_zero },
> >>>> [CSR_MHPMEVENT3] = { "mhpmevent3", any, read_zero },
> >>>> [CSR_MHPMEVENT4] = { "mhpmevent4", any, read_zero },
>
>
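The base_csrno arithmetic in the ctr() predicate discussed above can be illustrated with a small sketch using the standard CSR numbers from the privileged spec (hpmcounter3 = 0xC03, cycleh = 0xC80); the helper name is ours:

```python
CSR_HPMCOUNTER3 = 0xC03   # first programmable user counter CSR
CSR_CYCLEH = 0xC80        # first RV32 high-half user counter CSR

def hpm_ctr_index(csrno, rv32):
    """Mirror the ctr() predicate's counter-index computation."""
    base = CSR_HPMCOUNTER3
    if rv32 and csrno >= CSR_CYCLEH:
        base += 0x80      # hpmcounterNh CSRs sit 0x80 above hpmcounterN
    return csrno - base

# hpmcounter17 (0xC11) is programmable-counter index 14 on RV64;
# hpmcounter17h (0xC91) maps to the same index on RV32.
print(hpm_ctr_index(0xC11, rv32=False), hpm_ctr_index(0xC91, rv32=True))
```

An out-of-range index (>= cfg.pmu_num) is what makes the predicate raise RISCV_EXCP_ILLEGAL_INST.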
--
Regards,
Atish
end of thread, other threads:[~2022-07-26 22:22 UTC | newest]
Thread overview: 34+ messages
-- links below jump to the message on this page --
2022-06-20 23:15 [PATCH v10 00/12] Improve PMU support Atish Patra
2022-06-20 23:15 ` [PATCH v10 01/12] target/riscv: Fix PMU CSR predicate function Atish Patra
2022-06-20 23:15 ` [PATCH v10 02/12] target/riscv: Implement PMU CSR predicate function for S-mode Atish Patra
2022-06-20 23:15 ` [PATCH v10 03/12] target/riscv: pmu: Rename the counters extension to pmu Atish Patra
2022-06-20 23:15 ` [PATCH v10 04/12] target/riscv: pmu: Make number of counters configurable Atish Patra
2022-07-04 15:26 ` Weiwei Li
2022-07-05 0:38 ` Weiwei Li
2022-07-05 7:51 ` Atish Kumar Patra
2022-07-05 8:16 ` Weiwei Li
2022-07-26 22:19 ` Atish Patra
2022-06-20 23:15 ` [PATCH v10 05/12] target/riscv: Implement mcountinhibit CSR Atish Patra
2022-07-04 15:31 ` Weiwei Li
2022-07-05 7:47 ` Atish Kumar Patra
2022-06-20 23:15 ` [PATCH v10 06/12] target/riscv: Add support for hpmcounters/hpmevents Atish Patra
2022-06-20 23:15 ` [PATCH v10 07/12] target/riscv: Support mcycle/minstret write operation Atish Patra
2022-06-20 23:15 ` [PATCH v10 08/12] target/riscv: Add sscofpmf extension support Atish Patra
2022-07-05 0:31 ` Weiwei Li
2022-07-05 1:30 ` Weiwei Li
2022-07-05 7:36 ` Atish Kumar Patra
2022-07-05 7:48 ` Weiwei Li
2022-07-14 9:53 ` Heiko Stübner
2022-07-18 1:23 ` Alistair Francis
2022-06-20 23:15 ` [PATCH v10 09/12] target/riscv: Simplify counter predicate function Atish Patra
2022-07-04 15:19 ` Weiwei Li
2022-07-05 8:00 ` Atish Kumar Patra
2022-07-05 8:41 ` Weiwei Li
2022-07-14 9:54 ` Heiko Stübner
2022-06-20 23:16 ` [PATCH v10 10/12] target/riscv: Add few cache related PMU events Atish Patra
2022-07-14 9:55 ` Heiko Stübner
2022-06-20 23:16 ` [PATCH v10 11/12] hw/riscv: virt: Add PMU DT node to the device tree Atish Patra
2022-07-14 10:27 ` Heiko Stübner
2022-07-26 21:51 ` Atish Patra
2022-06-20 23:16 ` [PATCH v10 12/12] target/riscv: Update the privilege field for sscofpmf CSRs Atish Patra
2022-07-14 10:29 ` Heiko Stübner