* [PATCH v3 0/4] QEMU RISC-V nested virtualization fixes
@ 2022-05-26 10:05 Anup Patel
2022-05-26 10:05 ` [PATCH v3 1/4] target/riscv: Don't force update priv spec version to latest Anup Patel
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Anup Patel @ 2022-05-26 10:05 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel
This series provides fixes and improvements needed for nested
virtualization on QEMU RISC-V.
These patches can also be found in riscv_nested_fixes_v3 branch at:
https://github.com/avpatel/qemu.git
The RISC-V nested virtualization was tested on QEMU RISC-V using
Xvisor RISC-V, which has the hypervisor support required to run
another hypervisor as a Guest/VM.
Changes since v2:
- Dropped the patches which are already in Alistair's next branch
- Set "Addr. Offset" in the transformed instruction for PATCH3
- Print warning in riscv_cpu_realize() if we are disabling an
extension due to privilege spec version mismatch for PATCH4
Changes since v1:
- Set write_gva to env->two_stage_lookup, which ensures that for
HS-mode to HS-mode traps, write_gva is true only for HLV/HSV
instructions
- Included "[PATCH 0/3] QEMU RISC-V priv spec version fixes"
patches in this series for easy review
- Re-worked PATCH7 to force disable extensions if the required
priv spec version is not satisfied
- Added new PATCH8 to fix "aia=aplic-imsic" mode of virt machine
Anup Patel (4):
target/riscv: Don't force update priv spec version to latest
target/riscv: Add dummy mcountinhibit CSR for priv spec v1.11 or
higher
target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt()
target/riscv: Force disable extensions if priv spec version does not
match
target/riscv/cpu.c | 64 ++++++++++--
target/riscv/cpu_bits.h | 3 +
target/riscv/cpu_helper.c | 210 +++++++++++++++++++++++++++++++++++++-
target/riscv/csr.c | 2 +
target/riscv/instmap.h | 43 ++++++++
5 files changed, 310 insertions(+), 12 deletions(-)
--
2.34.1
* [PATCH v3 1/4] target/riscv: Don't force update priv spec version to latest
2022-05-26 10:05 [PATCH v3 0/4] QEMU RISC-V nested virtualization fixes Anup Patel
@ 2022-05-26 10:05 ` Anup Patel
2022-05-26 10:05 ` [PATCH v3 2/4] target/riscv: Add dummy mcountinhibit CSR for priv spec v1.11 or higher Anup Patel
` (2 subsequent siblings)
3 siblings, 0 replies; 11+ messages in thread
From: Anup Patel @ 2022-05-26 10:05 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel,
Frank Chang, Alistair Francis, Atish Patra
riscv_cpu_realize() force updates the priv spec version to v1.12
when "env->priv_ver == 0" (i.e. the default v1.10), because the
enum value of priv spec v1.10 is zero.
Due to the above issue, the sifive_u machine sees priv spec v1.12
instead of priv spec v1.10.
To fix this issue, we set the latest priv spec version (i.e. v1.12)
for the base rv64/rv32 CPUs and have riscv_cpu_realize() override
the priv spec version only when "cpu->cfg.priv_spec != NULL".
Fixes: 7100fe6c2441 ("target/riscv: Enable privileged spec version 1.12")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index a91253d4bd..b086eb25da 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -169,6 +169,8 @@ static void rv64_base_cpu_init(Object *obj)
CPURISCVState *env = &RISCV_CPU(obj)->env;
/* We set this in the realise function */
set_misa(env, MXL_RV64, 0);
+ /* Set latest version of privileged specification */
+ set_priv_version(env, PRIV_VERSION_1_12_0);
}
static void rv64_sifive_u_cpu_init(Object *obj)
@@ -204,6 +206,8 @@ static void rv32_base_cpu_init(Object *obj)
CPURISCVState *env = &RISCV_CPU(obj)->env;
/* We set this in the realise function */
set_misa(env, MXL_RV32, 0);
+ /* Set latest version of privileged specification */
+ set_priv_version(env, PRIV_VERSION_1_12_0);
}
static void rv32_sifive_u_cpu_init(Object *obj)
@@ -509,7 +513,7 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
CPURISCVState *env = &cpu->env;
RISCVCPUClass *mcc = RISCV_CPU_GET_CLASS(dev);
CPUClass *cc = CPU_CLASS(mcc);
- int priv_version = 0;
+ int priv_version = -1;
Error *local_err = NULL;
cpu_exec_realizefn(cs, &local_err);
@@ -533,10 +537,8 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
}
}
- if (priv_version) {
+ if (priv_version >= PRIV_VERSION_1_10_0) {
set_priv_version(env, priv_version);
- } else if (!env->priv_ver) {
- set_priv_version(env, PRIV_VERSION_1_12_0);
}
if (cpu->cfg.mmu) {
--
2.34.1
* [PATCH v3 2/4] target/riscv: Add dummy mcountinhibit CSR for priv spec v1.11 or higher
2022-05-26 10:05 [PATCH v3 0/4] QEMU RISC-V nested virtualization fixes Anup Patel
2022-05-26 10:05 ` [PATCH v3 1/4] target/riscv: Don't force update priv spec version to latest Anup Patel
@ 2022-05-26 10:05 ` Anup Patel
2022-05-26 10:05 ` [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt() Anup Patel
2022-05-26 10:05 ` [PATCH v3 4/4] target/riscv: Force disable extensions if priv spec version does not match Anup Patel
3 siblings, 0 replies; 11+ messages in thread
From: Anup Patel @ 2022-05-26 10:05 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel,
Frank Chang, Alistair Francis
The mcountinhibit CSR is mandatory for priv spec v1.11 or higher.
Implementations that don't want to support it can simply provide a
dummy mcountinhibit which always reads zero.
Fixes: a4b2fa433125 ("target/riscv: Introduce privilege version field in
the CSR ops.")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
target/riscv/cpu_bits.h | 3 +++
target/riscv/csr.c | 2 ++
2 files changed, 5 insertions(+)
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index 4d04b20d06..4a55c6a709 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -159,6 +159,9 @@
#define CSR_MTVEC 0x305
#define CSR_MCOUNTEREN 0x306
+/* Machine Counter Setup */
+#define CSR_MCOUNTINHIBIT 0x320
+
/* 32-bit only */
#define CSR_MSTATUSH 0x310
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 6dbe9b541f..409a209f14 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -3391,6 +3391,8 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MIE] = { "mie", any, NULL, NULL, rmw_mie },
[CSR_MTVEC] = { "mtvec", any, read_mtvec, write_mtvec },
[CSR_MCOUNTEREN] = { "mcounteren", any, read_mcounteren, write_mcounteren },
+ [CSR_MCOUNTINHIBIT] = { "mcountinhibit", any, read_zero, write_ignore,
+ .min_priv_ver = PRIV_VERSION_1_11_0 },
[CSR_MSTATUSH] = { "mstatush", any32, read_mstatush, write_mstatush },
--
2.34.1
* [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt()
2022-05-26 10:05 [PATCH v3 0/4] QEMU RISC-V nested virtualization fixes Anup Patel
2022-05-26 10:05 ` [PATCH v3 1/4] target/riscv: Don't force update priv spec version to latest Anup Patel
2022-05-26 10:05 ` [PATCH v3 2/4] target/riscv: Add dummy mcountinhibit CSR for priv spec v1.11 or higher Anup Patel
@ 2022-05-26 10:05 ` Anup Patel
2022-06-06 1:48 ` Alistair Francis
2022-05-26 10:05 ` [PATCH v3 4/4] target/riscv: Force disable extensions if priv spec version does not match Anup Patel
3 siblings, 1 reply; 11+ messages in thread
From: Anup Patel @ 2022-05-26 10:05 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel
We should write the transformed instruction encoding of the trapped
instruction to the [m|h]tinst CSR at the time of taking the trap, as
defined by the RISC-V privileged specification v1.12.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
target/riscv/cpu_helper.c | 210 +++++++++++++++++++++++++++++++++++++-
target/riscv/instmap.h | 43 ++++++++
2 files changed, 249 insertions(+), 4 deletions(-)
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index d99fac9d2d..2a2b6776fb 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -22,6 +22,7 @@
#include "qemu/main-loop.h"
#include "cpu.h"
#include "exec/exec-all.h"
+#include "instmap.h"
#include "tcg/tcg-op.h"
#include "trace.h"
#include "semihosting/common-semi.h"
@@ -1316,6 +1317,200 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
return true;
}
+
+static target_ulong riscv_transformed_insn(CPURISCVState *env,
+ target_ulong insn)
+{
+ bool xinsn_has_addr_offset = false;
+ target_ulong xinsn = 0;
+
+ /*
+ * Only Quadrant 0 and Quadrant 2 of RVC instruction space need to
+ * be uncompressed. The Quadrant 1 of RVC instruction space need
+ * not be transformed because these instructions won't generate
+ * any load/store trap.
+ */
+
+ if ((insn & 0x3) != 0x3) {
+ /* Transform 16bit instruction into 32bit instruction */
+ switch (GET_C_OP(insn)) {
+ case OPC_RISC_C_OP_QUAD0: /* Quadrant 0 */
+ switch (GET_C_FUNC(insn)) {
+ case OPC_RISC_C_FUNC_FLD_LQ:
+ if (riscv_cpu_xlen(env) != 128) { /* C.FLD (RV32/64) */
+ xinsn = OPC_RISC_FLD;
+ xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
+ xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
+ xinsn = SET_I_IMM(xinsn, GET_C_LD_IMM(insn));
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ case OPC_RISC_C_FUNC_LW: /* C.LW */
+ xinsn = OPC_RISC_LW;
+ xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
+ xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
+ xinsn = SET_I_IMM(xinsn, GET_C_LW_IMM(insn));
+ xinsn_has_addr_offset = true;
+ break;
+ case OPC_RISC_C_FUNC_FLW_LD:
+ if (riscv_cpu_xlen(env) == 32) { /* C.FLW (RV32) */
+ xinsn = OPC_RISC_FLW;
+ xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
+ xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
+ xinsn = SET_I_IMM(xinsn, GET_C_LW_IMM(insn));
+ xinsn_has_addr_offset = true;
+ } else { /* C.LD (RV64/RV128) */
+ xinsn = OPC_RISC_LD;
+ xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
+ xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
+ xinsn = SET_I_IMM(xinsn, GET_C_LD_IMM(insn));
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ case OPC_RISC_C_FUNC_FSD_SQ:
+ if (riscv_cpu_xlen(env) != 128) { /* C.FSD (RV32/64) */
+ xinsn = OPC_RISC_FSD;
+ xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
+ xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
+ xinsn = SET_S_IMM(xinsn, GET_C_SD_IMM(insn));
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ case OPC_RISC_C_FUNC_SW: /* C.SW */
+ xinsn = OPC_RISC_SW;
+ xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
+ xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
+ xinsn = SET_S_IMM(xinsn, GET_C_SW_IMM(insn));
+ xinsn_has_addr_offset = true;
+ break;
+ case OPC_RISC_C_FUNC_FSW_SD:
+ if (riscv_cpu_xlen(env) == 32) { /* C.FSW (RV32) */
+ xinsn = OPC_RISC_FSW;
+ xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
+ xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
+ xinsn = SET_S_IMM(xinsn, GET_C_SW_IMM(insn));
+ xinsn_has_addr_offset = true;
+ } else { /* C.SD (RV64/RV128) */
+ xinsn = OPC_RISC_SD;
+ xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
+ xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
+ xinsn = SET_S_IMM(xinsn, GET_C_SD_IMM(insn));
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ default:
+ break;
+ }
+ break;
+ case OPC_RISC_C_OP_QUAD2: /* Quadrant 2 */
+ switch (GET_C_FUNC(insn)) {
+ case OPC_RISC_C_FUNC_FLDSP_LQSP:
+ if (riscv_cpu_xlen(env) != 128) { /* C.FLDSP (RV32/64) */
+ xinsn = OPC_RISC_FLD;
+ xinsn = SET_RD(xinsn, GET_C_RD(insn));
+ xinsn = SET_RS1(xinsn, 2);
+ xinsn = SET_I_IMM(xinsn, GET_C_LDSP_IMM(insn));
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ case OPC_RISC_C_FUNC_LWSP: /* C.LWSP */
+ xinsn = OPC_RISC_LW;
+ xinsn = SET_RD(xinsn, GET_C_RD(insn));
+ xinsn = SET_RS1(xinsn, 2);
+ xinsn = SET_I_IMM(xinsn, GET_C_LWSP_IMM(insn));
+ xinsn_has_addr_offset = true;
+ break;
+ case OPC_RISC_C_FUNC_FLWSP_LDSP:
+ if (riscv_cpu_xlen(env) == 32) { /* C.FLWSP (RV32) */
+ xinsn = OPC_RISC_FLW;
+ xinsn = SET_RD(xinsn, GET_C_RD(insn));
+ xinsn = SET_RS1(xinsn, 2);
+ xinsn = SET_I_IMM(xinsn, GET_C_LWSP_IMM(insn));
+ xinsn_has_addr_offset = true;
+ } else { /* C.LDSP (RV64/RV128) */
+ xinsn = OPC_RISC_LD;
+ xinsn = SET_RD(xinsn, GET_C_RD(insn));
+ xinsn = SET_RS1(xinsn, 2);
+ xinsn = SET_I_IMM(xinsn, GET_C_LDSP_IMM(insn));
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ case OPC_RISC_C_FUNC_FSDSP_SQSP:
+ if (riscv_cpu_xlen(env) != 128) { /* C.FSDSP (RV32/64) */
+ xinsn = OPC_RISC_FSD;
+ xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
+ xinsn = SET_RS1(xinsn, 2);
+ xinsn = SET_S_IMM(xinsn, GET_C_SDSP_IMM(insn));
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ case OPC_RISC_C_FUNC_SWSP: /* C.SWSP */
+ xinsn = OPC_RISC_SW;
+ xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
+ xinsn = SET_RS1(xinsn, 2);
+ xinsn = SET_S_IMM(xinsn, GET_C_SWSP_IMM(insn));
+ xinsn_has_addr_offset = true;
+ break;
+ case 7:
+ if (riscv_cpu_xlen(env) == 32) { /* C.FSWSP (RV32) */
+ xinsn = OPC_RISC_FSW;
+ xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
+ xinsn = SET_RS1(xinsn, 2);
+ xinsn = SET_S_IMM(xinsn, GET_C_SWSP_IMM(insn));
+ xinsn_has_addr_offset = true;
+ } else { /* C.SDSP (RV64/RV128) */
+ xinsn = OPC_RISC_SD;
+ xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
+ xinsn = SET_RS1(xinsn, 2);
+ xinsn = SET_S_IMM(xinsn, GET_C_SDSP_IMM(insn));
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ default:
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+
+ /*
+ * Clear Bit1 of transformed instruction to indicate that
+ * original instruction was a 16bit instruction
+ */
+ xinsn &= ~((target_ulong)0x2);
+ } else {
+ /* No need to transform 32bit (or wider) instructions */
+ xinsn = insn;
+
+ /* Check for instructions which need address offset */
+ switch (MASK_OP_MAJOR(insn)) {
+ case OPC_RISC_LOAD:
+ case OPC_RISC_STORE:
+ case OPC_RISC_ATOMIC:
+ case OPC_RISC_FP_LOAD:
+ case OPC_RISC_FP_STORE:
+ xinsn_has_addr_offset = true;
+ break;
+ case OPC_RISC_SYSTEM:
+ if (MASK_OP_SYSTEM(insn) == OPC_RISC_HLVHSV) {
+ xinsn_has_addr_offset = true;
+ }
+ break;
+ }
+ }
+
+ if (xinsn_has_addr_offset) {
+ /*
+ * The "Addr. Offset" field in transformed instruction is non-zero
+ * only for misaligned load/store traps which are very unlikely on
+ * QEMU so for now always set "Addr. Offset" to zero.
+ */
+ xinsn = SET_RS1(xinsn, 0);
+ }
+
+ return xinsn;
+}
#endif /* !CONFIG_USER_ONLY */
/*
@@ -1340,6 +1535,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
uint64_t deleg = async ? env->mideleg : env->medeleg;
target_ulong tval = 0;
+ target_ulong tinst = 0;
target_ulong htval = 0;
target_ulong mtval2 = 0;
@@ -1355,18 +1551,22 @@ void riscv_cpu_do_interrupt(CPUState *cs)
if (!async) {
/* set tval to badaddr for traps with address information */
switch (cause) {
- case RISCV_EXCP_INST_GUEST_PAGE_FAULT:
case RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT:
case RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT:
- case RISCV_EXCP_INST_ADDR_MIS:
- case RISCV_EXCP_INST_ACCESS_FAULT:
case RISCV_EXCP_LOAD_ADDR_MIS:
case RISCV_EXCP_STORE_AMO_ADDR_MIS:
case RISCV_EXCP_LOAD_ACCESS_FAULT:
case RISCV_EXCP_STORE_AMO_ACCESS_FAULT:
- case RISCV_EXCP_INST_PAGE_FAULT:
case RISCV_EXCP_LOAD_PAGE_FAULT:
case RISCV_EXCP_STORE_PAGE_FAULT:
+ write_gva = env->two_stage_lookup;
+ tval = env->badaddr;
+ tinst = riscv_transformed_insn(env, env->bins);
+ break;
+ case RISCV_EXCP_INST_GUEST_PAGE_FAULT:
+ case RISCV_EXCP_INST_ADDR_MIS:
+ case RISCV_EXCP_INST_ACCESS_FAULT:
+ case RISCV_EXCP_INST_PAGE_FAULT:
write_gva = env->two_stage_lookup;
tval = env->badaddr;
break;
@@ -1448,6 +1648,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
env->sepc = env->pc;
env->stval = tval;
env->htval = htval;
+ env->htinst = tinst;
env->pc = (env->stvec >> 2 << 2) +
((async && (env->stvec & 3) == 1) ? cause * 4 : 0);
riscv_cpu_set_mode(env, PRV_S);
@@ -1478,6 +1679,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
env->mepc = env->pc;
env->mtval = tval;
env->mtval2 = mtval2;
+ env->mtinst = tinst;
env->pc = (env->mtvec >> 2 << 2) +
((async && (env->mtvec & 3) == 1) ? cause * 4 : 0);
riscv_cpu_set_mode(env, PRV_M);
diff --git a/target/riscv/instmap.h b/target/riscv/instmap.h
index 40b6d2b64d..f564a69d90 100644
--- a/target/riscv/instmap.h
+++ b/target/riscv/instmap.h
@@ -184,6 +184,8 @@ enum {
OPC_RISC_CSRRWI = OPC_RISC_SYSTEM | (0x5 << 12),
OPC_RISC_CSRRSI = OPC_RISC_SYSTEM | (0x6 << 12),
OPC_RISC_CSRRCI = OPC_RISC_SYSTEM | (0x7 << 12),
+
+ OPC_RISC_HLVHSV = OPC_RISC_SYSTEM | (0x4 << 12),
};
#define MASK_OP_FP_LOAD(op) (MASK_OP_MAJOR(op) | (op & (0x7 << 12)))
@@ -316,6 +318,12 @@ enum {
#define GET_RS2(inst) extract32(inst, 20, 5)
#define GET_RD(inst) extract32(inst, 7, 5)
#define GET_IMM(inst) sextract64(inst, 20, 12)
+#define SET_RS1(inst, val) deposit32(inst, 15, 5, val)
+#define SET_RS2(inst, val) deposit32(inst, 20, 5, val)
+#define SET_RD(inst, val) deposit32(inst, 7, 5, val)
+#define SET_I_IMM(inst, val) deposit32(inst, 20, 12, val)
+#define SET_S_IMM(inst, val) \
+ deposit32(deposit32(inst, 7, 5, val), 25, 7, (val) >> 5)
/* RVC decoding macros */
#define GET_C_IMM(inst) (extract32(inst, 2, 5) \
@@ -346,6 +354,8 @@ enum {
| (extract32(inst, 5, 1) << 6))
#define GET_C_LD_IMM(inst) ((extract16(inst, 10, 3) << 3) \
| (extract16(inst, 5, 2) << 6))
+#define GET_C_SW_IMM(inst) GET_C_LW_IMM(inst)
+#define GET_C_SD_IMM(inst) GET_C_LD_IMM(inst)
#define GET_C_J_IMM(inst) ((extract32(inst, 3, 3) << 1) \
| (extract32(inst, 11, 1) << 4) \
| (extract32(inst, 2, 1) << 5) \
@@ -366,4 +376,37 @@ enum {
#define GET_C_RS1S(inst) (8 + extract16(inst, 7, 3))
#define GET_C_RS2S(inst) (8 + extract16(inst, 2, 3))
+#define GET_C_FUNC(inst) extract32(inst, 13, 3)
+#define GET_C_OP(inst) extract32(inst, 0, 2)
+
+enum {
+ /* RVC Quadrants */
+ OPC_RISC_C_OP_QUAD0 = 0x0,
+ OPC_RISC_C_OP_QUAD1 = 0x1,
+ OPC_RISC_C_OP_QUAD2 = 0x2
+};
+
+enum {
+ /* RVC Quadrant 0 */
+ OPC_RISC_C_FUNC_ADDI4SPN = 0x0,
+ OPC_RISC_C_FUNC_FLD_LQ = 0x1,
+ OPC_RISC_C_FUNC_LW = 0x2,
+ OPC_RISC_C_FUNC_FLW_LD = 0x3,
+ OPC_RISC_C_FUNC_FSD_SQ = 0x5,
+ OPC_RISC_C_FUNC_SW = 0x6,
+ OPC_RISC_C_FUNC_FSW_SD = 0x7
+};
+
+enum {
+ /* RVC Quadrant 2 */
+ OPC_RISC_C_FUNC_SLLI_SLLI64 = 0x0,
+ OPC_RISC_C_FUNC_FLDSP_LQSP = 0x1,
+ OPC_RISC_C_FUNC_LWSP = 0x2,
+ OPC_RISC_C_FUNC_FLWSP_LDSP = 0x3,
+ OPC_RISC_C_FUNC_JR_MV_EBREAK_JALR_ADD = 0x4,
+ OPC_RISC_C_FUNC_FSDSP_SQSP = 0x5,
+ OPC_RISC_C_FUNC_SWSP = 0x6,
+ OPC_RISC_C_FUNC_FSWSP_SDSP = 0x7
+};
+
#endif
--
2.34.1
* [PATCH v3 4/4] target/riscv: Force disable extensions if priv spec version does not match
2022-05-26 10:05 [PATCH v3 0/4] QEMU RISC-V nested virtualization fixes Anup Patel
` (2 preceding siblings ...)
2022-05-26 10:05 ` [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt() Anup Patel
@ 2022-05-26 10:05 ` Anup Patel
2022-06-06 1:55 ` Alistair Francis
3 siblings, 1 reply; 11+ messages in thread
From: Anup Patel @ 2022-05-26 10:05 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel
We should disable extensions in riscv_cpu_realize() if the minimum
required priv spec version is not satisfied. This also ensures that
machines with priv spec v1.11 (or lower) cannot enable the H and V
extensions or the various multi-letter extensions.
Fixes: a775398be2e ("target/riscv: Add isa extenstion strings to the
device tree")
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
target/riscv/cpu.c | 56 +++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 51 insertions(+), 5 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index b086eb25da..e6e878ceb3 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -43,9 +43,13 @@ static const char riscv_single_letter_exts[] = "IEMAFDQCPVH";
struct isa_ext_data {
const char *name;
- bool enabled;
+ int min_version;
+ bool *enabled;
};
+#define ISA_EDATA_ENTRY(name, prop) {#name, PRIV_VERSION_1_10_0, &cpu->cfg.prop}
+#define ISA_EDATA_ENTRY2(name, min_ver, prop) {#name, min_ver, &cpu->cfg.prop}
+
const char * const riscv_int_regnames[] = {
"x0/zero", "x1/ra", "x2/sp", "x3/gp", "x4/tp", "x5/t0", "x6/t1",
"x7/t2", "x8/s0", "x9/s1", "x10/a0", "x11/a1", "x12/a2", "x13/a3",
@@ -513,8 +517,42 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
CPURISCVState *env = &cpu->env;
RISCVCPUClass *mcc = RISCV_CPU_GET_CLASS(dev);
CPUClass *cc = CPU_CLASS(mcc);
- int priv_version = -1;
+ int i, priv_version = -1;
Error *local_err = NULL;
+ struct isa_ext_data isa_edata_arr[] = {
+ ISA_EDATA_ENTRY2(h, PRIV_VERSION_1_12_0, ext_h),
+ ISA_EDATA_ENTRY2(v, PRIV_VERSION_1_12_0, ext_v),
+ ISA_EDATA_ENTRY2(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
+ ISA_EDATA_ENTRY2(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
+ ISA_EDATA_ENTRY2(zfh, PRIV_VERSION_1_12_0, ext_zfh),
+ ISA_EDATA_ENTRY2(zfhmin, PRIV_VERSION_1_12_0, ext_zfhmin),
+ ISA_EDATA_ENTRY2(zfinx, PRIV_VERSION_1_12_0, ext_zfinx),
+ ISA_EDATA_ENTRY2(zdinx, PRIV_VERSION_1_12_0, ext_zdinx),
+ ISA_EDATA_ENTRY2(zba, PRIV_VERSION_1_12_0, ext_zba),
+ ISA_EDATA_ENTRY2(zbb, PRIV_VERSION_1_12_0, ext_zbb),
+ ISA_EDATA_ENTRY2(zbc, PRIV_VERSION_1_12_0, ext_zbc),
+ ISA_EDATA_ENTRY2(zbkb, PRIV_VERSION_1_12_0, ext_zbkb),
+ ISA_EDATA_ENTRY2(zbkc, PRIV_VERSION_1_12_0, ext_zbkc),
+ ISA_EDATA_ENTRY2(zbkx, PRIV_VERSION_1_12_0, ext_zbkx),
+ ISA_EDATA_ENTRY2(zbs, PRIV_VERSION_1_12_0, ext_zbs),
+ ISA_EDATA_ENTRY2(zk, PRIV_VERSION_1_12_0, ext_zk),
+ ISA_EDATA_ENTRY2(zkn, PRIV_VERSION_1_12_0, ext_zkn),
+ ISA_EDATA_ENTRY2(zknd, PRIV_VERSION_1_12_0, ext_zknd),
+ ISA_EDATA_ENTRY2(zkne, PRIV_VERSION_1_12_0, ext_zkne),
+ ISA_EDATA_ENTRY2(zknh, PRIV_VERSION_1_12_0, ext_zknh),
+ ISA_EDATA_ENTRY2(zkr, PRIV_VERSION_1_12_0, ext_zkr),
+ ISA_EDATA_ENTRY2(zks, PRIV_VERSION_1_12_0, ext_zks),
+ ISA_EDATA_ENTRY2(zksed, PRIV_VERSION_1_12_0, ext_zksed),
+ ISA_EDATA_ENTRY2(zksh, PRIV_VERSION_1_12_0, ext_zksh),
+ ISA_EDATA_ENTRY2(zkt, PRIV_VERSION_1_12_0, ext_zkt),
+ ISA_EDATA_ENTRY2(zve32f, PRIV_VERSION_1_12_0, ext_zve32f),
+ ISA_EDATA_ENTRY2(zve64f, PRIV_VERSION_1_12_0, ext_zve64f),
+ ISA_EDATA_ENTRY2(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
+ ISA_EDATA_ENTRY2(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
+ ISA_EDATA_ENTRY2(svinval, PRIV_VERSION_1_12_0, ext_svinval),
+ ISA_EDATA_ENTRY2(svnapot, PRIV_VERSION_1_12_0, ext_svnapot),
+ ISA_EDATA_ENTRY2(svpbmt, PRIV_VERSION_1_12_0, ext_svpbmt),
+ };
cpu_exec_realizefn(cs, &local_err);
if (local_err != NULL) {
@@ -541,6 +579,16 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
set_priv_version(env, priv_version);
}
+ /* Force disable extensions if priv spec version does not match */
+ for (i = 0; i < ARRAY_SIZE(isa_edata_arr); i++) {
+ if (*isa_edata_arr[i].enabled &&
+ (env->priv_ver < isa_edata_arr[i].min_version)) {
+ *isa_edata_arr[i].enabled = false;
+ warn_report("privilege spec version does not match for %s extension",
+ isa_edata_arr[i].name);
+ }
+ }
+
if (cpu->cfg.mmu) {
riscv_set_feature(env, RISCV_FEATURE_MMU);
}
@@ -1005,8 +1053,6 @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
device_class_set_props(dc, riscv_cpu_properties);
}
-#define ISA_EDATA_ENTRY(name, prop) {#name, cpu->cfg.prop}
-
static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
{
char *old = *isa_str;
@@ -1064,7 +1110,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
};
for (i = 0; i < ARRAY_SIZE(isa_edata_arr); i++) {
- if (isa_edata_arr[i].enabled) {
+ if (*isa_edata_arr[i].enabled) {
new = g_strconcat(old, "_", isa_edata_arr[i].name, NULL);
g_free(old);
old = new;
--
2.34.1
* Re: [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt()
2022-05-26 10:05 ` [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt() Anup Patel
@ 2022-06-06 1:48 ` Alistair Francis
2022-06-06 1:53 ` Alistair Francis
0 siblings, 1 reply; 11+ messages in thread
From: Alistair Francis @ 2022-06-06 1:48 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis,
Sagar Karandikar, Atish Patra, Anup Patel, open list:RISC-V,
qemu-devel@nongnu.org Developers
On Thu, May 26, 2022 at 8:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> We should write the transformed instruction encoding of the trapped
> instruction to the [m|h]tinst CSR at the time of taking the trap, as
> defined by the RISC-V privileged specification v1.12.
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu_helper.c | 210 +++++++++++++++++++++++++++++++++++++-
> target/riscv/instmap.h | 43 ++++++++
> 2 files changed, 249 insertions(+), 4 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index d99fac9d2d..2a2b6776fb 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -22,6 +22,7 @@
> #include "qemu/main-loop.h"
> #include "cpu.h"
> #include "exec/exec-all.h"
> +#include "instmap.h"
> #include "tcg/tcg-op.h"
> #include "trace.h"
> #include "semihosting/common-semi.h"
> @@ -1316,6 +1317,200 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>
> return true;
> }
> +
> +static target_ulong riscv_transformed_insn(CPURISCVState *env,
> + target_ulong insn)
> +{
> + bool xinsn_has_addr_offset = false;
> + target_ulong xinsn = 0;
> +
> + /*
> + * Only Quadrant 0 and Quadrant 2 of RVC instruction space need to
> + * be uncompressed. The Quadrant 1 of RVC instruction space need
> + * not be transformed because these instructions won't generate
> + * any load/store trap.
> + */
> +
> + if ((insn & 0x3) != 0x3) {
> + /* Transform 16bit instruction into 32bit instruction */
> + switch (GET_C_OP(insn)) {
> + case OPC_RISC_C_OP_QUAD0: /* Quadrant 0 */
> + switch (GET_C_FUNC(insn)) {
> + case OPC_RISC_C_FUNC_FLD_LQ:
> + if (riscv_cpu_xlen(env) != 128) { /* C.FLD (RV32/64) */
> + xinsn = OPC_RISC_FLD;
> + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> + xinsn = SET_I_IMM(xinsn, GET_C_LD_IMM(insn));
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + case OPC_RISC_C_FUNC_LW: /* C.LW */
> + xinsn = OPC_RISC_LW;
> + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> + xinsn = SET_I_IMM(xinsn, GET_C_LW_IMM(insn));
> + xinsn_has_addr_offset = true;
> + break;
> + case OPC_RISC_C_FUNC_FLW_LD:
> + if (riscv_cpu_xlen(env) == 32) { /* C.FLW (RV32) */
> + xinsn = OPC_RISC_FLW;
> + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> + xinsn = SET_I_IMM(xinsn, GET_C_LW_IMM(insn));
> + xinsn_has_addr_offset = true;
> + } else { /* C.LD (RV64/RV128) */
> + xinsn = OPC_RISC_LD;
> + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> + xinsn = SET_I_IMM(xinsn, GET_C_LD_IMM(insn));
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + case OPC_RISC_C_FUNC_FSD_SQ:
> + if (riscv_cpu_xlen(env) != 128) { /* C.FSD (RV32/64) */
> + xinsn = OPC_RISC_FSD;
> + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> + xinsn = SET_S_IMM(xinsn, GET_C_SD_IMM(insn));
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + case OPC_RISC_C_FUNC_SW: /* C.SW */
> + xinsn = OPC_RISC_SW;
> + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> + xinsn = SET_S_IMM(xinsn, GET_C_SW_IMM(insn));
> + xinsn_has_addr_offset = true;
> + break;
> + case OPC_RISC_C_FUNC_FSW_SD:
> + if (riscv_cpu_xlen(env) == 32) { /* C.FSW (RV32) */
> + xinsn = OPC_RISC_FSW;
> + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> + xinsn = SET_S_IMM(xinsn, GET_C_SW_IMM(insn));
> + xinsn_has_addr_offset = true;
> + } else { /* C.SD (RV64/RV128) */
> + xinsn = OPC_RISC_SD;
> + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> + xinsn = SET_S_IMM(xinsn, GET_C_SD_IMM(insn));
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + default:
> + break;
> + }
> + break;
> + case OPC_RISC_C_OP_QUAD2: /* Quadrant 2 */
> + switch (GET_C_FUNC(insn)) {
> + case OPC_RISC_C_FUNC_FLDSP_LQSP:
> + if (riscv_cpu_xlen(env) != 128) { /* C.FLDSP (RV32/64) */
> + xinsn = OPC_RISC_FLD;
> + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> + xinsn = SET_RS1(xinsn, 2);
> + xinsn = SET_I_IMM(xinsn, GET_C_LDSP_IMM(insn));
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + case OPC_RISC_C_FUNC_LWSP: /* C.LWSP */
> + xinsn = OPC_RISC_LW;
> + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> + xinsn = SET_RS1(xinsn, 2);
> + xinsn = SET_I_IMM(xinsn, GET_C_LWSP_IMM(insn));
> + xinsn_has_addr_offset = true;
> + break;
> + case OPC_RISC_C_FUNC_FLWSP_LDSP:
> + if (riscv_cpu_xlen(env) == 32) { /* C.FLWSP (RV32) */
> + xinsn = OPC_RISC_FLW;
> + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> + xinsn = SET_RS1(xinsn, 2);
> + xinsn = SET_I_IMM(xinsn, GET_C_LWSP_IMM(insn));
> + xinsn_has_addr_offset = true;
> + } else { /* C.LDSP (RV64/RV128) */
> + xinsn = OPC_RISC_LD;
> + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> + xinsn = SET_RS1(xinsn, 2);
> + xinsn = SET_I_IMM(xinsn, GET_C_LDSP_IMM(insn));
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + case OPC_RISC_C_FUNC_FSDSP_SQSP:
> + if (riscv_cpu_xlen(env) != 128) { /* C.FSDSP (RV32/64) */
> + xinsn = OPC_RISC_FSD;
> + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> + xinsn = SET_RS1(xinsn, 2);
> + xinsn = SET_S_IMM(xinsn, GET_C_SDSP_IMM(insn));
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + case OPC_RISC_C_FUNC_SWSP: /* C.SWSP */
> + xinsn = OPC_RISC_SW;
> + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> + xinsn = SET_RS1(xinsn, 2);
> + xinsn = SET_S_IMM(xinsn, GET_C_SWSP_IMM(insn));
> + xinsn_has_addr_offset = true;
> + break;
> + case 7:
> + if (riscv_cpu_xlen(env) == 32) { /* C.FSWSP (RV32) */
> + xinsn = OPC_RISC_FSW;
> + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> + xinsn = SET_RS1(xinsn, 2);
> + xinsn = SET_S_IMM(xinsn, GET_C_SWSP_IMM(insn));
> + xinsn_has_addr_offset = true;
> + } else { /* C.SDSP (RV64/RV128) */
> + xinsn = OPC_RISC_SD;
> + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> + xinsn = SET_RS1(xinsn, 2);
> + xinsn = SET_S_IMM(xinsn, GET_C_SDSP_IMM(insn));
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + default:
> + break;
> + }
> + break;
> + default:
> + break;
> + }
> +
> + /*
> + * Clear Bit1 of transformed instruction to indicate that
> + * original instruction was a 16bit instruction
> + */
> + xinsn &= ~((target_ulong)0x2);
> + } else {
> + /* No need to transform 32bit (or wider) instructions */
> + xinsn = insn;
> +
> + /* Check for instructions which need address offset */
> + switch (MASK_OP_MAJOR(insn)) {
> + case OPC_RISC_LOAD:
> + case OPC_RISC_STORE:
> + case OPC_RISC_ATOMIC:
> + case OPC_RISC_FP_LOAD:
> + case OPC_RISC_FP_STORE:
> + xinsn_has_addr_offset = true;
> + break;
> + case OPC_RISC_SYSTEM:
> + if (MASK_OP_SYSTEM(insn) == OPC_RISC_HLVHSV) {
> + xinsn_has_addr_offset = true;
> + }
> + break;
> + }
> + }
> +
> + if (xinsn_has_addr_offset) {
> + /*
> + * The "Addr. Offset" field in transformed instruction is non-zero
> + * only for misaligned load/store traps which are very unlikely on
> + * QEMU so for now always set "Addr. Offset" to zero.
> + */
> + xinsn = SET_RS1(xinsn, 0);
> + }
> +
> + return xinsn;
> +}
> #endif /* !CONFIG_USER_ONLY */
>
> /*
> @@ -1340,6 +1535,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
> uint64_t deleg = async ? env->mideleg : env->medeleg;
> target_ulong tval = 0;
> + target_ulong tinst = 0;
> target_ulong htval = 0;
> target_ulong mtval2 = 0;
>
> @@ -1355,18 +1551,22 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> if (!async) {
> /* set tval to badaddr for traps with address information */
> switch (cause) {
> - case RISCV_EXCP_INST_GUEST_PAGE_FAULT:
> case RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT:
> case RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT:
> - case RISCV_EXCP_INST_ADDR_MIS:
> - case RISCV_EXCP_INST_ACCESS_FAULT:
> case RISCV_EXCP_LOAD_ADDR_MIS:
> case RISCV_EXCP_STORE_AMO_ADDR_MIS:
> case RISCV_EXCP_LOAD_ACCESS_FAULT:
> case RISCV_EXCP_STORE_AMO_ACCESS_FAULT:
> - case RISCV_EXCP_INST_PAGE_FAULT:
> case RISCV_EXCP_LOAD_PAGE_FAULT:
> case RISCV_EXCP_STORE_PAGE_FAULT:
> + write_gva = env->two_stage_lookup;
> + tval = env->badaddr;
> + tinst = riscv_transformed_insn(env, env->bins);
> + break;
> + case RISCV_EXCP_INST_GUEST_PAGE_FAULT:
> + case RISCV_EXCP_INST_ADDR_MIS:
> + case RISCV_EXCP_INST_ACCESS_FAULT:
> + case RISCV_EXCP_INST_PAGE_FAULT:
> write_gva = env->two_stage_lookup;
> tval = env->badaddr;
> break;
> @@ -1448,6 +1648,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> env->sepc = env->pc;
> env->stval = tval;
> env->htval = htval;
> + env->htinst = tinst;
> env->pc = (env->stvec >> 2 << 2) +
> ((async && (env->stvec & 3) == 1) ? cause * 4 : 0);
> riscv_cpu_set_mode(env, PRV_S);
> @@ -1478,6 +1679,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> env->mepc = env->pc;
> env->mtval = tval;
> env->mtval2 = mtval2;
> + env->mtinst = tinst;
> env->pc = (env->mtvec >> 2 << 2) +
> ((async && (env->mtvec & 3) == 1) ? cause * 4 : 0);
> riscv_cpu_set_mode(env, PRV_M);
> diff --git a/target/riscv/instmap.h b/target/riscv/instmap.h
> index 40b6d2b64d..f564a69d90 100644
> --- a/target/riscv/instmap.h
> +++ b/target/riscv/instmap.h
> @@ -184,6 +184,8 @@ enum {
> OPC_RISC_CSRRWI = OPC_RISC_SYSTEM | (0x5 << 12),
> OPC_RISC_CSRRSI = OPC_RISC_SYSTEM | (0x6 << 12),
> OPC_RISC_CSRRCI = OPC_RISC_SYSTEM | (0x7 << 12),
> +
> + OPC_RISC_HLVHSV = OPC_RISC_SYSTEM | (0x4 << 12),
> };
>
> #define MASK_OP_FP_LOAD(op) (MASK_OP_MAJOR(op) | (op & (0x7 << 12)))
> @@ -316,6 +318,12 @@ enum {
> #define GET_RS2(inst) extract32(inst, 20, 5)
> #define GET_RD(inst) extract32(inst, 7, 5)
> #define GET_IMM(inst) sextract64(inst, 20, 12)
> +#define SET_RS1(inst, val) deposit32(inst, 15, 5, val)
> +#define SET_RS2(inst, val) deposit32(inst, 20, 5, val)
> +#define SET_RD(inst, val) deposit32(inst, 7, 5, val)
> +#define SET_I_IMM(inst, val) deposit32(inst, 20, 12, val)
> +#define SET_S_IMM(inst, val) \
> + deposit32(deposit32(inst, 7, 5, val), 25, 7, (val) >> 5)
>
> /* RVC decoding macros */
> #define GET_C_IMM(inst) (extract32(inst, 2, 5) \
> @@ -346,6 +354,8 @@ enum {
> | (extract32(inst, 5, 1) << 6))
> #define GET_C_LD_IMM(inst) ((extract16(inst, 10, 3) << 3) \
> | (extract16(inst, 5, 2) << 6))
> +#define GET_C_SW_IMM(inst) GET_C_LW_IMM(inst)
> +#define GET_C_SD_IMM(inst) GET_C_LD_IMM(inst)
> #define GET_C_J_IMM(inst) ((extract32(inst, 3, 3) << 1) \
> | (extract32(inst, 11, 1) << 4) \
> | (extract32(inst, 2, 1) << 5) \
> @@ -366,4 +376,37 @@ enum {
> #define GET_C_RS1S(inst) (8 + extract16(inst, 7, 3))
> #define GET_C_RS2S(inst) (8 + extract16(inst, 2, 3))
>
> +#define GET_C_FUNC(inst) extract32(inst, 13, 3)
> +#define GET_C_OP(inst) extract32(inst, 0, 2)
> +
> +enum {
> + /* RVC Quadrants */
> + OPC_RISC_C_OP_QUAD0 = 0x0,
> + OPC_RISC_C_OP_QUAD1 = 0x1,
> + OPC_RISC_C_OP_QUAD2 = 0x2
> +};
> +
> +enum {
> + /* RVC Quadrant 0 */
> + OPC_RISC_C_FUNC_ADDI4SPN = 0x0,
> + OPC_RISC_C_FUNC_FLD_LQ = 0x1,
> + OPC_RISC_C_FUNC_LW = 0x2,
> + OPC_RISC_C_FUNC_FLW_LD = 0x3,
> + OPC_RISC_C_FUNC_FSD_SQ = 0x5,
> + OPC_RISC_C_FUNC_SW = 0x6,
> + OPC_RISC_C_FUNC_FSW_SD = 0x7
> +};
> +
> +enum {
> + /* RVC Quadrant 2 */
> + OPC_RISC_C_FUNC_SLLI_SLLI64 = 0x0,
> + OPC_RISC_C_FUNC_FLDSP_LQSP = 0x1,
> + OPC_RISC_C_FUNC_LWSP = 0x2,
> + OPC_RISC_C_FUNC_FLWSP_LDSP = 0x3,
> + OPC_RISC_C_FUNC_JR_MV_EBREAK_JALR_ADD = 0x4,
> + OPC_RISC_C_FUNC_FSDSP_SQSP = 0x5,
> + OPC_RISC_C_FUNC_SWSP = 0x6,
> + OPC_RISC_C_FUNC_FSWSP_SDSP = 0x7
> +};
> +
> #endif
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 11+ messages in thread
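The transformation in the patch above is plain bit-field surgery: extract the compressed register and immediate fields, deposit them into the matching 32-bit encoding, then clear bit 1 so a hypervisor reading htinst can tell the trapped instruction was originally 16-bit. A minimal Python sketch of just the C.LW case (field positions are from the RVC encoding; the helpers only mirror QEMU's extract32/deposit32 and are not its code):

```python
def extract(v, pos, n):
    """Return the n-bit field of v starting at bit pos."""
    return (v >> pos) & ((1 << n) - 1)

def deposit(v, pos, n, field):
    """Place the low n bits of field into v at bit pos."""
    mask = ((1 << n) - 1) << pos
    return (v & ~mask) | ((field << pos) & mask)

OPC_LW = 0x2003  # 32-bit LW: funct3=010, opcode=0000011

def transform_c_lw(c_insn):
    """Sketch of the C.LW -> LW transformation from the patch."""
    rd  = 8 + extract(c_insn, 2, 3)    # rd'  maps to x8..x15
    rs1 = 8 + extract(c_insn, 7, 3)    # rs1' maps to x8..x15
    imm = (extract(c_insn, 6, 1) << 2) \
        | (extract(c_insn, 10, 3) << 3) \
        | (extract(c_insn, 5, 1) << 6)
    x = OPC_LW
    x = deposit(x, 7, 5, rd)           # SET_RD
    x = deposit(x, 15, 5, rs1)         # SET_RS1
    x = deposit(x, 20, 12, imm)        # SET_I_IMM
    return x & ~0x2                    # clear bit 1: origin was 16-bit

# c.lw x10, 4(x11): funct3=010, imm[5:3]=0, rs1'=3 (x11),
# imm[2]=1, imm[6]=0, rd'=2 (x10), op=00
c_lw = (0b010 << 13) | (0 << 10) | (3 << 7) | (1 << 6) | (0 << 5) | (2 << 2)
x = transform_c_lw(c_lw)
assert extract(x, 7, 5) == 10       # rd  = x10
assert extract(x, 15, 5) == 11      # rs1 = x11
assert extract(x, 20, 12) == 4      # offset = 4
assert x & 0x3 == 0x1               # cleared bit 1 marks a 16-bit origin
```

The final `x & 0x3 == 0x1` check is the point of clearing bit 1: a valid 32-bit encoding always has the low two bits set, so the cleared bit unambiguously marks a transformed compressed instruction.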
* Re: [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt()
2022-06-06 1:48 ` Alistair Francis
@ 2022-06-06 1:53 ` Alistair Francis
2022-06-07 3:01 ` Anup Patel
0 siblings, 1 reply; 11+ messages in thread
From: Alistair Francis @ 2022-06-06 1:53 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis,
Sagar Karandikar, Atish Patra, Anup Patel, open list:RISC-V,
qemu-devel@nongnu.org Developers
On Mon, Jun 6, 2022 at 11:48 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Thu, May 26, 2022 at 8:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > We should write transformed instruction encoding of the trapped
> > instruction in [m|h]tinst CSR at time of taking trap as defined
> > by the RISC-V privileged specification v1.12.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Whoops, I thought there was another patch. This doesn't seem to
implement the guest-page fault pseudoinstructions which can be
generated while doing VS-stage translation.
Alistair
>
> Alistair
>
> > ---
> > target/riscv/cpu_helper.c | 210 +++++++++++++++++++++++++++++++++++++-
> > target/riscv/instmap.h | 43 ++++++++
> > 2 files changed, 249 insertions(+), 4 deletions(-)
> >
> > diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> > index d99fac9d2d..2a2b6776fb 100644
> > --- a/target/riscv/cpu_helper.c
> > +++ b/target/riscv/cpu_helper.c
> > @@ -22,6 +22,7 @@
> > #include "qemu/main-loop.h"
> > #include "cpu.h"
> > #include "exec/exec-all.h"
> > +#include "instmap.h"
> > #include "tcg/tcg-op.h"
> > #include "trace.h"
> > #include "semihosting/common-semi.h"
> > @@ -1316,6 +1317,200 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> >
> > return true;
> > }
> > +
> > +static target_ulong riscv_transformed_insn(CPURISCVState *env,
> > + target_ulong insn)
> > +{
> > + bool xinsn_has_addr_offset = false;
> > + target_ulong xinsn = 0;
> > +
> > + /*
> > + * Only Quadrant 0 and Quadrant 2 of RVC instruction space need to
> > + * be uncompressed. The Quadrant 1 of RVC instruction space need
> > + * not be transformed because these instructions won't generate
> > + * any load/store trap.
> > + */
> > +
> > + if ((insn & 0x3) != 0x3) {
> > + /* Transform 16bit instruction into 32bit instruction */
> > + switch (GET_C_OP(insn)) {
> > + case OPC_RISC_C_OP_QUAD0: /* Quadrant 0 */
> > + switch (GET_C_FUNC(insn)) {
> > + case OPC_RISC_C_FUNC_FLD_LQ:
> > + if (riscv_cpu_xlen(env) != 128) { /* C.FLD (RV32/64) */
> > + xinsn = OPC_RISC_FLD;
> > + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > + xinsn = SET_I_IMM(xinsn, GET_C_LD_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + case OPC_RISC_C_FUNC_LW: /* C.LW */
> > + xinsn = OPC_RISC_LW;
> > + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > + xinsn = SET_I_IMM(xinsn, GET_C_LW_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + break;
> > + case OPC_RISC_C_FUNC_FLW_LD:
> > + if (riscv_cpu_xlen(env) == 32) { /* C.FLW (RV32) */
> > + xinsn = OPC_RISC_FLW;
> > + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > + xinsn = SET_I_IMM(xinsn, GET_C_LW_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + } else { /* C.LD (RV64/RV128) */
> > + xinsn = OPC_RISC_LD;
> > + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > + xinsn = SET_I_IMM(xinsn, GET_C_LD_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + case OPC_RISC_C_FUNC_FSD_SQ:
> > + if (riscv_cpu_xlen(env) != 128) { /* C.FSD (RV32/64) */
> > + xinsn = OPC_RISC_FSD;
> > + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > + xinsn = SET_S_IMM(xinsn, GET_C_SD_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + case OPC_RISC_C_FUNC_SW: /* C.SW */
> > + xinsn = OPC_RISC_SW;
> > + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > + xinsn = SET_S_IMM(xinsn, GET_C_SW_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + break;
> > + case OPC_RISC_C_FUNC_FSW_SD:
> > + if (riscv_cpu_xlen(env) == 32) { /* C.FSW (RV32) */
> > + xinsn = OPC_RISC_FSW;
> > + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > + xinsn = SET_S_IMM(xinsn, GET_C_SW_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + } else { /* C.SD (RV64/RV128) */
> > + xinsn = OPC_RISC_SD;
> > + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > + xinsn = SET_S_IMM(xinsn, GET_C_SD_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + default:
> > + break;
> > + }
> > + break;
> > + case OPC_RISC_C_OP_QUAD2: /* Quadrant 2 */
> > + switch (GET_C_FUNC(insn)) {
> > + case OPC_RISC_C_FUNC_FLDSP_LQSP:
> > + if (riscv_cpu_xlen(env) != 128) { /* C.FLDSP (RV32/64) */
> > + xinsn = OPC_RISC_FLD;
> > + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> > + xinsn = SET_RS1(xinsn, 2);
> > + xinsn = SET_I_IMM(xinsn, GET_C_LDSP_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + case OPC_RISC_C_FUNC_LWSP: /* C.LWSP */
> > + xinsn = OPC_RISC_LW;
> > + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> > + xinsn = SET_RS1(xinsn, 2);
> > + xinsn = SET_I_IMM(xinsn, GET_C_LWSP_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + break;
> > + case OPC_RISC_C_FUNC_FLWSP_LDSP:
> > + if (riscv_cpu_xlen(env) == 32) { /* C.FLWSP (RV32) */
> > + xinsn = OPC_RISC_FLW;
> > + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> > + xinsn = SET_RS1(xinsn, 2);
> > + xinsn = SET_I_IMM(xinsn, GET_C_LWSP_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + } else { /* C.LDSP (RV64/RV128) */
> > + xinsn = OPC_RISC_LD;
> > + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> > + xinsn = SET_RS1(xinsn, 2);
> > + xinsn = SET_I_IMM(xinsn, GET_C_LDSP_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + case OPC_RISC_C_FUNC_FSDSP_SQSP:
> > + if (riscv_cpu_xlen(env) != 128) { /* C.FSDSP (RV32/64) */
> > + xinsn = OPC_RISC_FSD;
> > + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> > + xinsn = SET_RS1(xinsn, 2);
> > + xinsn = SET_S_IMM(xinsn, GET_C_SDSP_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + case OPC_RISC_C_FUNC_SWSP: /* C.SWSP */
> > + xinsn = OPC_RISC_SW;
> > + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> > + xinsn = SET_RS1(xinsn, 2);
> > + xinsn = SET_S_IMM(xinsn, GET_C_SWSP_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + break;
> > + case 7:
> > + if (riscv_cpu_xlen(env) == 32) { /* C.FSWSP (RV32) */
> > + xinsn = OPC_RISC_FSW;
> > + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> > + xinsn = SET_RS1(xinsn, 2);
> > + xinsn = SET_S_IMM(xinsn, GET_C_SWSP_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + } else { /* C.SDSP (RV64/RV128) */
> > + xinsn = OPC_RISC_SD;
> > + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> > + xinsn = SET_RS1(xinsn, 2);
> > + xinsn = SET_S_IMM(xinsn, GET_C_SDSP_IMM(insn));
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + default:
> > + break;
> > + }
> > + break;
> > + default:
> > + break;
> > + }
> > +
> > + /*
> > + * Clear Bit1 of transformed instruction to indicate that
> > + * original instruction was a 16-bit instruction
> > + */
> > + xinsn &= ~((target_ulong)0x2);
> > + } else {
> > + /* No need to transform 32bit (or wider) instructions */
> > + xinsn = insn;
> > +
> > + /* Check for instructions which need address offset */
> > + switch (MASK_OP_MAJOR(insn)) {
> > + case OPC_RISC_LOAD:
> > + case OPC_RISC_STORE:
> > + case OPC_RISC_ATOMIC:
> > + case OPC_RISC_FP_LOAD:
> > + case OPC_RISC_FP_STORE:
> > + xinsn_has_addr_offset = true;
> > + break;
> > + case OPC_RISC_SYSTEM:
> > + if (MASK_OP_SYSTEM(insn) == OPC_RISC_HLVHSV) {
> > + xinsn_has_addr_offset = true;
> > + }
> > + break;
> > + }
> > + }
> > +
> > + if (xinsn_has_addr_offset) {
> > + /*
> > + * The "Addr. Offset" field in transformed instruction is non-zero
> > + * only for misaligned load/store traps which are very unlikely on
> > + * QEMU so for now always set "Addr. Offset" to zero.
> > + */
> > + xinsn = SET_RS1(xinsn, 0);
> > + }
> > +
> > + return xinsn;
> > +}
> > #endif /* !CONFIG_USER_ONLY */
> >
> > /*
> > @@ -1340,6 +1535,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> > target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
> > uint64_t deleg = async ? env->mideleg : env->medeleg;
> > target_ulong tval = 0;
> > + target_ulong tinst = 0;
> > target_ulong htval = 0;
> > target_ulong mtval2 = 0;
> >
> > @@ -1355,18 +1551,22 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> > if (!async) {
> > /* set tval to badaddr for traps with address information */
> > switch (cause) {
> > - case RISCV_EXCP_INST_GUEST_PAGE_FAULT:
> > case RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT:
> > case RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT:
> > - case RISCV_EXCP_INST_ADDR_MIS:
> > - case RISCV_EXCP_INST_ACCESS_FAULT:
> > case RISCV_EXCP_LOAD_ADDR_MIS:
> > case RISCV_EXCP_STORE_AMO_ADDR_MIS:
> > case RISCV_EXCP_LOAD_ACCESS_FAULT:
> > case RISCV_EXCP_STORE_AMO_ACCESS_FAULT:
> > - case RISCV_EXCP_INST_PAGE_FAULT:
> > case RISCV_EXCP_LOAD_PAGE_FAULT:
> > case RISCV_EXCP_STORE_PAGE_FAULT:
> > + write_gva = env->two_stage_lookup;
> > + tval = env->badaddr;
> > + tinst = riscv_transformed_insn(env, env->bins);
> > + break;
> > + case RISCV_EXCP_INST_GUEST_PAGE_FAULT:
> > + case RISCV_EXCP_INST_ADDR_MIS:
> > + case RISCV_EXCP_INST_ACCESS_FAULT:
> > + case RISCV_EXCP_INST_PAGE_FAULT:
> > write_gva = env->two_stage_lookup;
> > tval = env->badaddr;
> > break;
> > @@ -1448,6 +1648,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> > env->sepc = env->pc;
> > env->stval = tval;
> > env->htval = htval;
> > + env->htinst = tinst;
> > env->pc = (env->stvec >> 2 << 2) +
> > ((async && (env->stvec & 3) == 1) ? cause * 4 : 0);
> > riscv_cpu_set_mode(env, PRV_S);
> > @@ -1478,6 +1679,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> > env->mepc = env->pc;
> > env->mtval = tval;
> > env->mtval2 = mtval2;
> > + env->mtinst = tinst;
> > env->pc = (env->mtvec >> 2 << 2) +
> > ((async && (env->mtvec & 3) == 1) ? cause * 4 : 0);
> > riscv_cpu_set_mode(env, PRV_M);
> > diff --git a/target/riscv/instmap.h b/target/riscv/instmap.h
> > index 40b6d2b64d..f564a69d90 100644
> > --- a/target/riscv/instmap.h
> > +++ b/target/riscv/instmap.h
> > @@ -184,6 +184,8 @@ enum {
> > OPC_RISC_CSRRWI = OPC_RISC_SYSTEM | (0x5 << 12),
> > OPC_RISC_CSRRSI = OPC_RISC_SYSTEM | (0x6 << 12),
> > OPC_RISC_CSRRCI = OPC_RISC_SYSTEM | (0x7 << 12),
> > +
> > + OPC_RISC_HLVHSV = OPC_RISC_SYSTEM | (0x4 << 12),
> > };
> >
> > #define MASK_OP_FP_LOAD(op) (MASK_OP_MAJOR(op) | (op & (0x7 << 12)))
> > @@ -316,6 +318,12 @@ enum {
> > #define GET_RS2(inst) extract32(inst, 20, 5)
> > #define GET_RD(inst) extract32(inst, 7, 5)
> > #define GET_IMM(inst) sextract64(inst, 20, 12)
> > +#define SET_RS1(inst, val) deposit32(inst, 15, 5, val)
> > +#define SET_RS2(inst, val) deposit32(inst, 20, 5, val)
> > +#define SET_RD(inst, val) deposit32(inst, 7, 5, val)
> > +#define SET_I_IMM(inst, val) deposit32(inst, 20, 12, val)
> > +#define SET_S_IMM(inst, val) \
> > + deposit32(deposit32(inst, 7, 5, val), 25, 7, (val) >> 5)
> >
> > /* RVC decoding macros */
> > #define GET_C_IMM(inst) (extract32(inst, 2, 5) \
> > @@ -346,6 +354,8 @@ enum {
> > | (extract32(inst, 5, 1) << 6))
> > #define GET_C_LD_IMM(inst) ((extract16(inst, 10, 3) << 3) \
> > | (extract16(inst, 5, 2) << 6))
> > +#define GET_C_SW_IMM(inst) GET_C_LW_IMM(inst)
> > +#define GET_C_SD_IMM(inst) GET_C_LD_IMM(inst)
> > #define GET_C_J_IMM(inst) ((extract32(inst, 3, 3) << 1) \
> > | (extract32(inst, 11, 1) << 4) \
> > | (extract32(inst, 2, 1) << 5) \
> > @@ -366,4 +376,37 @@ enum {
> > #define GET_C_RS1S(inst) (8 + extract16(inst, 7, 3))
> > #define GET_C_RS2S(inst) (8 + extract16(inst, 2, 3))
> >
> > +#define GET_C_FUNC(inst) extract32(inst, 13, 3)
> > +#define GET_C_OP(inst) extract32(inst, 0, 2)
> > +
> > +enum {
> > + /* RVC Quadrants */
> > + OPC_RISC_C_OP_QUAD0 = 0x0,
> > + OPC_RISC_C_OP_QUAD1 = 0x1,
> > + OPC_RISC_C_OP_QUAD2 = 0x2
> > +};
> > +
> > +enum {
> > + /* RVC Quadrant 0 */
> > + OPC_RISC_C_FUNC_ADDI4SPN = 0x0,
> > + OPC_RISC_C_FUNC_FLD_LQ = 0x1,
> > + OPC_RISC_C_FUNC_LW = 0x2,
> > + OPC_RISC_C_FUNC_FLW_LD = 0x3,
> > + OPC_RISC_C_FUNC_FSD_SQ = 0x5,
> > + OPC_RISC_C_FUNC_SW = 0x6,
> > + OPC_RISC_C_FUNC_FSW_SD = 0x7
> > +};
> > +
> > +enum {
> > + /* RVC Quadrant 2 */
> > + OPC_RISC_C_FUNC_SLLI_SLLI64 = 0x0,
> > + OPC_RISC_C_FUNC_FLDSP_LQSP = 0x1,
> > + OPC_RISC_C_FUNC_LWSP = 0x2,
> > + OPC_RISC_C_FUNC_FLWSP_LDSP = 0x3,
> > + OPC_RISC_C_FUNC_JR_MV_EBREAK_JALR_ADD = 0x4,
> > + OPC_RISC_C_FUNC_FSDSP_SQSP = 0x5,
> > + OPC_RISC_C_FUNC_SWSP = 0x6,
> > + OPC_RISC_C_FUNC_FSWSP_SDSP = 0x7
> > +};
> > +
> > #endif
> > --
> > 2.34.1
> >
> >
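As background on the SET_S_IMM macro added by the patch above: the S-format splits its 12-bit immediate into imm[4:0] at bits 11..7 and imm[11:5] at bits 31..25, which is why the macro needs two deposits. A quick sketch (the OPC_SW value and helper names are illustrative, not QEMU's code):

```python
def deposit32(inst, pos, n, field):
    """Place the low n bits of field at bit pos (like QEMU's deposit32)."""
    mask = ((1 << n) - 1) << pos
    return (inst & ~mask) | ((field << pos) & mask)

def extract32(inst, pos, n):
    return (inst >> pos) & ((1 << n) - 1)

OPC_SW = 0x23 | (0x2 << 12)  # 32-bit SW: opcode 0100011, funct3 010

def set_s_imm(inst, val):
    # imm[4:0] -> bits 11..7, imm[11:5] -> bits 31..25
    return deposit32(deposit32(inst, 7, 5, val), 25, 7, val >> 5)

x = set_s_imm(OPC_SW, 40)
# reassemble the immediate from the two split fields to check the round trip
imm = extract32(x, 7, 5) | (extract32(x, 25, 7) << 5)
assert imm == 40
```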
* Re: [PATCH v3 4/4] target/riscv: Force disable extensions if priv spec version does not match
2022-05-26 10:05 ` [PATCH v3 4/4] target/riscv: Force disable extensions if priv spec version does not match Anup Patel
@ 2022-06-06 1:55 ` Alistair Francis
2022-06-07 2:50 ` Anup Patel
0 siblings, 1 reply; 11+ messages in thread
From: Alistair Francis @ 2022-06-06 1:55 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, Palmer Dabbelt, Alistair Francis,
Sagar Karandikar, Atish Patra, Anup Patel, open list:RISC-V,
qemu-devel@nongnu.org Developers
On Thu, May 26, 2022 at 8:09 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> We should disable extensions in riscv_cpu_realize() if minimum required
> priv spec version is not satisfied. This also ensures that machines with
> priv spec v1.11 (or lower) cannot enable H, V, and various multi-letter
> extensions.
>
> Fixes: a775398be2e ("target/riscv: Add isa extenstion strings to the
> device tree")
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
> target/riscv/cpu.c | 56 +++++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 51 insertions(+), 5 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index b086eb25da..e6e878ceb3 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -43,9 +43,13 @@ static const char riscv_single_letter_exts[] = "IEMAFDQCPVH";
>
> struct isa_ext_data {
> const char *name;
> - bool enabled;
> + int min_version;
> + bool *enabled;
> };
>
> +#define ISA_EDATA_ENTRY(name, prop) {#name, PRIV_VERSION_1_10_0, &cpu->cfg.prop}
> +#define ISA_EDATA_ENTRY2(name, min_ver, prop) {#name, min_ver, &cpu->cfg.prop}
> +
> const char * const riscv_int_regnames[] = {
> "x0/zero", "x1/ra", "x2/sp", "x3/gp", "x4/tp", "x5/t0", "x6/t1",
> "x7/t2", "x8/s0", "x9/s1", "x10/a0", "x11/a1", "x12/a2", "x13/a3",
> @@ -513,8 +517,42 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> CPURISCVState *env = &cpu->env;
> RISCVCPUClass *mcc = RISCV_CPU_GET_CLASS(dev);
> CPUClass *cc = CPU_CLASS(mcc);
> - int priv_version = -1;
> + int i, priv_version = -1;
> Error *local_err = NULL;
> + struct isa_ext_data isa_edata_arr[] = {
> + ISA_EDATA_ENTRY2(h, PRIV_VERSION_1_12_0, ext_h),
> + ISA_EDATA_ENTRY2(v, PRIV_VERSION_1_12_0, ext_v),
> + ISA_EDATA_ENTRY2(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
> + ISA_EDATA_ENTRY2(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
> + ISA_EDATA_ENTRY2(zfh, PRIV_VERSION_1_12_0, ext_zfh),
> + ISA_EDATA_ENTRY2(zfhmin, PRIV_VERSION_1_12_0, ext_zfhmin),
> + ISA_EDATA_ENTRY2(zfinx, PRIV_VERSION_1_12_0, ext_zfinx),
> + ISA_EDATA_ENTRY2(zdinx, PRIV_VERSION_1_12_0, ext_zdinx),
> + ISA_EDATA_ENTRY2(zba, PRIV_VERSION_1_12_0, ext_zba),
> + ISA_EDATA_ENTRY2(zbb, PRIV_VERSION_1_12_0, ext_zbb),
> + ISA_EDATA_ENTRY2(zbc, PRIV_VERSION_1_12_0, ext_zbc),
> + ISA_EDATA_ENTRY2(zbkb, PRIV_VERSION_1_12_0, ext_zbkb),
> + ISA_EDATA_ENTRY2(zbkc, PRIV_VERSION_1_12_0, ext_zbkc),
> + ISA_EDATA_ENTRY2(zbkx, PRIV_VERSION_1_12_0, ext_zbkx),
> + ISA_EDATA_ENTRY2(zbs, PRIV_VERSION_1_12_0, ext_zbs),
> + ISA_EDATA_ENTRY2(zk, PRIV_VERSION_1_12_0, ext_zk),
> + ISA_EDATA_ENTRY2(zkn, PRIV_VERSION_1_12_0, ext_zkn),
> + ISA_EDATA_ENTRY2(zknd, PRIV_VERSION_1_12_0, ext_zknd),
> + ISA_EDATA_ENTRY2(zkne, PRIV_VERSION_1_12_0, ext_zkne),
> + ISA_EDATA_ENTRY2(zknh, PRIV_VERSION_1_12_0, ext_zknh),
> + ISA_EDATA_ENTRY2(zkr, PRIV_VERSION_1_12_0, ext_zkr),
> + ISA_EDATA_ENTRY2(zks, PRIV_VERSION_1_12_0, ext_zks),
> + ISA_EDATA_ENTRY2(zksed, PRIV_VERSION_1_12_0, ext_zksed),
> + ISA_EDATA_ENTRY2(zksh, PRIV_VERSION_1_12_0, ext_zksh),
> + ISA_EDATA_ENTRY2(zkt, PRIV_VERSION_1_12_0, ext_zkt),
> + ISA_EDATA_ENTRY2(zve32f, PRIV_VERSION_1_12_0, ext_zve32f),
> + ISA_EDATA_ENTRY2(zve64f, PRIV_VERSION_1_12_0, ext_zve64f),
> + ISA_EDATA_ENTRY2(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
> + ISA_EDATA_ENTRY2(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> + ISA_EDATA_ENTRY2(svinval, PRIV_VERSION_1_12_0, ext_svinval),
> + ISA_EDATA_ENTRY2(svnapot, PRIV_VERSION_1_12_0, ext_svnapot),
> + ISA_EDATA_ENTRY2(svpbmt, PRIV_VERSION_1_12_0, ext_svpbmt),
> + };
>
> cpu_exec_realizefn(cs, &local_err);
> if (local_err != NULL) {
> @@ -541,6 +579,16 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> set_priv_version(env, priv_version);
> }
>
> + /* Force disable extensions if priv spec version does not match */
> + for (i = 0; i < ARRAY_SIZE(isa_edata_arr); i++) {
> + if (*isa_edata_arr[i].enabled &&
> + (env->priv_ver < isa_edata_arr[i].min_version)) {
> + *isa_edata_arr[i].enabled = false;
> + warn_report("privilege spec version does not match for %s extension",
> + isa_edata_arr[i].name);
This should indicate to the user that we are disabling the extension
because of this
Alistair
> + }
> + }
> +
> if (cpu->cfg.mmu) {
> riscv_set_feature(env, RISCV_FEATURE_MMU);
> }
> @@ -1005,8 +1053,6 @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
> device_class_set_props(dc, riscv_cpu_properties);
> }
>
> -#define ISA_EDATA_ENTRY(name, prop) {#name, cpu->cfg.prop}
> -
> static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
> {
> char *old = *isa_str;
> @@ -1064,7 +1110,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
> };
>
> for (i = 0; i < ARRAY_SIZE(isa_edata_arr); i++) {
> - if (isa_edata_arr[i].enabled) {
> + if (*isa_edata_arr[i].enabled) {
> new = g_strconcat(old, "_", isa_edata_arr[i].name, NULL);
> g_free(old);
> old = new;
> --
> 2.34.1
>
>
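The gating loop in the patch above boils down to: walk a (name, minimum priv spec version) table and turn off any enabled extension whose requirement exceeds the CPU's configured version, warning as you go. A small Python sketch of the idea (the version constants and extension subset are illustrative only):

```python
PRIV_1_10, PRIV_1_11, PRIV_1_12 = 10, 11, 12  # illustrative ordering only

# (name, minimum priv spec version) pairs, analogous to isa_edata_arr
# in the patch; only a small subset is shown here.
EXT_MIN_VER = [("h", PRIV_1_12), ("v", PRIV_1_12), ("zicsr", PRIV_1_10)]

def gate_extensions(enabled, priv_ver):
    """Disable any enabled extension whose minimum priv spec version
    exceeds the CPU's, returning the warnings a user would see."""
    warnings = []
    for name, min_ver in EXT_MIN_VER:
        if enabled.get(name) and priv_ver < min_ver:
            enabled[name] = False
            warnings.append(f"disabling {name} extension: "
                            f"priv spec version requirement not met")
    return warnings

cfg = {"h": True, "v": True, "zicsr": True}
msgs = gate_extensions(cfg, PRIV_1_11)  # machine with priv spec v1.11
assert cfg == {"h": False, "v": False, "zicsr": True}
```

With priv spec v1.11 configured, H and V are disabled while Zicsr survives, matching the commit message's claim about v1.11 machines not being able to enable H, V, and the newer multi-letter extensions.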
* Re: [PATCH v3 4/4] target/riscv: Force disable extensions if priv spec version does not match
2022-06-06 1:55 ` Alistair Francis
@ 2022-06-07 2:50 ` Anup Patel
0 siblings, 0 replies; 11+ messages in thread
From: Anup Patel @ 2022-06-07 2:50 UTC (permalink / raw)
To: Alistair Francis
Cc: Anup Patel, Peter Maydell, Palmer Dabbelt, Alistair Francis,
Sagar Karandikar, Atish Patra, open list:RISC-V,
qemu-devel@nongnu.org Developers
On Mon, Jun 6, 2022 at 7:25 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Thu, May 26, 2022 at 8:09 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > We should disable extensions in riscv_cpu_realize() if minimum required
> > priv spec version is not satisfied. This also ensures that machines with
> > priv spec v1.11 (or lower) cannot enable H, V, and various multi-letter
> > extensions.
> >
> > Fixes: a775398be2e ("target/riscv: Add isa extenstion strings to the
> > device tree")
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> > target/riscv/cpu.c | 56 +++++++++++++++++++++++++++++++++++++++++-----
> > 1 file changed, 51 insertions(+), 5 deletions(-)
> >
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index b086eb25da..e6e878ceb3 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -43,9 +43,13 @@ static const char riscv_single_letter_exts[] = "IEMAFDQCPVH";
> >
> > struct isa_ext_data {
> > const char *name;
> > - bool enabled;
> > + int min_version;
> > + bool *enabled;
> > };
> >
> > +#define ISA_EDATA_ENTRY(name, prop) {#name, PRIV_VERSION_1_10_0, &cpu->cfg.prop}
> > +#define ISA_EDATA_ENTRY2(name, min_ver, prop) {#name, min_ver, &cpu->cfg.prop}
> > +
> > const char * const riscv_int_regnames[] = {
> > "x0/zero", "x1/ra", "x2/sp", "x3/gp", "x4/tp", "x5/t0", "x6/t1",
> > "x7/t2", "x8/s0", "x9/s1", "x10/a0", "x11/a1", "x12/a2", "x13/a3",
> > @@ -513,8 +517,42 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> > CPURISCVState *env = &cpu->env;
> > RISCVCPUClass *mcc = RISCV_CPU_GET_CLASS(dev);
> > CPUClass *cc = CPU_CLASS(mcc);
> > - int priv_version = -1;
> > + int i, priv_version = -1;
> > Error *local_err = NULL;
> > + struct isa_ext_data isa_edata_arr[] = {
> > + ISA_EDATA_ENTRY2(h, PRIV_VERSION_1_12_0, ext_h),
> > + ISA_EDATA_ENTRY2(v, PRIV_VERSION_1_12_0, ext_v),
> > + ISA_EDATA_ENTRY2(zicsr, PRIV_VERSION_1_10_0, ext_icsr),
> > + ISA_EDATA_ENTRY2(zifencei, PRIV_VERSION_1_10_0, ext_ifencei),
> > + ISA_EDATA_ENTRY2(zfh, PRIV_VERSION_1_12_0, ext_zfh),
> > + ISA_EDATA_ENTRY2(zfhmin, PRIV_VERSION_1_12_0, ext_zfhmin),
> > + ISA_EDATA_ENTRY2(zfinx, PRIV_VERSION_1_12_0, ext_zfinx),
> > + ISA_EDATA_ENTRY2(zdinx, PRIV_VERSION_1_12_0, ext_zdinx),
> > + ISA_EDATA_ENTRY2(zba, PRIV_VERSION_1_12_0, ext_zba),
> > + ISA_EDATA_ENTRY2(zbb, PRIV_VERSION_1_12_0, ext_zbb),
> > + ISA_EDATA_ENTRY2(zbc, PRIV_VERSION_1_12_0, ext_zbc),
> > + ISA_EDATA_ENTRY2(zbkb, PRIV_VERSION_1_12_0, ext_zbkb),
> > + ISA_EDATA_ENTRY2(zbkc, PRIV_VERSION_1_12_0, ext_zbkc),
> > + ISA_EDATA_ENTRY2(zbkx, PRIV_VERSION_1_12_0, ext_zbkx),
> > + ISA_EDATA_ENTRY2(zbs, PRIV_VERSION_1_12_0, ext_zbs),
> > + ISA_EDATA_ENTRY2(zk, PRIV_VERSION_1_12_0, ext_zk),
> > + ISA_EDATA_ENTRY2(zkn, PRIV_VERSION_1_12_0, ext_zkn),
> > + ISA_EDATA_ENTRY2(zknd, PRIV_VERSION_1_12_0, ext_zknd),
> > + ISA_EDATA_ENTRY2(zkne, PRIV_VERSION_1_12_0, ext_zkne),
> > + ISA_EDATA_ENTRY2(zknh, PRIV_VERSION_1_12_0, ext_zknh),
> > + ISA_EDATA_ENTRY2(zkr, PRIV_VERSION_1_12_0, ext_zkr),
> > + ISA_EDATA_ENTRY2(zks, PRIV_VERSION_1_12_0, ext_zks),
> > + ISA_EDATA_ENTRY2(zksed, PRIV_VERSION_1_12_0, ext_zksed),
> > + ISA_EDATA_ENTRY2(zksh, PRIV_VERSION_1_12_0, ext_zksh),
> > + ISA_EDATA_ENTRY2(zkt, PRIV_VERSION_1_12_0, ext_zkt),
> > + ISA_EDATA_ENTRY2(zve32f, PRIV_VERSION_1_12_0, ext_zve32f),
> > + ISA_EDATA_ENTRY2(zve64f, PRIV_VERSION_1_12_0, ext_zve64f),
> > + ISA_EDATA_ENTRY2(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
> > + ISA_EDATA_ENTRY2(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> > + ISA_EDATA_ENTRY2(svinval, PRIV_VERSION_1_12_0, ext_svinval),
> > + ISA_EDATA_ENTRY2(svnapot, PRIV_VERSION_1_12_0, ext_svnapot),
> > + ISA_EDATA_ENTRY2(svpbmt, PRIV_VERSION_1_12_0, ext_svpbmt),
> > + };
> >
> > cpu_exec_realizefn(cs, &local_err);
> > if (local_err != NULL) {
> > @@ -541,6 +579,16 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> > set_priv_version(env, priv_version);
> > }
> >
> > + /* Force disable extensions if priv spec version does not match */
> > + for (i = 0; i < ARRAY_SIZE(isa_edata_arr); i++) {
> > + if (*isa_edata_arr[i].enabled &&
> > + (env->priv_ver < isa_edata_arr[i].min_version)) {
> > + *isa_edata_arr[i].enabled = false;
> > + warn_report("privilege spec version does not match for %s extension",
> > + isa_edata_arr[i].name);
>
> This should indicate to the user that we are disabling the extension
> because of this
Sure, I will update.
Regards,
Anup
>
> Alistair
>
> > + }
> > + }
> > +
> > if (cpu->cfg.mmu) {
> > riscv_set_feature(env, RISCV_FEATURE_MMU);
> > }
> > @@ -1005,8 +1053,6 @@ static void riscv_cpu_class_init(ObjectClass *c, void *data)
> > device_class_set_props(dc, riscv_cpu_properties);
> > }
> >
> > -#define ISA_EDATA_ENTRY(name, prop) {#name, cpu->cfg.prop}
> > -
> > static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
> > {
> > char *old = *isa_str;
> > @@ -1064,7 +1110,7 @@ static void riscv_isa_string_ext(RISCVCPU *cpu, char **isa_str, int max_str_len)
> > };
> >
> > for (i = 0; i < ARRAY_SIZE(isa_edata_arr); i++) {
> > - if (isa_edata_arr[i].enabled) {
> > + if (*isa_edata_arr[i].enabled) {
> > new = g_strconcat(old, "_", isa_edata_arr[i].name, NULL);
> > g_free(old);
> > old = new;
> > --
> > 2.34.1
> >
> >
* Re: [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt()
2022-06-06 1:53 ` Alistair Francis
@ 2022-06-07 3:01 ` Anup Patel
2022-06-07 3:04 ` Anup Patel
0 siblings, 1 reply; 11+ messages in thread
From: Anup Patel @ 2022-06-07 3:01 UTC (permalink / raw)
To: Alistair Francis
Cc: Anup Patel, Peter Maydell, Palmer Dabbelt, Alistair Francis,
Sagar Karandikar, Atish Patra, open list:RISC-V,
qemu-devel@nongnu.org Developers
On Mon, Jun 6, 2022 at 7:23 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Mon, Jun 6, 2022 at 11:48 AM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Thu, May 26, 2022 at 8:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > >
> > > We should write transformed instruction encoding of the trapped
> > > instruction in [m|h]tinst CSR at time of taking trap as defined
> > > by the RISC-V privileged specification v1.12.
> > >
> > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> >
> > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>
> Whoops, I thought there was another patch. This doesn't seem to
> implement the guest-page fault pseudoinstructions which can be
> generated while doing VS-stage translation.
I did not understand. Which guest-page fault pseudo-instructions?
Regards,
Anup
>
> Alistair
>
> >
> > Alistair
> >
> > > ---
> > > target/riscv/cpu_helper.c | 210 +++++++++++++++++++++++++++++++++++++-
> > > target/riscv/instmap.h | 43 ++++++++
> > > 2 files changed, 249 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> > > index d99fac9d2d..2a2b6776fb 100644
> > > --- a/target/riscv/cpu_helper.c
> > > +++ b/target/riscv/cpu_helper.c
> > > @@ -22,6 +22,7 @@
> > > #include "qemu/main-loop.h"
> > > #include "cpu.h"
> > > #include "exec/exec-all.h"
> > > +#include "instmap.h"
> > > #include "tcg/tcg-op.h"
> > > #include "trace.h"
> > > #include "semihosting/common-semi.h"
> > > @@ -1316,6 +1317,200 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> > >
> > > return true;
> > > }
> > > +
> > > +static target_ulong riscv_transformed_insn(CPURISCVState *env,
> > > + target_ulong insn)
> > > +{
> > > + bool xinsn_has_addr_offset = false;
> > > + target_ulong xinsn = 0;
> > > +
> > > + /*
> > > + * Only Quadrant 0 and Quadrant 2 of RVC instruction space need to
> > > + * be uncompressed. The Quadrant 1 of RVC instruction space need
> > > + * not be transformed because these instructions won't generate
> > > + * any load/store trap.
> > > + */
> > > +
> > > + if ((insn & 0x3) != 0x3) {
> > > + /* Transform 16bit instruction into 32bit instruction */
> > > + switch (GET_C_OP(insn)) {
> > > + case OPC_RISC_C_OP_QUAD0: /* Quadrant 0 */
> > > + switch (GET_C_FUNC(insn)) {
> > > + case OPC_RISC_C_FUNC_FLD_LQ:
> > > + if (riscv_cpu_xlen(env) != 128) { /* C.FLD (RV32/64) */
> > > + xinsn = OPC_RISC_FLD;
> > > + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> > > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > > + xinsn = SET_I_IMM(xinsn, GET_C_LD_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + case OPC_RISC_C_FUNC_LW: /* C.LW */
> > > + xinsn = OPC_RISC_LW;
> > > + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> > > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > > + xinsn = SET_I_IMM(xinsn, GET_C_LW_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + break;
> > > + case OPC_RISC_C_FUNC_FLW_LD:
> > > + if (riscv_cpu_xlen(env) == 32) { /* C.FLW (RV32) */
> > > + xinsn = OPC_RISC_FLW;
> > > + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> > > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > > + xinsn = SET_I_IMM(xinsn, GET_C_LW_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + } else { /* C.LD (RV64/RV128) */
> > > + xinsn = OPC_RISC_LD;
> > > + xinsn = SET_RD(xinsn, GET_C_RS2S(insn));
> > > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > > + xinsn = SET_I_IMM(xinsn, GET_C_LD_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + case OPC_RISC_C_FUNC_FSD_SQ:
> > > + if (riscv_cpu_xlen(env) != 128) { /* C.FSD (RV32/64) */
> > > + xinsn = OPC_RISC_FSD;
> > > + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> > > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > > + xinsn = SET_S_IMM(xinsn, GET_C_SD_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + case OPC_RISC_C_FUNC_SW: /* C.SW */
> > > + xinsn = OPC_RISC_SW;
> > > + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> > > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > > + xinsn = SET_S_IMM(xinsn, GET_C_SW_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + break;
> > > + case OPC_RISC_C_FUNC_FSW_SD:
> > > + if (riscv_cpu_xlen(env) == 32) { /* C.FSW (RV32) */
> > > + xinsn = OPC_RISC_FSW;
> > > + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> > > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > > + xinsn = SET_S_IMM(xinsn, GET_C_SW_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + } else { /* C.SD (RV64/RV128) */
> > > + xinsn = OPC_RISC_SD;
> > > + xinsn = SET_RS2(xinsn, GET_C_RS2S(insn));
> > > + xinsn = SET_RS1(xinsn, GET_C_RS1S(insn));
> > > + xinsn = SET_S_IMM(xinsn, GET_C_SD_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + default:
> > > + break;
> > > + }
> > > + break;
> > > + case OPC_RISC_C_OP_QUAD2: /* Quadrant 2 */
> > > + switch (GET_C_FUNC(insn)) {
> > > + case OPC_RISC_C_FUNC_FLDSP_LQSP:
> > > + if (riscv_cpu_xlen(env) != 128) { /* C.FLDSP (RV32/64) */
> > > + xinsn = OPC_RISC_FLD;
> > > + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> > > + xinsn = SET_RS1(xinsn, 2);
> > > + xinsn = SET_I_IMM(xinsn, GET_C_LDSP_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + case OPC_RISC_C_FUNC_LWSP: /* C.LWSP */
> > > + xinsn = OPC_RISC_LW;
> > > + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> > > + xinsn = SET_RS1(xinsn, 2);
> > > + xinsn = SET_I_IMM(xinsn, GET_C_LWSP_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + break;
> > > + case OPC_RISC_C_FUNC_FLWSP_LDSP:
> > > + if (riscv_cpu_xlen(env) == 32) { /* C.FLWSP (RV32) */
> > > + xinsn = OPC_RISC_FLW;
> > > + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> > > + xinsn = SET_RS1(xinsn, 2);
> > > + xinsn = SET_I_IMM(xinsn, GET_C_LWSP_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + } else { /* C.LDSP (RV64/RV128) */
> > > + xinsn = OPC_RISC_LD;
> > > + xinsn = SET_RD(xinsn, GET_C_RD(insn));
> > > + xinsn = SET_RS1(xinsn, 2);
> > > + xinsn = SET_I_IMM(xinsn, GET_C_LDSP_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + case OPC_RISC_C_FUNC_FSDSP_SQSP:
> > > + if (riscv_cpu_xlen(env) != 128) { /* C.FSDSP (RV32/64) */
> > > + xinsn = OPC_RISC_FSD;
> > > + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> > > + xinsn = SET_RS1(xinsn, 2);
> > > + xinsn = SET_S_IMM(xinsn, GET_C_SDSP_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + case OPC_RISC_C_FUNC_SWSP: /* C.SWSP */
> > > + xinsn = OPC_RISC_SW;
> > > + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> > > + xinsn = SET_RS1(xinsn, 2);
> > > + xinsn = SET_S_IMM(xinsn, GET_C_SWSP_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + break;
> > > + case OPC_RISC_C_FUNC_FSWSP_SDSP:
> > > + if (riscv_cpu_xlen(env) == 32) { /* C.FSWSP (RV32) */
> > > + xinsn = OPC_RISC_FSW;
> > > + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> > > + xinsn = SET_RS1(xinsn, 2);
> > > + xinsn = SET_S_IMM(xinsn, GET_C_SWSP_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + } else { /* C.SDSP (RV64/RV128) */
> > > + xinsn = OPC_RISC_SD;
> > > + xinsn = SET_RS2(xinsn, GET_C_RS2(insn));
> > > + xinsn = SET_RS1(xinsn, 2);
> > > + xinsn = SET_S_IMM(xinsn, GET_C_SDSP_IMM(insn));
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + default:
> > > + break;
> > > + }
> > > + break;
> > > + default:
> > > + break;
> > > + }
> > > +
> > > + /*
> > > + * Clear bit 1 of the transformed instruction to indicate that
> > > + * the original instruction was a 16-bit instruction.
> > > + */
> > > + xinsn &= ~((target_ulong)0x2);
> > > + } else {
> > > + /* No need to transform 32bit (or wider) instructions */
> > > + xinsn = insn;
> > > +
> > > + /* Check for instructions which need address offset */
> > > + switch (MASK_OP_MAJOR(insn)) {
> > > + case OPC_RISC_LOAD:
> > > + case OPC_RISC_STORE:
> > > + case OPC_RISC_ATOMIC:
> > > + case OPC_RISC_FP_LOAD:
> > > + case OPC_RISC_FP_STORE:
> > > + xinsn_has_addr_offset = true;
> > > + break;
> > > + case OPC_RISC_SYSTEM:
> > > + if (MASK_OP_SYSTEM(insn) == OPC_RISC_HLVHSV) {
> > > + xinsn_has_addr_offset = true;
> > > + }
> > > + break;
> > > + }
> > > + }
> > > +
> > > + if (xinsn_has_addr_offset) {
> > > + /*
> > > + * The "Addr. Offset" field of the transformed instruction is
> > > + * non-zero only for misaligned load/store traps, which are very
> > > + * unlikely on QEMU, so for now always set "Addr. Offset" to zero.
> > > + */
> > > + xinsn = SET_RS1(xinsn, 0);
> > > + }
> > > +
> > > + return xinsn;
> > > +}
> > > #endif /* !CONFIG_USER_ONLY */
> > >
> > > /*
> > > @@ -1340,6 +1535,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> > > target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
> > > uint64_t deleg = async ? env->mideleg : env->medeleg;
> > > target_ulong tval = 0;
> > > + target_ulong tinst = 0;
> > > target_ulong htval = 0;
> > > target_ulong mtval2 = 0;
> > >
> > > @@ -1355,18 +1551,22 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> > > if (!async) {
> > > /* set tval to badaddr for traps with address information */
> > > switch (cause) {
> > > - case RISCV_EXCP_INST_GUEST_PAGE_FAULT:
> > > case RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT:
> > > case RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT:
> > > - case RISCV_EXCP_INST_ADDR_MIS:
> > > - case RISCV_EXCP_INST_ACCESS_FAULT:
> > > case RISCV_EXCP_LOAD_ADDR_MIS:
> > > case RISCV_EXCP_STORE_AMO_ADDR_MIS:
> > > case RISCV_EXCP_LOAD_ACCESS_FAULT:
> > > case RISCV_EXCP_STORE_AMO_ACCESS_FAULT:
> > > - case RISCV_EXCP_INST_PAGE_FAULT:
> > > case RISCV_EXCP_LOAD_PAGE_FAULT:
> > > case RISCV_EXCP_STORE_PAGE_FAULT:
> > > + write_gva = env->two_stage_lookup;
> > > + tval = env->badaddr;
> > > + tinst = riscv_transformed_insn(env, env->bins);
> > > + break;
> > > + case RISCV_EXCP_INST_GUEST_PAGE_FAULT:
> > > + case RISCV_EXCP_INST_ADDR_MIS:
> > > + case RISCV_EXCP_INST_ACCESS_FAULT:
> > > + case RISCV_EXCP_INST_PAGE_FAULT:
> > > write_gva = env->two_stage_lookup;
> > > tval = env->badaddr;
> > > break;
> > > @@ -1448,6 +1648,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> > > env->sepc = env->pc;
> > > env->stval = tval;
> > > env->htval = htval;
> > > + env->htinst = tinst;
> > > env->pc = (env->stvec >> 2 << 2) +
> > > ((async && (env->stvec & 3) == 1) ? cause * 4 : 0);
> > > riscv_cpu_set_mode(env, PRV_S);
> > > @@ -1478,6 +1679,7 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> > > env->mepc = env->pc;
> > > env->mtval = tval;
> > > env->mtval2 = mtval2;
> > > + env->mtinst = tinst;
> > > env->pc = (env->mtvec >> 2 << 2) +
> > > ((async && (env->mtvec & 3) == 1) ? cause * 4 : 0);
> > > riscv_cpu_set_mode(env, PRV_M);
> > > diff --git a/target/riscv/instmap.h b/target/riscv/instmap.h
> > > index 40b6d2b64d..f564a69d90 100644
> > > --- a/target/riscv/instmap.h
> > > +++ b/target/riscv/instmap.h
> > > @@ -184,6 +184,8 @@ enum {
> > > OPC_RISC_CSRRWI = OPC_RISC_SYSTEM | (0x5 << 12),
> > > OPC_RISC_CSRRSI = OPC_RISC_SYSTEM | (0x6 << 12),
> > > OPC_RISC_CSRRCI = OPC_RISC_SYSTEM | (0x7 << 12),
> > > +
> > > + OPC_RISC_HLVHSV = OPC_RISC_SYSTEM | (0x4 << 12),
> > > };
> > >
> > > #define MASK_OP_FP_LOAD(op) (MASK_OP_MAJOR(op) | (op & (0x7 << 12)))
> > > @@ -316,6 +318,12 @@ enum {
> > > #define GET_RS2(inst) extract32(inst, 20, 5)
> > > #define GET_RD(inst) extract32(inst, 7, 5)
> > > #define GET_IMM(inst) sextract64(inst, 20, 12)
> > > +#define SET_RS1(inst, val) deposit32(inst, 15, 5, val)
> > > +#define SET_RS2(inst, val) deposit32(inst, 20, 5, val)
> > > +#define SET_RD(inst, val) deposit32(inst, 7, 5, val)
> > > +#define SET_I_IMM(inst, val) deposit32(inst, 20, 12, val)
> > > +#define SET_S_IMM(inst, val) \
> > > + deposit32(deposit32(inst, 7, 5, val), 25, 7, (val) >> 5)
> > >
> > > /* RVC decoding macros */
> > > #define GET_C_IMM(inst) (extract32(inst, 2, 5) \
> > > @@ -346,6 +354,8 @@ enum {
> > > | (extract32(inst, 5, 1) << 6))
> > > #define GET_C_LD_IMM(inst) ((extract16(inst, 10, 3) << 3) \
> > > | (extract16(inst, 5, 2) << 6))
> > > +#define GET_C_SW_IMM(inst) GET_C_LW_IMM(inst)
> > > +#define GET_C_SD_IMM(inst) GET_C_LD_IMM(inst)
> > > #define GET_C_J_IMM(inst) ((extract32(inst, 3, 3) << 1) \
> > > | (extract32(inst, 11, 1) << 4) \
> > > | (extract32(inst, 2, 1) << 5) \
> > > @@ -366,4 +376,37 @@ enum {
> > > #define GET_C_RS1S(inst) (8 + extract16(inst, 7, 3))
> > > #define GET_C_RS2S(inst) (8 + extract16(inst, 2, 3))
> > >
> > > +#define GET_C_FUNC(inst) extract32(inst, 13, 3)
> > > +#define GET_C_OP(inst) extract32(inst, 0, 2)
> > > +
> > > +enum {
> > > + /* RVC Quadrants */
> > > + OPC_RISC_C_OP_QUAD0 = 0x0,
> > > + OPC_RISC_C_OP_QUAD1 = 0x1,
> > > + OPC_RISC_C_OP_QUAD2 = 0x2
> > > +};
> > > +
> > > +enum {
> > > + /* RVC Quadrant 0 */
> > > + OPC_RISC_C_FUNC_ADDI4SPN = 0x0,
> > > + OPC_RISC_C_FUNC_FLD_LQ = 0x1,
> > > + OPC_RISC_C_FUNC_LW = 0x2,
> > > + OPC_RISC_C_FUNC_FLW_LD = 0x3,
> > > + OPC_RISC_C_FUNC_FSD_SQ = 0x5,
> > > + OPC_RISC_C_FUNC_SW = 0x6,
> > > + OPC_RISC_C_FUNC_FSW_SD = 0x7
> > > +};
> > > +
> > > +enum {
> > > + /* RVC Quadrant 2 */
> > > + OPC_RISC_C_FUNC_SLLI_SLLI64 = 0x0,
> > > + OPC_RISC_C_FUNC_FLDSP_LQSP = 0x1,
> > > + OPC_RISC_C_FUNC_LWSP = 0x2,
> > > + OPC_RISC_C_FUNC_FLWSP_LDSP = 0x3,
> > > + OPC_RISC_C_FUNC_JR_MV_EBREAK_JALR_ADD = 0x4,
> > > + OPC_RISC_C_FUNC_FSDSP_SQSP = 0x5,
> > > + OPC_RISC_C_FUNC_SWSP = 0x6,
> > > + OPC_RISC_C_FUNC_FSWSP_SDSP = 0x7
> > > +};
> > > +
> > > #endif
> > > --
> > > 2.34.1
> > >
> > >
* Re: [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt()
2022-06-07 3:01 ` Anup Patel
@ 2022-06-07 3:04 ` Anup Patel
0 siblings, 0 replies; 11+ messages in thread
From: Anup Patel @ 2022-06-07 3:04 UTC (permalink / raw)
To: Alistair Francis
Cc: Anup Patel, Peter Maydell, Palmer Dabbelt, Alistair Francis,
Sagar Karandikar, Atish Patra, open list:RISC-V,
qemu-devel@nongnu.org Developers
On Tue, Jun 7, 2022 at 8:31 AM Anup Patel <anup@brainfault.org> wrote:
>
> On Mon, Jun 6, 2022 at 7:23 AM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Mon, Jun 6, 2022 at 11:48 AM Alistair Francis <alistair23@gmail.com> wrote:
> > >
> > > On Thu, May 26, 2022 at 8:12 PM Anup Patel <apatel@ventanamicro.com> wrote:
> > > >
> > > > We should write transformed instruction encoding of the trapped
> > > > instruction in [m|h]tinst CSR at time of taking trap as defined
> > > > by the RISC-V privileged specification v1.12.
> > > >
> > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > >
> > > Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> >
> > Whoops, I thought there was another patch. This doesn't seem to
> > implemented the guest-page fault pseudoinstructions which can be
> > generated while doing VS-stage translation
>
> I did not understand. Which guest-page fault pseudo-instructions ?
Nevermind, I see what you meant. Let me check.
Regards,
Anup
>
> Regards,
> Anup
>
> >
> > Alistair
> >
> > >
> > > Alistair
> > >
end of thread, other threads:[~2022-06-07 3:27 UTC | newest]
Thread overview: 11+ messages
-- links below jump to the message on this page --
2022-05-26 10:05 [PATCH v3 0/4] QEMU RISC-V nested virtualization fixes Anup Patel
2022-05-26 10:05 ` [PATCH v3 1/4] target/riscv: Don't force update priv spec version to latest Anup Patel
2022-05-26 10:05 ` [PATCH v3 2/4] target/riscv: Add dummy mcountinhibit CSR for priv spec v1.11 or higher Anup Patel
2022-05-26 10:05 ` [PATCH v3 3/4] target/riscv: Update [m|h]tinst CSR in riscv_cpu_do_interrupt() Anup Patel
2022-06-06 1:48 ` Alistair Francis
2022-06-06 1:53 ` Alistair Francis
2022-06-07 3:01 ` Anup Patel
2022-06-07 3:04 ` Anup Patel
2022-05-26 10:05 ` [PATCH v3 4/4] target/riscv: Force disable extensions if priv spec version does not match Anup Patel
2022-06-06 1:55 ` Alistair Francis
2022-06-07 2:50 ` Anup Patel