* [PATCH v2 1/6] target/riscv: Without H-mode mask all HS mode interrupts in mie.
2023-05-26 16:23 [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support Rajnesh Kanwal
@ 2023-05-26 16:23 ` Rajnesh Kanwal
2023-06-02 3:10 ` Alistair Francis
2023-05-26 16:23 ` [PATCH v2 2/6] target/riscv: Check for async flag in case of RISCV_EXCP_SEMIHOST Rajnesh Kanwal
` (5 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Rajnesh Kanwal @ 2023-05-26 16:23 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, dbarboza,
zhiwei_liu, atishp, apatel, rkanwal
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
target/riscv/csr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 4451bd1263..041f0b3e2e 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -1522,7 +1522,7 @@ static RISCVException rmw_mie64(CPURISCVState *env, int csrno,
env->mie = (env->mie & ~mask) | (new_val & mask);
if (!riscv_has_ext(env, RVH)) {
- env->mie &= ~((uint64_t)MIP_SGEIP);
+ env->mie &= ~((uint64_t)HS_MODE_INTERRUPTS);
}
return RISCV_EXCP_NONE;
--
2.25.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v2 1/6] target/riscv: Without H-mode mask all HS mode interrupts in mie.
2023-05-26 16:23 ` [PATCH v2 1/6] target/riscv: Without H-mode mask all HS mode interrupts in mie Rajnesh Kanwal
@ 2023-06-02 3:10 ` Alistair Francis
0 siblings, 0 replies; 15+ messages in thread
From: Alistair Francis @ 2023-06-02 3:10 UTC (permalink / raw)
To: Rajnesh Kanwal
Cc: qemu-riscv, qemu-devel, palmer, alistair.francis, bin.meng,
liweiwei, dbarboza, zhiwei_liu, atishp, apatel
On Sat, May 27, 2023 at 2:25 AM Rajnesh Kanwal <rkanwal@rivosinc.com> wrote:
>
> Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/csr.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 4451bd1263..041f0b3e2e 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -1522,7 +1522,7 @@ static RISCVException rmw_mie64(CPURISCVState *env, int csrno,
> env->mie = (env->mie & ~mask) | (new_val & mask);
>
> if (!riscv_has_ext(env, RVH)) {
> - env->mie &= ~((uint64_t)MIP_SGEIP);
> + env->mie &= ~((uint64_t)HS_MODE_INTERRUPTS);
> }
>
> return RISCV_EXCP_NONE;
> --
> 2.25.1
>
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 2/6] target/riscv: Check for async flag in case of RISCV_EXCP_SEMIHOST.
2023-05-26 16:23 [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support Rajnesh Kanwal
2023-05-26 16:23 ` [PATCH v2 1/6] target/riscv: Without H-mode mask all HS mode interrupts in mie Rajnesh Kanwal
@ 2023-05-26 16:23 ` Rajnesh Kanwal
2023-06-02 3:13 ` Alistair Francis
2023-05-26 16:23 ` [PATCH v2 3/6] target/riscv: Set VS* bits to one in mideleg when H-Ext is enabled Rajnesh Kanwal
` (4 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Rajnesh Kanwal @ 2023-05-26 16:23 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, dbarboza,
zhiwei_liu, atishp, apatel, rkanwal
RISCV_EXCP_SEMIHOST is set to 0x10, which can also be a local interrupt
id. This change moves the RISCV_EXCP_SEMIHOST handling into the switch
statement so that the async flag check is performed before invoking the
semihosting logic.
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
target/riscv/cpu_helper.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 57d04385f1..b25ee179e9 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -1602,15 +1602,13 @@ void riscv_cpu_do_interrupt(CPUState *cs)
target_ulong htval = 0;
target_ulong mtval2 = 0;
- if (cause == RISCV_EXCP_SEMIHOST) {
- do_common_semihosting(cs);
- env->pc += 4;
- return;
- }
-
if (!async) {
/* set tval to badaddr for traps with address information */
switch (cause) {
+ case RISCV_EXCP_SEMIHOST:
+ do_common_semihosting(cs);
+ env->pc += 4;
+ return;
case RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT:
case RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT:
case RISCV_EXCP_LOAD_ADDR_MIS:
--
2.25.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v2 2/6] target/riscv: Check for async flag in case of RISCV_EXCP_SEMIHOST.
2023-05-26 16:23 ` [PATCH v2 2/6] target/riscv: Check for async flag in case of RISCV_EXCP_SEMIHOST Rajnesh Kanwal
@ 2023-06-02 3:13 ` Alistair Francis
0 siblings, 0 replies; 15+ messages in thread
From: Alistair Francis @ 2023-06-02 3:13 UTC (permalink / raw)
To: Rajnesh Kanwal
Cc: qemu-riscv, qemu-devel, palmer, alistair.francis, bin.meng,
liweiwei, dbarboza, zhiwei_liu, atishp, apatel
On Sat, May 27, 2023 at 2:24 AM Rajnesh Kanwal <rkanwal@rivosinc.com> wrote:
>
> RISCV_EXCP_SEMIHOST is set to 0x10, which can also be a local interrupt
> id. This change moves the RISCV_EXCP_SEMIHOST handling into the switch
> statement so that the async flag check is performed before invoking the
> semihosting logic.
>
> Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu_helper.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 57d04385f1..b25ee179e9 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -1602,15 +1602,13 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> target_ulong htval = 0;
> target_ulong mtval2 = 0;
>
> - if (cause == RISCV_EXCP_SEMIHOST) {
> - do_common_semihosting(cs);
> - env->pc += 4;
> - return;
> - }
> -
> if (!async) {
> /* set tval to badaddr for traps with address information */
> switch (cause) {
> + case RISCV_EXCP_SEMIHOST:
> + do_common_semihosting(cs);
> + env->pc += 4;
> + return;
> case RISCV_EXCP_LOAD_GUEST_ACCESS_FAULT:
> case RISCV_EXCP_STORE_GUEST_AMO_ACCESS_FAULT:
> case RISCV_EXCP_LOAD_ADDR_MIS:
> --
> 2.25.1
>
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 3/6] target/riscv: Set VS* bits to one in mideleg when H-Ext is enabled
2023-05-26 16:23 [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support Rajnesh Kanwal
2023-05-26 16:23 ` [PATCH v2 1/6] target/riscv: Without H-mode mask all HS mode interrupts in mie Rajnesh Kanwal
2023-05-26 16:23 ` [PATCH v2 2/6] target/riscv: Check for async flag in case of RISCV_EXCP_SEMIHOST Rajnesh Kanwal
@ 2023-05-26 16:23 ` Rajnesh Kanwal
2023-06-02 3:17 ` Alistair Francis
2023-05-26 16:23 ` [PATCH v2 4/6] target/riscv: Split interrupt logic from riscv_cpu_update_mip Rajnesh Kanwal
` (3 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Rajnesh Kanwal @ 2023-05-26 16:23 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, dbarboza,
zhiwei_liu, atishp, apatel, rkanwal
With H-Ext supported, the VS bits in MIDELEG are all hardwired to one,
denoting interrupts that are always delegated. This is already done in
rmw_mideleg, but given that mideleg is also used in other places when
routing interrupts, this change initializes it in riscv_cpu_realize to
be on the safe side.
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
target/riscv/cpu.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index db0875fb43..269a094f42 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -1280,6 +1280,11 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
riscv_pmu_timer_cb, cpu);
}
}
+
+ /* With H-Ext, VSSIP, VSTIP, VSEIP and SGEIP are hardwired to one. */
+ if (riscv_has_ext(env, RVH)) {
+ env->mideleg = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP | MIP_SGEIP;
+ }
#endif
riscv_cpu_finalize_features(cpu, &local_err);
--
2.25.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v2 3/6] target/riscv: Set VS* bits to one in mideleg when H-Ext is enabled
2023-05-26 16:23 ` [PATCH v2 3/6] target/riscv: Set VS* bits to one in mideleg when H-Ext is enabled Rajnesh Kanwal
@ 2023-06-02 3:17 ` Alistair Francis
0 siblings, 0 replies; 15+ messages in thread
From: Alistair Francis @ 2023-06-02 3:17 UTC (permalink / raw)
To: Rajnesh Kanwal
Cc: qemu-riscv, qemu-devel, palmer, alistair.francis, bin.meng,
liweiwei, dbarboza, zhiwei_liu, atishp, apatel
On Sat, May 27, 2023 at 2:24 AM Rajnesh Kanwal <rkanwal@rivosinc.com> wrote:
>
> With H-Ext supported, the VS bits in MIDELEG are all hardwired to one,
> denoting interrupts that are always delegated. This is already done in
> rmw_mideleg, but given that mideleg is also used in other places when
> routing interrupts, this change initializes it in riscv_cpu_realize to
> be on the safe side.
>
> Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index db0875fb43..269a094f42 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -1280,6 +1280,11 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> riscv_pmu_timer_cb, cpu);
> }
> }
> +
> + /* With H-Ext, VSSIP, VSTIP, VSEIP and SGEIP are hardwired to one. */
> + if (riscv_has_ext(env, RVH)) {
> + env->mideleg = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP | MIP_SGEIP;
> + }
> #endif
>
> riscv_cpu_finalize_features(cpu, &local_err);
> --
> 2.25.1
>
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 4/6] target/riscv: Split interrupt logic from riscv_cpu_update_mip.
2023-05-26 16:23 [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support Rajnesh Kanwal
` (2 preceding siblings ...)
2023-05-26 16:23 ` [PATCH v2 3/6] target/riscv: Set VS* bits to one in mideleg when H-Ext is enabled Rajnesh Kanwal
@ 2023-05-26 16:23 ` Rajnesh Kanwal
2023-06-02 3:26 ` Alistair Francis
2023-05-26 16:23 ` [PATCH v2 5/6] target/riscv: Add M-mode virtual interrupt and IRQ filtering support Rajnesh Kanwal
` (2 subsequent siblings)
6 siblings, 1 reply; 15+ messages in thread
From: Rajnesh Kanwal @ 2023-05-26 16:23 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, dbarboza,
zhiwei_liu, atishp, apatel, rkanwal
This is to allow virtual interrupts to be inserted into S and VS
modes. Given that virtual interrupts will be maintained in the separate
mvip and hvip CSRs, riscv_cpu_update_mip will no longer be in the
path, and interrupts for these cases need to be triggered from the
rmw_hvip64 and rmw_mvip64 functions.
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
target/riscv/cpu.h | 1 +
target/riscv/cpu_helper.c | 25 ++++++++++++++++++-------
2 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index de7e43126a..de55bfb775 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -562,6 +562,7 @@ void riscv_cpu_swap_hypervisor_regs(CPURISCVState *env);
int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts);
uint64_t riscv_cpu_update_mip(CPURISCVState *env, uint64_t mask,
uint64_t value);
+void riscv_cpu_interrupt(CPURISCVState *env);
#define BOOL_TO_MASK(x) (-!!(x)) /* helper for riscv_cpu_update_mip value */
void riscv_cpu_set_rdtime_fn(CPURISCVState *env, uint64_t (*fn)(void *),
void *arg);
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index b25ee179e9..c79ec4db76 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -609,11 +609,12 @@ int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts)
}
}
-uint64_t riscv_cpu_update_mip(CPURISCVState *env, uint64_t mask,
- uint64_t value)
+void riscv_cpu_interrupt(CPURISCVState *env)
{
+ uint64_t gein, vsgein = 0, vstip = 0;
CPUState *cs = env_cpu(env);
- uint64_t gein, vsgein = 0, vstip = 0, old = env->mip;
+
+ QEMU_IOTHREAD_LOCK_GUARD();
if (env->virt_enabled) {
gein = get_field(env->hstatus, HSTATUS_VGEIN);
@@ -622,15 +623,25 @@ uint64_t riscv_cpu_update_mip(CPURISCVState *env, uint64_t mask,
vstip = env->vstime_irq ? MIP_VSTIP : 0;
- QEMU_IOTHREAD_LOCK_GUARD();
-
- env->mip = (env->mip & ~mask) | (value & mask);
-
if (env->mip | vsgein | vstip) {
cpu_interrupt(cs, CPU_INTERRUPT_HARD);
} else {
cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
}
+}
+
+uint64_t riscv_cpu_update_mip(CPURISCVState *env, uint64_t mask, uint64_t value)
+{
+ uint64_t old = env->mip;
+
+ /* No need to update mip for VSTIP */
+ mask = ((mask == MIP_VSTIP) && env->vstime_irq) ? 0 : mask;
+
+ QEMU_IOTHREAD_LOCK_GUARD();
+
+ env->mip = (env->mip & ~mask) | (value & mask);
+
+ riscv_cpu_interrupt(env);
return old;
}
--
2.25.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH v2 4/6] target/riscv: Split interrupt logic from riscv_cpu_update_mip.
2023-05-26 16:23 ` [PATCH v2 4/6] target/riscv: Split interrupt logic from riscv_cpu_update_mip Rajnesh Kanwal
@ 2023-06-02 3:26 ` Alistair Francis
0 siblings, 0 replies; 15+ messages in thread
From: Alistair Francis @ 2023-06-02 3:26 UTC (permalink / raw)
To: Rajnesh Kanwal
Cc: qemu-riscv, qemu-devel, palmer, alistair.francis, bin.meng,
liweiwei, dbarboza, zhiwei_liu, atishp, apatel
On Sat, May 27, 2023 at 2:24 AM Rajnesh Kanwal <rkanwal@rivosinc.com> wrote:
>
> This is to allow virtual interrupts to be inserted into S and VS
> modes. Given that virtual interrupts will be maintained in the separate
> mvip and hvip CSRs, riscv_cpu_update_mip will no longer be in the
> path, and interrupts for these cases need to be triggered from the
> rmw_hvip64 and rmw_mvip64 functions.
>
> Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu.h | 1 +
> target/riscv/cpu_helper.c | 25 ++++++++++++++++++-------
> 2 files changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index de7e43126a..de55bfb775 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -562,6 +562,7 @@ void riscv_cpu_swap_hypervisor_regs(CPURISCVState *env);
> int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts);
> uint64_t riscv_cpu_update_mip(CPURISCVState *env, uint64_t mask,
> uint64_t value);
> +void riscv_cpu_interrupt(CPURISCVState *env);
> #define BOOL_TO_MASK(x) (-!!(x)) /* helper for riscv_cpu_update_mip value */
> void riscv_cpu_set_rdtime_fn(CPURISCVState *env, uint64_t (*fn)(void *),
> void *arg);
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index b25ee179e9..c79ec4db76 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -609,11 +609,12 @@ int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts)
> }
> }
>
> -uint64_t riscv_cpu_update_mip(CPURISCVState *env, uint64_t mask,
> - uint64_t value)
> +void riscv_cpu_interrupt(CPURISCVState *env)
> {
> + uint64_t gein, vsgein = 0, vstip = 0;
> CPUState *cs = env_cpu(env);
> - uint64_t gein, vsgein = 0, vstip = 0, old = env->mip;
> +
> + QEMU_IOTHREAD_LOCK_GUARD();
>
> if (env->virt_enabled) {
> gein = get_field(env->hstatus, HSTATUS_VGEIN);
> @@ -622,15 +623,25 @@ uint64_t riscv_cpu_update_mip(CPURISCVState *env, uint64_t mask,
>
> vstip = env->vstime_irq ? MIP_VSTIP : 0;
>
> - QEMU_IOTHREAD_LOCK_GUARD();
> -
> - env->mip = (env->mip & ~mask) | (value & mask);
> -
> if (env->mip | vsgein | vstip) {
> cpu_interrupt(cs, CPU_INTERRUPT_HARD);
> } else {
> cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
> }
> +}
> +
> +uint64_t riscv_cpu_update_mip(CPURISCVState *env, uint64_t mask, uint64_t value)
> +{
> + uint64_t old = env->mip;
> +
> + /* No need to update mip for VSTIP */
> + mask = ((mask == MIP_VSTIP) && env->vstime_irq) ? 0 : mask;
> +
> + QEMU_IOTHREAD_LOCK_GUARD();
> +
> + env->mip = (env->mip & ~mask) | (value & mask);
> +
> + riscv_cpu_interrupt(env);
>
> return old;
> }
> --
> 2.25.1
>
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 5/6] target/riscv: Add M-mode virtual interrupt and IRQ filtering support.
2023-05-26 16:23 [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support Rajnesh Kanwal
` (3 preceding siblings ...)
2023-05-26 16:23 ` [PATCH v2 4/6] target/riscv: Split interrupt logic from riscv_cpu_update_mip Rajnesh Kanwal
@ 2023-05-26 16:23 ` Rajnesh Kanwal
2023-06-30 9:26 ` Daniel Henrique Barboza
2023-05-26 16:23 ` [PATCH v2 6/6] target/riscv: Add HS-mode " Rajnesh Kanwal
2023-09-06 14:38 ` [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs " Daniel Henrique Barboza
6 siblings, 1 reply; 15+ messages in thread
From: Rajnesh Kanwal @ 2023-05-26 16:23 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, dbarboza,
zhiwei_liu, atishp, apatel, rkanwal
This change adds support for inserting virtual interrupts from M-mode
into S-mode using the mvien and mvip CSRs. IRQ filtering is one use case
of this change, i.e. M-mode can stop delegating an interrupt to S-mode,
enable it in MIE instead, receive those interrupts in M-mode, and then
selectively inject them into S-mode using mvien and mvip.
Also, the spec doesn't require the interrupt to actually be supported
in hardware, which allows M-mode to assert virtual interrupts to S-mode
that have no connection to any real interrupt event.
This is defined as part of the AIA specification [0], "5.3 Interrupt
filtering and virtual interrupts for supervisor level".
[0]: https://github.com/riscv/riscv-aia/releases/download/1.0-RC4/riscv-interrupts-1.0-RC4.pdf
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
target/riscv/cpu.c | 3 +-
target/riscv/cpu.h | 8 ++
target/riscv/cpu_bits.h | 6 +
target/riscv/cpu_helper.c | 26 +++-
target/riscv/csr.c | 279 ++++++++++++++++++++++++++++++++++----
target/riscv/machine.c | 3 +
6 files changed, 289 insertions(+), 36 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 269a094f42..7c4999431a 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -712,7 +712,8 @@ static bool riscv_cpu_has_work(CPUState *cs)
* Definition of the WFI instruction requires it to ignore the privilege
* mode and delegation registers, but respect individual enables
*/
- return riscv_cpu_all_pending(env) != 0;
+ return riscv_cpu_all_pending(env) != 0 ||
+ riscv_cpu_sirq_pending(env) != RISCV_EXCP_NONE;
#else
return true;
#endif
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index de55bfb775..07cf656471 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -190,6 +190,12 @@ struct CPUArchState {
uint64_t mie;
uint64_t mideleg;
+ /*
+ * When mideleg[i]=0 and mvien[i]=1, sie[i] is no longer an
+ * alias of mie[i] and needs to be maintained separately.
+ */
+ uint64_t sie;
+
target_ulong satp; /* since: priv-1.10.0 */
target_ulong stval;
target_ulong medeleg;
@@ -210,6 +216,8 @@ struct CPUArchState {
/* AIA CSRs */
target_ulong miselect;
target_ulong siselect;
+ uint64_t mvien;
+ uint64_t mvip;
/* Hypervisor CSRs */
target_ulong hstatus;
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index 59f0ffd9e1..0d32d2a5a7 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -735,6 +735,12 @@ typedef enum RISCVException {
#define MIE_SSIE (1 << IRQ_S_SOFT)
#define MIE_USIE (1 << IRQ_U_SOFT)
+/* Machine constants */
+#define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
+#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
+#define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
+#define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
+
/* General PointerMasking CSR bits */
#define PM_ENABLE 0x00000001ULL
#define PM_CURRENT 0x00000002ULL
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index c79ec4db76..6567ddef7b 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -365,6 +365,10 @@ static int riscv_cpu_pending_to_irq(CPURISCVState *env,
return best_irq;
}
+/*
+ * Doesn't report interrupts inserted using mvip from M-mode firmware. Those
+ * are reported by riscv_cpu_sirq_pending().
+ */
uint64_t riscv_cpu_all_pending(CPURISCVState *env)
{
uint32_t gein = get_field(env->hstatus, HSTATUS_VGEIN);
@@ -387,9 +391,10 @@ int riscv_cpu_sirq_pending(CPURISCVState *env)
{
uint64_t irqs = riscv_cpu_all_pending(env) & env->mideleg &
~(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP);
+ uint64_t irqs_f = env->mvip & env->mvien & ~env->mideleg & env->sie;
return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
- irqs, env->siprio);
+ irqs | irqs_f, env->siprio);
}
int riscv_cpu_vsirq_pending(CPURISCVState *env)
@@ -403,8 +408,8 @@ int riscv_cpu_vsirq_pending(CPURISCVState *env)
static int riscv_cpu_local_irq_pending(CPURISCVState *env)
{
+ uint64_t irqs, pending, mie, hsie, vsie, irqs_f;
int virq;
- uint64_t irqs, pending, mie, hsie, vsie;
/* Determine interrupt enable state of all privilege modes */
if (env->virt_enabled) {
@@ -430,8 +435,11 @@ static int riscv_cpu_local_irq_pending(CPURISCVState *env)
irqs, env->miprio);
}
+ /* Check for virtual S-mode interrupts. */
+ irqs_f = env->mvip & (env->mvien & ~env->mideleg) & env->sie;
+
/* Check HS-mode interrupts */
- irqs = pending & env->mideleg & ~env->hideleg & -hsie;
+ irqs = ((pending & env->mideleg & ~env->hideleg) | irqs_f) & -hsie;
if (irqs) {
return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
irqs, env->siprio);
@@ -611,7 +619,7 @@ int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts)
void riscv_cpu_interrupt(CPURISCVState *env)
{
- uint64_t gein, vsgein = 0, vstip = 0;
+ uint64_t gein, vsgein = 0, vstip = 0, irqf = 0;
CPUState *cs = env_cpu(env);
QEMU_IOTHREAD_LOCK_GUARD();
@@ -619,11 +627,13 @@ void riscv_cpu_interrupt(CPURISCVState *env)
if (env->virt_enabled) {
gein = get_field(env->hstatus, HSTATUS_VGEIN);
vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
+ } else {
+ irqf = env->mvien & env->mvip & env->sie;
}
vstip = env->vstime_irq ? MIP_VSTIP : 0;
- if (env->mip | vsgein | vstip) {
+ if (env->mip | vsgein | vstip | irqf) {
cpu_interrupt(cs, CPU_INTERRUPT_HARD);
} else {
cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
@@ -1608,6 +1618,8 @@ void riscv_cpu_do_interrupt(CPUState *cs)
bool async = !!(cs->exception_index & RISCV_EXCP_INT_FLAG);
target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
uint64_t deleg = async ? env->mideleg : env->medeleg;
+ bool s_injected = env->mvip & (1 << cause) & env->mvien &&
+ !(env->mip & (1 << cause));
target_ulong tval = 0;
target_ulong tinst = 0;
target_ulong htval = 0;
@@ -1696,8 +1708,8 @@ void riscv_cpu_do_interrupt(CPUState *cs)
__func__, env->mhartid, async, cause, env->pc, tval,
riscv_cpu_get_trap_name(cause, async));
- if (env->priv <= PRV_S &&
- cause < TARGET_LONG_BITS && ((deleg >> cause) & 1)) {
+ if (env->priv <= PRV_S && cause < 64 &&
+ (((deleg >> cause) & 1) || s_injected)) {
/* handle the trap in S-mode */
if (riscv_has_ext(env, RVH)) {
uint64_t hdeleg = async ? env->hideleg : env->hedeleg;
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 041f0b3e2e..c1ca065a81 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -1113,21 +1113,16 @@ static RISCVException write_stimecmph(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
-/* Machine constants */
-
-#define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
-#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
- MIP_LCOFIP))
-#define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
-#define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
-
#define VSTOPI_NUM_SRCS 5
-static const uint64_t delegable_ints = S_MODE_INTERRUPTS |
- VS_MODE_INTERRUPTS;
-static const uint64_t vs_delegable_ints = VS_MODE_INTERRUPTS;
+#define LOCAL_INTERRUPTS (~0x1FFF)
+
+static const uint64_t delegable_ints =
+ S_MODE_INTERRUPTS | VS_MODE_INTERRUPTS | MIP_LCOFIP;
+static const uint64_t vs_delegable_ints =
+ (VS_MODE_INTERRUPTS | LOCAL_INTERRUPTS) & ~MIP_LCOFIP;
static const uint64_t all_ints = M_MODE_INTERRUPTS | S_MODE_INTERRUPTS |
- HS_MODE_INTERRUPTS;
+ HS_MODE_INTERRUPTS | LOCAL_INTERRUPTS;
#define DELEGABLE_EXCPS ((1ULL << (RISCV_EXCP_INST_ADDR_MIS)) | \
(1ULL << (RISCV_EXCP_INST_ACCESS_FAULT)) | \
(1ULL << (RISCV_EXCP_ILLEGAL_INST)) | \
@@ -1158,12 +1153,30 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
-static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
- SIP_LCOFIP;
+
+/*
+ * Spec allows for bits 13:63 to be either read-only or writable.
+ * So far we have interrupt LCOFIP in that region which is writable.
+ *
+ * Also, spec allows to inject virtual interrupts in this region even
+ * without any hardware interrupts for that interrupt number.
+ *
+ * For now, interrupts in the 13:63 region are all kept writable, 13 being
+ * LCOFIP and 14:63 being virtual only. Change this in the future if we
+ * introduce more interrupts that are not writable.
+ */
+
+/* Bit STIP can be an alias of mip.STIP, which is why it's writable in mvip. */
+static const target_ulong mvip_writable_mask = MIP_SSIP | MIP_STIP | MIP_SEIP |
+ LOCAL_INTERRUPTS;
+static const target_ulong mvien_writable_mask = MIP_SSIP | MIP_SEIP |
+ LOCAL_INTERRUPTS;
+
+static const target_ulong sip_writable_mask = SIP_SSIP | LOCAL_INTERRUPTS;
static const target_ulong hip_writable_mask = MIP_VSSIP;
static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP |
- MIP_VSEIP;
-static const target_ulong vsip_writable_mask = MIP_VSSIP;
+ MIP_VSEIP | LOCAL_INTERRUPTS;
+static const target_ulong vsip_writable_mask = MIP_VSSIP | LOCAL_INTERRUPTS;
const bool valid_vm_1_10_32[16] = {
[VM_1_10_MBARE] = true,
@@ -1559,6 +1572,52 @@ static RISCVException rmw_mieh(CPURISCVState *env, int csrno,
return ret;
}
+static RISCVException rmw_mvien64(CPURISCVState *env, int csrno,
+ uint64_t *ret_val,
+ uint64_t new_val, uint64_t wr_mask)
+{
+ uint64_t mask = wr_mask & mvien_writable_mask;
+
+ if (ret_val) {
+ *ret_val = env->mvien;
+ }
+
+ env->mvien = (env->mvien & ~mask) | (new_val & mask);
+
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException rmw_mvien(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_mvien64(env, csrno, &rval, new_val, wr_mask);
+ if (ret_val) {
+ *ret_val = rval;
+ }
+
+ return ret;
+}
+
+static RISCVException rmw_mvienh(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_mvien64(env, csrno, &rval,
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+ if (ret_val) {
+ *ret_val = rval >> 32;
+ }
+
+ return ret;
+}
+
static int read_mtopi(CPURISCVState *env, int csrno, target_ulong *val)
{
int irq;
@@ -1699,6 +1758,11 @@ static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
priv = PRV_M;
break;
case CSR_SIREG:
+ if (env->priv == PRV_S && env->mvien & MIP_SEIP &&
+ env->siselect >= ISELECT_IMSIC_EIDELIVERY &&
+ env->siselect <= ISELECT_IMSIC_EIE63) {
+ goto done;
+ }
iprio = env->siprio;
isel = env->siselect;
priv = PRV_S;
@@ -1763,6 +1827,9 @@ static int rmw_xtopei(CPURISCVState *env, int csrno, target_ulong *val,
priv = PRV_M;
break;
case CSR_STOPEI:
+ if (env->mvien & MIP_SEIP && env->priv == PRV_S) {
+ goto done;
+ }
priv = PRV_S;
break;
case CSR_VSTOPEI:
@@ -2336,6 +2403,143 @@ static RISCVException rmw_miph(CPURISCVState *env, int csrno,
return ret;
}
+/*
+ * This function is written for two use-cases:
+ * 1. To access the mvip CSR as-is for M-mode.
+ * 2. To access sip as a combination of mip and mvip for S-mode.
+ *
+ * Both report bits 1, 5, 9 and 13:63, except that STIP is read-only
+ * zero in mvip when the sstc extension is present.
+ * Also, sip needs to be read-only zero when both mideleg[i] and
+ * mvien[i] are zero, but mvip needs to be an alias of mip.
+ */
+static RISCVException rmw_mvip64(CPURISCVState *env, int csrno,
+ uint64_t *ret_val,
+ uint64_t new_val, uint64_t wr_mask)
+{
+ RISCVCPU *cpu = env_archcpu(env);
+ target_ulong ret_mip = 0;
+ RISCVException ret;
+ uint64_t old_mvip;
+
+ /*
+ * mideleg[i] mvien[i]
+ * 0 0 No delegation. mvip[i] is alias of mip[i].
+ * 0 1 mvip[i] becomes source of interrupt, mip bypassed.
+ * 1 X mip[i] is source of interrupt and mvip[i] aliases
+ * mip[i].
+ *
+ * So alias condition would be for bits:
+ * ((S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) & (mideleg | ~mvien)) |
+ * (!sstc & MIP_STIP)
+ *
+ * Non-alias condition will be for bits:
+ * (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) & (~mideleg & mvien)
+ *
+ * alias_mask denotes the bits that come from mip; nalias_mask denotes the
+ * bits that come from mvip.
+ */
+ uint64_t alias_mask = ((S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) &
+ (env->mideleg | ~env->mvien)) | MIP_STIP;
+ uint64_t nalias_mask = (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) &
+ (~env->mideleg & env->mvien);
+ uint64_t wr_mask_mvip;
+ uint64_t wr_mask_mip;
+
+ /*
+ * mideleg[i] mvien[i]
+ * 0 0 sip[i] read-only zero.
+ * 0 1 sip[i] alias of mvip[i].
+ * 1 X sip[i] alias of mip[i].
+ *
+ * Both alias and non-alias masks remain the same for sip except for
+ * bits which are zero in both mideleg and mvien.
+ */
+ if (csrno == CSR_SIP) {
+ /* Remove bits that are zero in both mideleg and mvien. */
+ alias_mask &= (env->mideleg | env->mvien);
+ nalias_mask &= (env->mideleg | env->mvien);
+ }
+
+ /*
+ * If sstc is present, mvip.STIP is not an alias of mip.STIP, so clear
+ * that bit out of the value returned from mip.
+ */
+ if (cpu->cfg.ext_sstc && (env->priv == PRV_M) &&
+ get_field(env->menvcfg, MENVCFG_STCE)) {
+ alias_mask &= ~MIP_STIP;
+ }
+
+ wr_mask_mip = wr_mask & alias_mask & mvip_writable_mask;
+ wr_mask_mvip = wr_mask & nalias_mask & mvip_writable_mask;
+
+ /*
+ * For bits set in alias_mask, mvip needs to be alias of mip, so forward
+ * this to rmw_mip.
+ */
+ ret = rmw_mip(env, CSR_MIP, &ret_mip, new_val, wr_mask_mip);
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
+ old_mvip = env->mvip;
+
+ /*
+ * Write to mvip. Update only non-alias bits. Alias bits were updated
+ * in mip in rmw_mip above.
+ */
+ if (wr_mask_mvip) {
+ env->mvip = (env->mvip & ~wr_mask_mvip) | (new_val & wr_mask_mvip);
+
+ /*
+ * Given mvip is a separate source from mip, we need to trigger the
+ * interrupt from here separately. Normally this happens from
+ * riscv_cpu_update_mip.
+ */
+ riscv_cpu_interrupt(env);
+ }
+
+ if (ret_val) {
+ ret_mip &= alias_mask;
+ old_mvip &= nalias_mask;
+
+ *ret_val = old_mvip | ret_mip;
+ }
+
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException rmw_mvip(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_mvip64(env, csrno, &rval, new_val, wr_mask);
+ if (ret_val) {
+ *ret_val = rval;
+ }
+
+ return ret;
+}
+
+static RISCVException rmw_mviph(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_mvip64(env, csrno, &rval,
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+ if (ret_val) {
+ *ret_val = rval >> 32;
+ }
+
+ return ret;
+}
+
/* Supervisor Trap Setup */
static RISCVException read_sstatus_i128(CPURISCVState *env, int csrno,
Int128 *val)
@@ -2430,20 +2634,37 @@ static RISCVException rmw_sie64(CPURISCVState *env, int csrno,
uint64_t *ret_val,
uint64_t new_val, uint64_t wr_mask)
{
+ uint64_t nalias_mask = (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) &
+ (~env->mideleg & env->mvien);
+ uint64_t alias_mask = (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) & env->mideleg;
+ uint64_t sie_mask = wr_mask & nalias_mask;
RISCVException ret;
- uint64_t mask = env->mideleg & S_MODE_INTERRUPTS;
+ /*
+ * mideleg[i] mvien[i]
+ * 0 0 sie[i] read-only zero.
+ * 0 1 sie[i] is a separate writable bit.
+ * 1 X sie[i] alias of mie[i].
+ *
+ * Both the alias and non-alias masks remain the same for sip except for
+ * bits which are zero in both mideleg and mvien.
+ */
if (env->virt_enabled) {
if (env->hvictl & HVICTL_VTI) {
return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
}
ret = rmw_vsie64(env, CSR_VSIE, ret_val, new_val, wr_mask);
+ if (ret_val) {
+ *ret_val &= alias_mask;
+ }
} else {
- ret = rmw_mie64(env, csrno, ret_val, new_val, wr_mask & mask);
- }
+ ret = rmw_mie64(env, csrno, ret_val, new_val, wr_mask & alias_mask);
+ if (ret_val) {
+ *ret_val &= alias_mask;
+ *ret_val |= env->sie & nalias_mask;
+ }
- if (ret_val) {
- *ret_val &= mask;
+ env->sie = (env->sie & ~sie_mask) | (new_val & sie_mask);
}
return ret;
@@ -2641,7 +2862,7 @@ static RISCVException rmw_sip64(CPURISCVState *env, int csrno,
uint64_t new_val, uint64_t wr_mask)
{
RISCVException ret;
- uint64_t mask = env->mideleg & sip_writable_mask;
+ uint64_t mask = (env->mideleg | env->mvien) & sip_writable_mask;
if (env->virt_enabled) {
if (env->hvictl & HVICTL_VTI) {
@@ -2649,11 +2870,12 @@ static RISCVException rmw_sip64(CPURISCVState *env, int csrno,
}
ret = rmw_vsip64(env, CSR_VSIP, ret_val, new_val, wr_mask);
} else {
- ret = rmw_mip64(env, csrno, ret_val, new_val, wr_mask & mask);
+ ret = rmw_mvip64(env, csrno, ret_val, new_val, wr_mask & mask);
}
if (ret_val) {
- *ret_val &= env->mideleg & S_MODE_INTERRUPTS;
+ *ret_val &= (env->mideleg | env->mvien) &
+ (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS);
}
return ret;
@@ -2818,6 +3040,7 @@ static int read_vstopi(CPURISCVState *env, int csrno, target_ulong *val)
*val = (iid & TOPI_IID_MASK) << TOPI_IID_SHIFT;
*val |= iprio;
+
return RISCV_EXCP_NONE;
}
@@ -4123,14 +4346,14 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
/* Virtual Interrupts for Supervisor Level (AIA) */
- [CSR_MVIEN] = { "mvien", aia_any, read_zero, write_ignore },
- [CSR_MVIP] = { "mvip", aia_any, read_zero, write_ignore },
+ [CSR_MVIEN] = { "mvien", aia_any, NULL, NULL, rmw_mvien },
+ [CSR_MVIP] = { "mvip", aia_any, NULL, NULL, rmw_mvip },
/* Machine-Level High-Half CSRs (AIA) */
[CSR_MIDELEGH] = { "midelegh", aia_any32, NULL, NULL, rmw_midelegh },
[CSR_MIEH] = { "mieh", aia_any32, NULL, NULL, rmw_mieh },
- [CSR_MVIENH] = { "mvienh", aia_any32, read_zero, write_ignore },
- [CSR_MVIPH] = { "mviph", aia_any32, read_zero, write_ignore },
+ [CSR_MVIENH] = { "mvienh", aia_any32, NULL, NULL, rmw_mvienh },
+ [CSR_MVIPH] = { "mviph", aia_any32, NULL, NULL, rmw_mviph },
[CSR_MIPH] = { "miph", aia_any32, NULL, NULL, rmw_miph },
/* Execution environment configuration */
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index 3ce2970785..dd7bdbb691 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -377,6 +377,9 @@ const VMStateDescription vmstate_riscv_cpu = {
VMSTATE_UINT64(env.mip, RISCVCPU),
VMSTATE_UINT64(env.miclaim, RISCVCPU),
VMSTATE_UINT64(env.mie, RISCVCPU),
+ VMSTATE_UINT64(env.mvien, RISCVCPU),
+ VMSTATE_UINT64(env.mvip, RISCVCPU),
+ VMSTATE_UINT64(env.sie, RISCVCPU),
VMSTATE_UINT64(env.mideleg, RISCVCPU),
VMSTATE_UINTTL(env.satp, RISCVCPU),
VMSTATE_UINTTL(env.stval, RISCVCPU),
--
2.25.1
* Re: [PATCH v2 5/6] target/riscv: Add M-mode virtual interrupt and IRQ filtering support.
2023-05-26 16:23 ` [PATCH v2 5/6] target/riscv: Add M-mode virtual interrupt and IRQ filtering support Rajnesh Kanwal
@ 2023-06-30 9:26 ` Daniel Henrique Barboza
0 siblings, 0 replies; 15+ messages in thread
From: Daniel Henrique Barboza @ 2023-06-30 9:26 UTC (permalink / raw)
To: Rajnesh Kanwal, qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, zhiwei_liu, atishp, apatel
(sorry for the delayed review)
On 5/26/23 13:23, Rajnesh Kanwal wrote:
> This change adds support for inserting virtual interrupts from M-mode
> into S-mode using mvien and mvip csrs. IRQ filtering is a use case of
> this change, i.e., M-mode can stop delegating an interrupt to S-mode,
> instead enable it in MIE and receive those interrupts in M-mode, and then
> selectively inject the interrupt using mvien and mvip.
>
> Also, the spec doesn't mandate the interrupt to be actually supported
> in hardware, which allows M-mode to assert virtual interrupts to S-mode
> that have no connection to any real interrupt events.
>
> This is defined as part of the AIA specification [0], "5.3 Interrupt
> filtering and virtual interrupts for supervisor level".
>
> [0]: https://github.com/riscv/riscv-aia/releases/download/1.0-RC4/riscv-interrupts-1.0-RC4.pdf
>
> Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
> ---
I didn't find anything wrong following the spec but there's a lot being done
here. Let's see how it goes in the field.
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/cpu.c | 3 +-
> target/riscv/cpu.h | 8 ++
> target/riscv/cpu_bits.h | 6 +
> target/riscv/cpu_helper.c | 26 +++-
> target/riscv/csr.c | 279 ++++++++++++++++++++++++++++++++++----
> target/riscv/machine.c | 3 +
> 6 files changed, 289 insertions(+), 36 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 269a094f42..7c4999431a 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -712,7 +712,8 @@ static bool riscv_cpu_has_work(CPUState *cs)
> * Definition of the WFI instruction requires it to ignore the privilege
> * mode and delegation registers, but respect individual enables
> */
> - return riscv_cpu_all_pending(env) != 0;
> + return riscv_cpu_all_pending(env) != 0 ||
> + riscv_cpu_sirq_pending(env) != RISCV_EXCP_NONE;
> #else
> return true;
> #endif
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index de55bfb775..07cf656471 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -190,6 +190,12 @@ struct CPUArchState {
> uint64_t mie;
> uint64_t mideleg;
>
> + /*
> + * When mideleg[i]=0 and mvien[i]=1, sie[i] is no longer an
> + * alias of mie[i] and needs to be maintained separately.
> + */
> + uint64_t sie;
> +
> target_ulong satp; /* since: priv-1.10.0 */
> target_ulong stval;
> target_ulong medeleg;
> @@ -210,6 +216,8 @@ struct CPUArchState {
> /* AIA CSRs */
> target_ulong miselect;
> target_ulong siselect;
> + uint64_t mvien;
> + uint64_t mvip;
>
> /* Hypervisor CSRs */
> target_ulong hstatus;
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index 59f0ffd9e1..0d32d2a5a7 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -735,6 +735,12 @@ typedef enum RISCVException {
> #define MIE_SSIE (1 << IRQ_S_SOFT)
> #define MIE_USIE (1 << IRQ_U_SOFT)
>
> +/* Machine constants */
> +#define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> +#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> +#define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
> +#define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
> +
> /* General PointerMasking CSR bits */
> #define PM_ENABLE 0x00000001ULL
> #define PM_CURRENT 0x00000002ULL
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index c79ec4db76..6567ddef7b 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -365,6 +365,10 @@ static int riscv_cpu_pending_to_irq(CPURISCVState *env,
> return best_irq;
> }
>
> +/*
> + * Doesn't report interrupts inserted using mvip from M-mode firmware. Those
> + * are returned in riscv_cpu_sirq_pending().
> + */
> uint64_t riscv_cpu_all_pending(CPURISCVState *env)
> {
> uint32_t gein = get_field(env->hstatus, HSTATUS_VGEIN);
> @@ -387,9 +391,10 @@ int riscv_cpu_sirq_pending(CPURISCVState *env)
> {
> uint64_t irqs = riscv_cpu_all_pending(env) & env->mideleg &
> ~(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP);
> + uint64_t irqs_f = env->mvip & env->mvien & ~env->mideleg & env->sie;
>
> return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
> - irqs, env->siprio);
> + irqs | irqs_f, env->siprio);
> }
>
> int riscv_cpu_vsirq_pending(CPURISCVState *env)
> @@ -403,8 +408,8 @@ int riscv_cpu_vsirq_pending(CPURISCVState *env)
>
> static int riscv_cpu_local_irq_pending(CPURISCVState *env)
> {
> + uint64_t irqs, pending, mie, hsie, vsie, irqs_f;
> int virq;
> - uint64_t irqs, pending, mie, hsie, vsie;
>
> /* Determine interrupt enable state of all privilege modes */
> if (env->virt_enabled) {
> @@ -430,8 +435,11 @@ static int riscv_cpu_local_irq_pending(CPURISCVState *env)
> irqs, env->miprio);
> }
>
> + /* Check for virtual S-mode interrupts. */
> + irqs_f = env->mvip & (env->mvien & ~env->mideleg) & env->sie;
> +
> /* Check HS-mode interrupts */
> - irqs = pending & env->mideleg & ~env->hideleg & -hsie;
> + irqs = ((pending & env->mideleg & ~env->hideleg) | irqs_f) & -hsie;
> if (irqs) {
> return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
> irqs, env->siprio);
> @@ -611,7 +619,7 @@ int riscv_cpu_claim_interrupts(RISCVCPU *cpu, uint64_t interrupts)
>
> void riscv_cpu_interrupt(CPURISCVState *env)
> {
> - uint64_t gein, vsgein = 0, vstip = 0;
> + uint64_t gein, vsgein = 0, vstip = 0, irqf = 0;
> CPUState *cs = env_cpu(env);
>
> QEMU_IOTHREAD_LOCK_GUARD();
> @@ -619,11 +627,13 @@ void riscv_cpu_interrupt(CPURISCVState *env)
> if (env->virt_enabled) {
> gein = get_field(env->hstatus, HSTATUS_VGEIN);
> vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
> + } else {
> + irqf = env->mvien & env->mvip & env->sie;
> }
>
> vstip = env->vstime_irq ? MIP_VSTIP : 0;
>
> - if (env->mip | vsgein | vstip) {
> + if (env->mip | vsgein | vstip | irqf) {
> cpu_interrupt(cs, CPU_INTERRUPT_HARD);
> } else {
> cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
> @@ -1608,6 +1618,8 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> bool async = !!(cs->exception_index & RISCV_EXCP_INT_FLAG);
> target_ulong cause = cs->exception_index & RISCV_EXCP_INT_MASK;
> uint64_t deleg = async ? env->mideleg : env->medeleg;
> + bool s_injected = env->mvip & (1 << cause) & env->mvien &&
> + !(env->mip & (1 << cause));
> target_ulong tval = 0;
> target_ulong tinst = 0;
> target_ulong htval = 0;
> @@ -1696,8 +1708,8 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> __func__, env->mhartid, async, cause, env->pc, tval,
> riscv_cpu_get_trap_name(cause, async));
>
> - if (env->priv <= PRV_S &&
> - cause < TARGET_LONG_BITS && ((deleg >> cause) & 1)) {
> + if (env->priv <= PRV_S && cause < 64 &&
> + (((deleg >> cause) & 1) || s_injected)) {
> /* handle the trap in S-mode */
> if (riscv_has_ext(env, RVH)) {
> uint64_t hdeleg = async ? env->hideleg : env->hedeleg;
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 041f0b3e2e..c1ca065a81 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -1113,21 +1113,16 @@ static RISCVException write_stimecmph(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> -/* Machine constants */
> -
> -#define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> -#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
> - MIP_LCOFIP))
> -#define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
> -#define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
> -
> #define VSTOPI_NUM_SRCS 5
>
> -static const uint64_t delegable_ints = S_MODE_INTERRUPTS |
> - VS_MODE_INTERRUPTS;
> -static const uint64_t vs_delegable_ints = VS_MODE_INTERRUPTS;
> +#define LOCAL_INTERRUPTS (~0x1FFF)
> +
> +static const uint64_t delegable_ints =
> + S_MODE_INTERRUPTS | VS_MODE_INTERRUPTS | MIP_LCOFIP;
> +static const uint64_t vs_delegable_ints =
> + (VS_MODE_INTERRUPTS | LOCAL_INTERRUPTS) & ~MIP_LCOFIP;
> static const uint64_t all_ints = M_MODE_INTERRUPTS | S_MODE_INTERRUPTS |
> - HS_MODE_INTERRUPTS;
> + HS_MODE_INTERRUPTS | LOCAL_INTERRUPTS;
> #define DELEGABLE_EXCPS ((1ULL << (RISCV_EXCP_INST_ADDR_MIS)) | \
> (1ULL << (RISCV_EXCP_INST_ACCESS_FAULT)) | \
> (1ULL << (RISCV_EXCP_ILLEGAL_INST)) | \
> @@ -1158,12 +1153,30 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
> static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
> SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
> SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
> - SIP_LCOFIP;
> +
> +/*
> + * The spec allows bits 13:63 to be either read-only or writable.
> + * So far the only interrupt we have in that region is LCOFIP, which is
> + * writable.
> + *
> + * Also, the spec allows injecting virtual interrupts in this region even
> + * without any hardware interrupt backing that interrupt number.
> + *
> + * For now, interrupts in the 13:63 region are all kept writable: 13 being
> + * LCOFIP and 14:63 being virtual-only. Change this in the future if we
> + * introduce more interrupts that are not writable.
> + */
> +
> +/* Bit STIP can be an alias of mip.STIP that's why it's writable in mvip. */
> +static const target_ulong mvip_writable_mask = MIP_SSIP | MIP_STIP | MIP_SEIP |
> + LOCAL_INTERRUPTS;
> +static const target_ulong mvien_writable_mask = MIP_SSIP | MIP_SEIP |
> + LOCAL_INTERRUPTS;
> +
> +static const target_ulong sip_writable_mask = SIP_SSIP | LOCAL_INTERRUPTS;
> static const target_ulong hip_writable_mask = MIP_VSSIP;
> static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP |
> - MIP_VSEIP;
> -static const target_ulong vsip_writable_mask = MIP_VSSIP;
> + MIP_VSEIP | LOCAL_INTERRUPTS;
> +static const target_ulong vsip_writable_mask = MIP_VSSIP | LOCAL_INTERRUPTS;
>
> const bool valid_vm_1_10_32[16] = {
> [VM_1_10_MBARE] = true,
> @@ -1559,6 +1572,52 @@ static RISCVException rmw_mieh(CPURISCVState *env, int csrno,
> return ret;
> }
>
> +static RISCVException rmw_mvien64(CPURISCVState *env, int csrno,
> + uint64_t *ret_val,
> + uint64_t new_val, uint64_t wr_mask)
> +{
> + uint64_t mask = wr_mask & mvien_writable_mask;
> +
> + if (ret_val) {
> + *ret_val = env->mvien;
> + }
> +
> + env->mvien = (env->mvien & ~mask) | (new_val & mask);
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static RISCVException rmw_mvien(CPURISCVState *env, int csrno,
> + target_ulong *ret_val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + uint64_t rval;
> + RISCVException ret;
> +
> + ret = rmw_mvien64(env, csrno, &rval, new_val, wr_mask);
> + if (ret_val) {
> + *ret_val = rval;
> + }
> +
> + return ret;
> +}
> +
> +static RISCVException rmw_mvienh(CPURISCVState *env, int csrno,
> + target_ulong *ret_val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + uint64_t rval;
> + RISCVException ret;
> +
> + ret = rmw_mvien64(env, csrno, &rval,
> + ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
> + if (ret_val) {
> + *ret_val = rval >> 32;
> + }
> +
> + return ret;
> +}
> +
> static int read_mtopi(CPURISCVState *env, int csrno, target_ulong *val)
> {
> int irq;
> @@ -1699,6 +1758,11 @@ static int rmw_xireg(CPURISCVState *env, int csrno, target_ulong *val,
> priv = PRV_M;
> break;
> case CSR_SIREG:
> + if (env->priv == PRV_S && env->mvien & MIP_SEIP &&
> + env->siselect >= ISELECT_IMSIC_EIDELIVERY &&
> + env->siselect <= ISELECT_IMSIC_EIE63) {
> + goto done;
> + }
> iprio = env->siprio;
> isel = env->siselect;
> priv = PRV_S;
> @@ -1763,6 +1827,9 @@ static int rmw_xtopei(CPURISCVState *env, int csrno, target_ulong *val,
> priv = PRV_M;
> break;
> case CSR_STOPEI:
> + if (env->mvien & MIP_SEIP && env->priv == PRV_S) {
> + goto done;
> + }
> priv = PRV_S;
> break;
> case CSR_VSTOPEI:
> @@ -2336,6 +2403,143 @@ static RISCVException rmw_miph(CPURISCVState *env, int csrno,
> return ret;
> }
>
> +/*
> + * The function is written for two use-cases:
> + * 1- To access the mvip csr as is, for M-mode accesses.
> + * 2- To access sip as a combination of mip and mvip, for S-mode accesses.
> + *
> + * Both report bits 1, 5, 9 and 13:63, with the exception of
> + * STIP, which is read-only zero in mvip when the sstc extension
> + * is present.
> + * Also, sip needs to be read-only zero when both mideleg[i] and
> + * mvien[i] are zero, but mvip needs to be an alias of mip.
> + */
> +static RISCVException rmw_mvip64(CPURISCVState *env, int csrno,
> + uint64_t *ret_val,
> + uint64_t new_val, uint64_t wr_mask)
> +{
> + RISCVCPU *cpu = env_archcpu(env);
> + target_ulong ret_mip = 0;
> + RISCVException ret;
> + uint64_t old_mvip;
> +
> + /*
> + * mideleg[i] mvien[i]
> + * 0 0 No delegation. mvip[i] is alias of mip[i].
> + * 0 1 mvip[i] becomes source of interrupt, mip bypassed.
> + * 1 X mip[i] is source of interrupt and mvip[i] aliases
> + * mip[i].
> + *
> + * So alias condition would be for bits:
> + * ((S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) & (mideleg | ~mvien)) |
> + * (!sstc & MIP_STIP)
> + *
> + * Non-alias condition will be for bits:
> + * (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) & (~mideleg & mvien)
> + *
> + * alias_mask denotes the bits that come from mip; nalias_mask denotes
> + * the bits that come from mvip.
> + */
> + uint64_t alias_mask = ((S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) &
> + (env->mideleg | ~env->mvien)) | MIP_STIP;
> + uint64_t nalias_mask = (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) &
> + (~env->mideleg & env->mvien);
> + uint64_t wr_mask_mvip;
> + uint64_t wr_mask_mip;
> +
> + /*
> + * mideleg[i] mvien[i]
> + * 0 0 sip[i] read-only zero.
> + * 0 1 sip[i] alias of mvip[i].
> + * 1 X sip[i] alias of mip[i].
> + *
> + * Both the alias and non-alias masks remain the same for sip except for
> + * bits which are zero in both mideleg and mvien.
> + */
> + if (csrno == CSR_SIP) {
> + /* Remove bits that are zero in both mideleg and mvien. */
> + alias_mask &= (env->mideleg | env->mvien);
> + nalias_mask &= (env->mideleg | env->mvien);
> + }
> +
> + /*
> + * If sstc is present, mvip.STIP is not an alias of mip.STIP, so clear
> + * that bit in the value returned from mip.
> + */
> + if (cpu->cfg.ext_sstc && (env->priv == PRV_M) &&
> + get_field(env->menvcfg, MENVCFG_STCE)) {
> + alias_mask &= ~MIP_STIP;
> + }
> +
> + wr_mask_mip = wr_mask & alias_mask & mvip_writable_mask;
> + wr_mask_mvip = wr_mask & nalias_mask & mvip_writable_mask;
> +
> + /*
> + * For bits set in alias_mask, mvip needs to be an alias of mip, so forward
> + * this to rmw_mip.
> + */
> + ret = rmw_mip(env, CSR_MIP, &ret_mip, new_val, wr_mask_mip);
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> + old_mvip = env->mvip;
> +
> + /*
> + * Write to mvip. Update only non-alias bits. Alias bits were updated
> + * in mip in rmw_mip above.
> + */
> + if (wr_mask_mvip) {
> + env->mvip = (env->mvip & ~wr_mask_mvip) | (new_val & wr_mask_mvip);
> +
> + /*
> + * Given that mvip is a separate source from mip, we need to trigger the
> + * interrupt from here separately. Normally this happens from
> + * riscv_cpu_update_mip.
> + */
> + riscv_cpu_interrupt(env);
> + }
> +
> + if (ret_val) {
> + ret_mip &= alias_mask;
> + old_mvip &= nalias_mask;
> +
> + *ret_val = old_mvip | ret_mip;
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static RISCVException rmw_mvip(CPURISCVState *env, int csrno,
> + target_ulong *ret_val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + uint64_t rval;
> + RISCVException ret;
> +
> + ret = rmw_mvip64(env, csrno, &rval, new_val, wr_mask);
> + if (ret_val) {
> + *ret_val = rval;
> + }
> +
> + return ret;
> +}
> +
> +static RISCVException rmw_mviph(CPURISCVState *env, int csrno,
> + target_ulong *ret_val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + uint64_t rval;
> + RISCVException ret;
> +
> + ret = rmw_mvip64(env, csrno, &rval,
> + ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
> + if (ret_val) {
> + *ret_val = rval >> 32;
> + }
> +
> + return ret;
> +}
> +
> /* Supervisor Trap Setup */
> static RISCVException read_sstatus_i128(CPURISCVState *env, int csrno,
> Int128 *val)
> @@ -2430,20 +2634,37 @@ static RISCVException rmw_sie64(CPURISCVState *env, int csrno,
> uint64_t *ret_val,
> uint64_t new_val, uint64_t wr_mask)
> {
> + uint64_t nalias_mask = (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) &
> + (~env->mideleg & env->mvien);
> + uint64_t alias_mask = (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS) & env->mideleg;
> + uint64_t sie_mask = wr_mask & nalias_mask;
> RISCVException ret;
> - uint64_t mask = env->mideleg & S_MODE_INTERRUPTS;
>
> + /*
> + * mideleg[i] mvien[i]
> + * 0 0 sie[i] read-only zero.
> + * 0 1 sie[i] is a separate writable bit.
> + * 1 X sie[i] alias of mie[i].
> + *
> + * Both the alias and non-alias masks remain the same for sip except for
> + * bits which are zero in both mideleg and mvien.
> + */
> if (env->virt_enabled) {
> if (env->hvictl & HVICTL_VTI) {
> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> }
> ret = rmw_vsie64(env, CSR_VSIE, ret_val, new_val, wr_mask);
> + if (ret_val) {
> + *ret_val &= alias_mask;
> + }
> } else {
> - ret = rmw_mie64(env, csrno, ret_val, new_val, wr_mask & mask);
> - }
> + ret = rmw_mie64(env, csrno, ret_val, new_val, wr_mask & alias_mask);
> + if (ret_val) {
> + *ret_val &= alias_mask;
> + *ret_val |= env->sie & nalias_mask;
> + }
>
> - if (ret_val) {
> - *ret_val &= mask;
> + env->sie = (env->sie & ~sie_mask) | (new_val & sie_mask);
> }
>
> return ret;
> @@ -2641,7 +2862,7 @@ static RISCVException rmw_sip64(CPURISCVState *env, int csrno,
> uint64_t new_val, uint64_t wr_mask)
> {
> RISCVException ret;
> - uint64_t mask = env->mideleg & sip_writable_mask;
> + uint64_t mask = (env->mideleg | env->mvien) & sip_writable_mask;
>
> if (env->virt_enabled) {
> if (env->hvictl & HVICTL_VTI) {
> @@ -2649,11 +2870,12 @@ static RISCVException rmw_sip64(CPURISCVState *env, int csrno,
> }
> ret = rmw_vsip64(env, CSR_VSIP, ret_val, new_val, wr_mask);
> } else {
> - ret = rmw_mip64(env, csrno, ret_val, new_val, wr_mask & mask);
> + ret = rmw_mvip64(env, csrno, ret_val, new_val, wr_mask & mask);
> }
>
> if (ret_val) {
> - *ret_val &= env->mideleg & S_MODE_INTERRUPTS;
> + *ret_val &= (env->mideleg | env->mvien) &
> + (S_MODE_INTERRUPTS | LOCAL_INTERRUPTS);
> }
>
> return ret;
> @@ -2818,6 +3040,7 @@ static int read_vstopi(CPURISCVState *env, int csrno, target_ulong *val)
>
> *val = (iid & TOPI_IID_MASK) << TOPI_IID_SHIFT;
> *val |= iprio;
> +
> return RISCV_EXCP_NONE;
> }
>
> @@ -4123,14 +4346,14 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
>
> /* Virtual Interrupts for Supervisor Level (AIA) */
> - [CSR_MVIEN] = { "mvien", aia_any, read_zero, write_ignore },
> - [CSR_MVIP] = { "mvip", aia_any, read_zero, write_ignore },
> + [CSR_MVIEN] = { "mvien", aia_any, NULL, NULL, rmw_mvien },
> + [CSR_MVIP] = { "mvip", aia_any, NULL, NULL, rmw_mvip },
>
> /* Machine-Level High-Half CSRs (AIA) */
> [CSR_MIDELEGH] = { "midelegh", aia_any32, NULL, NULL, rmw_midelegh },
> [CSR_MIEH] = { "mieh", aia_any32, NULL, NULL, rmw_mieh },
> - [CSR_MVIENH] = { "mvienh", aia_any32, read_zero, write_ignore },
> - [CSR_MVIPH] = { "mviph", aia_any32, read_zero, write_ignore },
> + [CSR_MVIENH] = { "mvienh", aia_any32, NULL, NULL, rmw_mvienh },
> + [CSR_MVIPH] = { "mviph", aia_any32, NULL, NULL, rmw_mviph },
> [CSR_MIPH] = { "miph", aia_any32, NULL, NULL, rmw_miph },
>
> /* Execution environment configuration */
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index 3ce2970785..dd7bdbb691 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -377,6 +377,9 @@ const VMStateDescription vmstate_riscv_cpu = {
> VMSTATE_UINT64(env.mip, RISCVCPU),
> VMSTATE_UINT64(env.miclaim, RISCVCPU),
> VMSTATE_UINT64(env.mie, RISCVCPU),
> + VMSTATE_UINT64(env.mvien, RISCVCPU),
> + VMSTATE_UINT64(env.mvip, RISCVCPU),
> + VMSTATE_UINT64(env.sie, RISCVCPU),
> VMSTATE_UINT64(env.mideleg, RISCVCPU),
> VMSTATE_UINTTL(env.satp, RISCVCPU),
> VMSTATE_UINTTL(env.stval, RISCVCPU),
* [PATCH v2 6/6] target/riscv: Add HS-mode virtual interrupt and IRQ filtering support.
2023-05-26 16:23 [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support Rajnesh Kanwal
` (4 preceding siblings ...)
2023-05-26 16:23 ` [PATCH v2 5/6] target/riscv: Add M-mode virtual interrupt and IRQ filtering support Rajnesh Kanwal
@ 2023-05-26 16:23 ` Rajnesh Kanwal
2023-06-30 9:33 ` Daniel Henrique Barboza
2023-09-06 14:38 ` [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs " Daniel Henrique Barboza
6 siblings, 1 reply; 15+ messages in thread
From: Rajnesh Kanwal @ 2023-05-26 16:23 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, dbarboza,
zhiwei_liu, atishp, apatel, rkanwal
This change adds support for inserting virtual interrupts from HS-mode
into VS-mode using hvien and hvip csrs. This also allows for IRQ filtering
from HS-mode.
Also, the spec doesn't mandate the interrupt to be actually supported
in hardware, which allows HS-mode to assert virtual interrupts to VS-mode
that have no connection to any real interrupt events.
This is defined as part of the AIA specification [0], "6.3.2 Virtual
interrupts for VS level".
[0]: https://github.com/riscv/riscv-aia/releases/download/1.0-RC4/riscv-interrupts-1.0-RC4.pdf
Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
---
target/riscv/cpu.c | 3 +-
target/riscv/cpu.h | 14 +++
target/riscv/cpu_helper.c | 48 +++++++---
target/riscv/csr.c | 196 ++++++++++++++++++++++++++++++++++----
target/riscv/machine.c | 3 +
5 files changed, 234 insertions(+), 30 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 7c4999431a..6f2f8f21cc 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -713,7 +713,8 @@ static bool riscv_cpu_has_work(CPUState *cs)
* mode and delegation registers, but respect individual enables
*/
return riscv_cpu_all_pending(env) != 0 ||
- riscv_cpu_sirq_pending(env) != RISCV_EXCP_NONE;
+ riscv_cpu_sirq_pending(env) != RISCV_EXCP_NONE ||
+ riscv_cpu_vsirq_pending(env) != RISCV_EXCP_NONE;
#else
return true;
#endif
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 07cf656471..3e10eee38f 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -196,6 +196,12 @@ struct CPUArchState {
*/
uint64_t sie;
+ /*
+ * When hideleg[i]=0 and hvien[i]=1, vsie[i] is no longer an
+ * alias of sie[i] (mie[i]) and needs to be maintained separately.
+ */
+ uint64_t vsie;
+
target_ulong satp; /* since: priv-1.10.0 */
target_ulong stval;
target_ulong medeleg;
@@ -230,6 +236,14 @@ struct CPUArchState {
target_ulong hgeie;
target_ulong hgeip;
uint64_t htimedelta;
+ uint64_t hvien;
+
+ /*
+ * Bits VSSIP, VSTIP and VSEIP in hvip are maintained in mip. Other bits
+ * from 0:12 are reserved. Bits 13:63 are not aliased and must be separately
+ * maintained in hvip.
+ */
+ uint64_t hvip;
/* Hypervisor controlled virtual interrupt priorities */
target_ulong hvictl;
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 6567ddef7b..fd7dae9b68 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -366,8 +366,9 @@ static int riscv_cpu_pending_to_irq(CPURISCVState *env,
}
/*
- * Doesn't report interrupts inserted using mvip from M-mode firmware. Those
- * are returned in riscv_cpu_sirq_pending().
+ * Doesn't report interrupts inserted using mvip from M-mode firmware or
+ * using hvip bits 13:63 from HS-mode. Those are returned in
+ * riscv_cpu_sirq_pending() and riscv_cpu_vsirq_pending().
*/
uint64_t riscv_cpu_all_pending(CPURISCVState *env)
{
@@ -399,16 +400,23 @@ int riscv_cpu_sirq_pending(CPURISCVState *env)
int riscv_cpu_vsirq_pending(CPURISCVState *env)
{
- uint64_t irqs = riscv_cpu_all_pending(env) & env->mideleg &
- (MIP_VSSIP | MIP_VSTIP | MIP_VSEIP);
+ uint64_t irqs = riscv_cpu_all_pending(env) & env->mideleg & env->hideleg;
+ uint64_t irqs_f_vs = env->hvip & env->hvien & ~env->hideleg & env->vsie;
+ uint64_t vsbits;
+
+ /* Bring VS-level bits to correct position */
+ vsbits = irqs & VS_MODE_INTERRUPTS;
+ irqs &= ~VS_MODE_INTERRUPTS;
+ irqs |= vsbits >> 1;
return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
- irqs >> 1, env->hviprio);
+ (irqs | irqs_f_vs), env->hviprio);
}
static int riscv_cpu_local_irq_pending(CPURISCVState *env)
{
- uint64_t irqs, pending, mie, hsie, vsie, irqs_f;
+ uint64_t irqs, pending, mie, hsie, vsie, irqs_f, irqs_f_vs;
+ uint64_t vsbits, irq_delegated;
int virq;
/* Determine interrupt enable state of all privilege modes */
@@ -445,12 +453,26 @@ static int riscv_cpu_local_irq_pending(CPURISCVState *env)
irqs, env->siprio);
}
+ /* Check for virtual VS-mode interrupts. */
+ irqs_f_vs = env->hvip & env->hvien & ~env->hideleg & env->vsie;
+
/* Check VS-mode interrupts */
- irqs = pending & env->mideleg & env->hideleg & -vsie;
+ irq_delegated = pending & env->mideleg & env->hideleg;
+
+ /* Bring VS-level bits to correct position */
+ vsbits = irq_delegated & VS_MODE_INTERRUPTS;
+ irq_delegated &= ~VS_MODE_INTERRUPTS;
+ irq_delegated |= vsbits >> 1;
+
+ irqs = (irq_delegated | irqs_f_vs) & -vsie;
if (irqs) {
virq = riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
- irqs >> 1, env->hviprio);
- return (virq <= 0) ? virq : virq + 1;
+ irqs, env->hviprio);
+ if (virq <= 0 || (virq > 12 && virq <= 63)) {
+ return virq;
+ } else {
+ return virq + 1;
+ }
}
/* Indicate no pending interrupt */
@@ -627,6 +649,7 @@ void riscv_cpu_interrupt(CPURISCVState *env)
if (env->virt_enabled) {
gein = get_field(env->hstatus, HSTATUS_VGEIN);
vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
+ irqf = env->hvien & env->hvip & env->vsie;
} else {
irqf = env->mvien & env->mvip & env->sie;
}
@@ -1620,6 +1643,8 @@ void riscv_cpu_do_interrupt(CPUState *cs)
uint64_t deleg = async ? env->mideleg : env->medeleg;
bool s_injected = env->mvip & (1 << cause) & env->mvien &&
!(env->mip & (1 << cause));
+ bool vs_injected = env->hvip & (1 << cause) & env->hvien &&
+ !(env->mip & (1 << cause));
target_ulong tval = 0;
target_ulong tinst = 0;
target_ulong htval = 0;
@@ -1709,12 +1734,13 @@ void riscv_cpu_do_interrupt(CPUState *cs)
riscv_cpu_get_trap_name(cause, async));
if (env->priv <= PRV_S && cause < 64 &&
- (((deleg >> cause) & 1) || s_injected)) {
+ (((deleg >> cause) & 1) || s_injected || vs_injected)) {
/* handle the trap in S-mode */
if (riscv_has_ext(env, RVH)) {
uint64_t hdeleg = async ? env->hideleg : env->hedeleg;
- if (env->virt_enabled && ((hdeleg >> cause) & 1)) {
+ if (env->virt_enabled &&
+ (((hdeleg >> cause) & 1) || vs_injected)) {
/* Trap to VS mode */
/*
* See if we need to adjust cause. Yes if its VS mode interrupt
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index c1ca065a81..e165bb5632 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -30,6 +30,7 @@
#include "qemu/guest-random.h"
#include "qapi/error.h"
+
/* CSR function table public API */
void riscv_get_csr_ops(int csrno, riscv_csr_operations *ops)
{
@@ -1176,6 +1177,8 @@ static const target_ulong sip_writable_mask = SIP_SSIP | LOCAL_INTERRUPTS;
static const target_ulong hip_writable_mask = MIP_VSSIP;
static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP |
MIP_VSEIP | LOCAL_INTERRUPTS;
+static const target_ulong hvien_writable_mask = LOCAL_INTERRUPTS;
+
static const target_ulong vsip_writable_mask = MIP_VSSIP | LOCAL_INTERRUPTS;
const bool valid_vm_1_10_32[16] = {
@@ -2584,16 +2587,36 @@ static RISCVException rmw_vsie64(CPURISCVState *env, int csrno,
uint64_t *ret_val,
uint64_t new_val, uint64_t wr_mask)
{
+ uint64_t alias_mask = (LOCAL_INTERRUPTS | VS_MODE_INTERRUPTS) &
+ env->hideleg;
+ uint64_t nalias_mask = LOCAL_INTERRUPTS & (~env->hideleg & env->hvien);
+ uint64_t rval, rval_vs, vsbits;
+ uint64_t wr_mask_vsie;
+ uint64_t wr_mask_mie;
RISCVException ret;
- uint64_t rval, mask = env->hideleg & VS_MODE_INTERRUPTS;
/* Bring VS-level bits to correct position */
- new_val = (new_val & (VS_MODE_INTERRUPTS >> 1)) << 1;
- wr_mask = (wr_mask & (VS_MODE_INTERRUPTS >> 1)) << 1;
+ vsbits = new_val & (VS_MODE_INTERRUPTS >> 1);
+ new_val &= ~(VS_MODE_INTERRUPTS >> 1);
+ new_val |= vsbits << 1;
+
+ vsbits = wr_mask & (VS_MODE_INTERRUPTS >> 1);
+ wr_mask &= ~(VS_MODE_INTERRUPTS >> 1);
+ wr_mask |= vsbits << 1;
+
+ wr_mask_mie = wr_mask & alias_mask;
+ wr_mask_vsie = wr_mask & nalias_mask;
+
+ ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask_mie);
+
+ rval_vs = env->vsie & nalias_mask;
+ env->vsie = (env->vsie & ~wr_mask_vsie) | (new_val & wr_mask_vsie);
- ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask & mask);
if (ret_val) {
- *ret_val = (rval & mask) >> 1;
+ rval &= alias_mask;
+ vsbits = rval & VS_MODE_INTERRUPTS;
+ rval &= ~VS_MODE_INTERRUPTS;
+ *ret_val = rval | (vsbits >> 1) | rval_vs;
}
return ret;
@@ -2806,21 +2829,36 @@ static RISCVException write_stval(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+static RISCVException rmw_hvip64(CPURISCVState *env, int csrno,
+ uint64_t *ret_val,
+ uint64_t new_val, uint64_t wr_mask);
+
static RISCVException rmw_vsip64(CPURISCVState *env, int csrno,
uint64_t *ret_val,
uint64_t new_val, uint64_t wr_mask)
{
RISCVException ret;
uint64_t rval, mask = env->hideleg & VS_MODE_INTERRUPTS;
+ uint64_t vsbits;
- /* Bring VS-level bits to correct position */
- new_val = (new_val & (VS_MODE_INTERRUPTS >> 1)) << 1;
- wr_mask = (wr_mask & (VS_MODE_INTERRUPTS >> 1)) << 1;
+ /* Add virtualized bits into vsip mask. */
+ mask |= env->hvien & ~env->hideleg;
- ret = rmw_mip64(env, csrno, &rval, new_val,
- wr_mask & mask & vsip_writable_mask);
+ /* Bring VS-level bits to correct position */
+ vsbits = new_val & (VS_MODE_INTERRUPTS >> 1);
+ new_val &= ~(VS_MODE_INTERRUPTS >> 1);
+ new_val |= vsbits << 1;
+ vsbits = wr_mask & (VS_MODE_INTERRUPTS >> 1);
+ wr_mask &= ~(VS_MODE_INTERRUPTS >> 1);
+ wr_mask |= vsbits << 1;
+
+ ret = rmw_hvip64(env, csrno, &rval, new_val,
+ wr_mask & mask & vsip_writable_mask);
if (ret_val) {
- *ret_val = (rval & mask) >> 1;
+ rval &= mask;
+ vsbits = rval & VS_MODE_INTERRUPTS;
+ rval &= ~VS_MODE_INTERRUPTS;
+ *ret_val = rval | (vsbits >> 1);
}
return ret;
@@ -3112,6 +3150,52 @@ static RISCVException write_hedeleg(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+static RISCVException rmw_hvien64(CPURISCVState *env, int csrno,
+ uint64_t *ret_val,
+ uint64_t new_val, uint64_t wr_mask)
+{
+ uint64_t mask = wr_mask & hvien_writable_mask;
+
+ if (ret_val) {
+ *ret_val = env->hvien;
+ }
+
+ env->hvien = (env->hvien & ~mask) | (new_val & mask);
+
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException rmw_hvien(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_hvien64(env, csrno, &rval, new_val, wr_mask);
+ if (ret_val) {
+ *ret_val = rval;
+ }
+
+ return ret;
+}
+
+static RISCVException rmw_hvienh(CPURISCVState *env, int csrno,
+ target_ulong *ret_val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ uint64_t rval;
+ RISCVException ret;
+
+ ret = rmw_hvien64(env, csrno, &rval,
+ ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
+ if (ret_val) {
+ *ret_val = rval >> 32;
+ }
+
+ return ret;
+}
+
static RISCVException rmw_hideleg64(CPURISCVState *env, int csrno,
uint64_t *ret_val,
uint64_t new_val, uint64_t wr_mask)
@@ -3157,16 +3241,94 @@ static RISCVException rmw_hidelegh(CPURISCVState *env, int csrno,
return ret;
}
+/*
+ * The function is written for two use-cases:
+ * 1- To access the hvip csr as-is from HS-mode.
+ * 2- To access vsip as a combination of hvip and mip from VS-mode.
+ *
+ * Both report bits 2, 6, 10 and 13:63.
+ * vsip needs to be read-only zero when both hideleg[i] and
+ * hvien[i] are zero.
+ */
static RISCVException rmw_hvip64(CPURISCVState *env, int csrno,
uint64_t *ret_val,
uint64_t new_val, uint64_t wr_mask)
{
RISCVException ret;
+ uint64_t old_hvip;
+ uint64_t ret_mip;
+
+ /*
+ * For bits 10, 6 and 2, vsip[i] is an alias of hip[i]. These bits are
+ * present in hip, hvip and mip, where mip[i] is an alias of hip[i] and
+ * hvip[i] is OR'ed into hip[i] to inject virtual interrupts from the
+ * hypervisor. These bits are actually maintained in mip, so we read them
+ * from there. This way we have a single source of truth, which allows
+ * for a simpler implementation.
+ *
+ * For bits 13:63 we have:
+ *
+ * hideleg[i] hvien[i]
+ * 0 0 No delegation. vsip[i] readonly zero.
+ * 0 1 vsip[i] is alias of hvip[i], sip bypassed.
+ * 1 X vsip[i] is alias of sip[i], hvip bypassed.
+ *
+ * alias_mask denotes the bits that come from sip (mip here given we
+ * maintain all bits there). nalias_mask denotes bits that come from
+ * hvip.
+ */
+ uint64_t alias_mask = (env->hideleg | ~env->hvien) | VS_MODE_INTERRUPTS;
+ uint64_t nalias_mask = (~env->hideleg & env->hvien);
+ uint64_t wr_mask_hvip;
+ uint64_t wr_mask_mip;
+
+ /*
+ * Both the alias and non-alias masks remain the same for vsip except:
+ * 1- For VS* bits if they are zero in hideleg.
+ * 2- For 13:63 bits if they are zero in both hideleg and hvien.
+ */
+ if (csrno == CSR_VSIP) {
+ /* zero-out VS* bits that are not delegated to VS mode. */
+ alias_mask &= (env->hideleg | ~VS_MODE_INTERRUPTS);
+
+ /*
+ * zero-out 13:63 bits that are zero in both hideleg and hvien.
+ * The nalias_mask cannot contain any VS* bits, so only the second
+ * condition applies to it.
+ */
+ nalias_mask &= (env->hideleg | env->hvien);
+ alias_mask &= (env->hideleg | env->hvien);
+ }
+
+ wr_mask_hvip = wr_mask & nalias_mask & hvip_writable_mask;
+ wr_mask_mip = wr_mask & alias_mask & hvip_writable_mask;
+
+ /* Aliased bits, bits 10, 6, 2 need to come from mip. */
+ ret = rmw_mip64(env, csrno, &ret_mip, new_val, wr_mask_mip);
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
+ old_hvip = env->hvip;
+
+ if (wr_mask_hvip) {
+ env->hvip = (env->hvip & ~wr_mask_hvip) | (new_val & wr_mask_hvip);
+
+ /*
+ * Given hvip is a separate source from mip, we need to trigger the
+ * interrupt from here separately. Normally this happens from
+ * riscv_cpu_update_mip.
+ */
+ riscv_cpu_interrupt(env);
+ }
- ret = rmw_mip64(env, csrno, ret_val, new_val,
- wr_mask & hvip_writable_mask);
if (ret_val) {
- *ret_val &= VS_MODE_INTERRUPTS;
+ /* Only take VS* bits from mip. */
+ ret_mip &= alias_mask;
+
+ /* Take in non-delegated 13:63 bits from hvip. */
+ old_hvip &= nalias_mask;
+
+ *ret_val = ret_mip | old_hvip;
}
return ret;
@@ -4527,14 +4689,13 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
.min_priv_ver = PRIV_VERSION_1_12_0 },
/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
- [CSR_HVIEN] = { "hvien", aia_hmode, read_zero, write_ignore },
+ [CSR_HVIEN] = { "hvien", aia_hmode, NULL, NULL, rmw_hvien },
[CSR_HVICTL] = { "hvictl", aia_hmode, read_hvictl,
write_hvictl },
[CSR_HVIPRIO1] = { "hviprio1", aia_hmode, read_hviprio1,
write_hviprio1 },
[CSR_HVIPRIO2] = { "hviprio2", aia_hmode, read_hviprio2,
write_hviprio2 },
-
/*
* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA)
*/
@@ -4549,8 +4710,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
[CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL,
rmw_hidelegh },
- [CSR_HVIENH] = { "hvienh", aia_hmode32, read_zero,
- write_ignore },
+ [CSR_HVIENH] = { "hvienh", aia_hmode32, NULL, NULL, rmw_hvienh },
[CSR_HVIPH] = { "hviph", aia_hmode32, NULL, NULL, rmw_hviph },
[CSR_HVIPRIO1H] = { "hviprio1h", aia_hmode32, read_hviprio1h,
write_hviprio1h },
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index dd7bdbb691..3fff230a1c 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -92,6 +92,8 @@ static const VMStateDescription vmstate_hyper = {
VMSTATE_UINTTL(env.hgatp, RISCVCPU),
VMSTATE_UINTTL(env.hgeie, RISCVCPU),
VMSTATE_UINTTL(env.hgeip, RISCVCPU),
+ VMSTATE_UINT64(env.hvien, RISCVCPU),
+ VMSTATE_UINT64(env.hvip, RISCVCPU),
VMSTATE_UINT64(env.htimedelta, RISCVCPU),
VMSTATE_UINT64(env.vstimecmp, RISCVCPU),
@@ -106,6 +108,7 @@ static const VMStateDescription vmstate_hyper = {
VMSTATE_UINTTL(env.vstval, RISCVCPU),
VMSTATE_UINTTL(env.vsatp, RISCVCPU),
VMSTATE_UINTTL(env.vsiselect, RISCVCPU),
+ VMSTATE_UINT64(env.vsie, RISCVCPU),
VMSTATE_UINTTL(env.mtval2, RISCVCPU),
VMSTATE_UINTTL(env.mtinst, RISCVCPU),
--
2.25.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
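The hideleg[i]/hvien[i] truth table documented in the rmw_hvip64() comment above can be reduced to a small standalone model. The sketch below is illustrative only — the names vsip_read_model, alias_mask and nalias_mask are mine, not QEMU's — and it covers just the local-interrupt bits 13:63, ignoring the VS* special cases:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the vsip read composition for local interrupts 13:63:
 *   hideleg[i]=1             -> vsip[i] aliases mip[i]  (hvip bypassed)
 *   hideleg[i]=0, hvien[i]=1 -> vsip[i] aliases hvip[i] (mip bypassed)
 *   hideleg[i]=0, hvien[i]=0 -> vsip[i] reads as zero
 */
static inline uint64_t vsip_read_model(uint64_t mip, uint64_t hvip,
                                       uint64_t hideleg, uint64_t hvien)
{
    uint64_t alias_mask  = hideleg;          /* bits sourced from mip */
    uint64_t nalias_mask = ~hideleg & hvien; /* bits sourced from hvip */

    return (mip & alias_mask) | (hvip & nalias_mask);
}
```

Bits that are zero in both hideleg and hvien drop out of both masks, which is how the real code makes them read-only zero for CSR_VSIP.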
* Re: [PATCH v2 6/6] target/riscv: Add HS-mode virtual interrupt and IRQ filtering support.
2023-05-26 16:23 ` [PATCH v2 6/6] target/riscv: Add HS-mode " Rajnesh Kanwal
@ 2023-06-30 9:33 ` Daniel Henrique Barboza
0 siblings, 0 replies; 15+ messages in thread
From: Daniel Henrique Barboza @ 2023-06-30 9:33 UTC (permalink / raw)
To: Rajnesh Kanwal, qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, zhiwei_liu, atishp, apatel
On 5/26/23 13:23, Rajnesh Kanwal wrote:
> This change adds support for inserting virtual interrupts from HS-mode
> into VS-mode using hvien and hvip csrs. This also allows for IRQ filtering
> from HS-mode.
>
> Also, the spec doesn't mandate the interrupt to be actually supported
> in hardware, which allows HS-mode to assert virtual interrupts to VS-mode
> that have no connection to any real interrupt events.
>
> This is defined as part of the AIA specification [0], "6.3.2 Virtual
> interrupts for VS level".
>
> [0]: https://github.com/riscv/riscv-aia/releases/download/1.0-RC4/riscv-interrupts-1.0-RC4.pdf
>
> Signed-off-by: Rajnesh Kanwal <rkanwal@rivosinc.com>
> ---
LGTM.
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/cpu.c | 3 +-
> target/riscv/cpu.h | 14 +++
> target/riscv/cpu_helper.c | 48 +++++++---
> target/riscv/csr.c | 196 ++++++++++++++++++++++++++++++++++----
> target/riscv/machine.c | 3 +
> 5 files changed, 234 insertions(+), 30 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 7c4999431a..6f2f8f21cc 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -713,7 +713,8 @@ static bool riscv_cpu_has_work(CPUState *cs)
> * mode and delegation registers, but respect individual enables
> */
> return riscv_cpu_all_pending(env) != 0 ||
> - riscv_cpu_sirq_pending(env) != RISCV_EXCP_NONE;
> + riscv_cpu_sirq_pending(env) != RISCV_EXCP_NONE ||
> + riscv_cpu_vsirq_pending(env) != RISCV_EXCP_NONE;
> #else
> return true;
> #endif
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 07cf656471..3e10eee38f 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -196,6 +196,12 @@ struct CPUArchState {
> */
> uint64_t sie;
>
> + /*
> + * When hideleg[i]=0 and hvien[i]=1, vsie[i] is no longer an
> + * alias of sie[i] (mie[i]) and needs to be maintained separately.
> + */
> + uint64_t vsie;
> +
> target_ulong satp; /* since: priv-1.10.0 */
> target_ulong stval;
> target_ulong medeleg;
> @@ -230,6 +236,14 @@ struct CPUArchState {
> target_ulong hgeie;
> target_ulong hgeip;
> uint64_t htimedelta;
> + uint64_t hvien;
> +
> + /*
> + * Bits VSSIP, VSTIP and VSEIP in hvip are maintained in mip. Other bits
> + * from 0:12 are reserved. Bits 13:63 are not aliased and must be separately
> + * maintain in hvip.
> + */
> + uint64_t hvip;
>
> /* Hypervisor controlled virtual interrupt priorities */
> target_ulong hvictl;
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 6567ddef7b..fd7dae9b68 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -366,8 +366,9 @@ static int riscv_cpu_pending_to_irq(CPURISCVState *env,
> }
>
> /*
> - * Doesn't report interrupts inserted using mvip from M-mode firmware. Those
> - * are returned in riscv_cpu_sirq_pending().
> + * Doesn't report interrupts inserted using mvip from M-mode firmware or
> + * using hvip bits 13:63 from HS-mode. Those are returned in
> + * riscv_cpu_sirq_pending() and riscv_cpu_vsirq_pending().
> */
> uint64_t riscv_cpu_all_pending(CPURISCVState *env)
> {
> @@ -399,16 +400,23 @@ int riscv_cpu_sirq_pending(CPURISCVState *env)
>
> int riscv_cpu_vsirq_pending(CPURISCVState *env)
> {
> - uint64_t irqs = riscv_cpu_all_pending(env) & env->mideleg &
> - (MIP_VSSIP | MIP_VSTIP | MIP_VSEIP);
> + uint64_t irqs = riscv_cpu_all_pending(env) & env->mideleg & env->hideleg;
> + uint64_t irqs_f_vs = env->hvip & env->hvien & ~env->hideleg & env->vsie;
> + uint64_t vsbits;
> +
> + /* Bring VS-level bits to correct position */
> + vsbits = irqs & VS_MODE_INTERRUPTS;
> + irqs &= ~VS_MODE_INTERRUPTS;
> + irqs |= vsbits >> 1;
>
> return riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
> - irqs >> 1, env->hviprio);
> + (irqs | irqs_f_vs), env->hviprio);
> }
>
> static int riscv_cpu_local_irq_pending(CPURISCVState *env)
> {
> - uint64_t irqs, pending, mie, hsie, vsie, irqs_f;
> + uint64_t irqs, pending, mie, hsie, vsie, irqs_f, irqs_f_vs;
> + uint64_t vsbits, irq_delegated;
> int virq;
>
> /* Determine interrupt enable state of all privilege modes */
> @@ -445,12 +453,26 @@ static int riscv_cpu_local_irq_pending(CPURISCVState *env)
> irqs, env->siprio);
> }
>
> + /* Check for virtual VS-mode interrupts. */
> + irqs_f_vs = env->hvip & env->hvien & ~env->hideleg & env->vsie;
> +
> /* Check VS-mode interrupts */
> - irqs = pending & env->mideleg & env->hideleg & -vsie;
> + irq_delegated = pending & env->mideleg & env->hideleg;
> +
> + /* Bring VS-level bits to correct position */
> + vsbits = irq_delegated & VS_MODE_INTERRUPTS;
> + irq_delegated &= ~VS_MODE_INTERRUPTS;
> + irq_delegated |= vsbits >> 1;
> +
> + irqs = (irq_delegated | irqs_f_vs) & -vsie;
> if (irqs) {
> virq = riscv_cpu_pending_to_irq(env, IRQ_S_EXT, IPRIO_DEFAULT_S,
> - irqs >> 1, env->hviprio);
> - return (virq <= 0) ? virq : virq + 1;
> + irqs, env->hviprio);
> + if (virq <= 0 || (virq > 12 && virq <= 63)) {
> + return virq;
> + } else {
> + return virq + 1;
> + }
> }
>
> /* Indicate no pending interrupt */
> @@ -627,6 +649,7 @@ void riscv_cpu_interrupt(CPURISCVState *env)
> if (env->virt_enabled) {
> gein = get_field(env->hstatus, HSTATUS_VGEIN);
> vsgein = (env->hgeip & (1ULL << gein)) ? MIP_VSEIP : 0;
> + irqf = env->hvien & env->hvip & env->vsie;
> } else {
> irqf = env->mvien & env->mvip & env->sie;
> }
> @@ -1620,6 +1643,8 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> uint64_t deleg = async ? env->mideleg : env->medeleg;
> bool s_injected = env->mvip & (1 << cause) & env->mvien &&
> !(env->mip & (1 << cause));
> + bool vs_injected = env->hvip & (1 << cause) & env->hvien &&
> + !(env->mip & (1 << cause));
> target_ulong tval = 0;
> target_ulong tinst = 0;
> target_ulong htval = 0;
> @@ -1709,12 +1734,13 @@ void riscv_cpu_do_interrupt(CPUState *cs)
> riscv_cpu_get_trap_name(cause, async));
>
> if (env->priv <= PRV_S && cause < 64 &&
> - (((deleg >> cause) & 1) || s_injected)) {
> + (((deleg >> cause) & 1) || s_injected || vs_injected)) {
> /* handle the trap in S-mode */
> if (riscv_has_ext(env, RVH)) {
> uint64_t hdeleg = async ? env->hideleg : env->hedeleg;
>
> - if (env->virt_enabled && ((hdeleg >> cause) & 1)) {
> + if (env->virt_enabled &&
> + (((hdeleg >> cause) & 1) || vs_injected)) {
> /* Trap to VS mode */
> /*
> * See if we need to adjust cause. Yes if its VS mode interrupt
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index c1ca065a81..e165bb5632 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -30,6 +30,7 @@
> #include "qemu/guest-random.h"
> #include "qapi/error.h"
>
> +
> /* CSR function table public API */
> void riscv_get_csr_ops(int csrno, riscv_csr_operations *ops)
> {
> @@ -1176,6 +1177,8 @@ static const target_ulong sip_writable_mask = SIP_SSIP | LOCAL_INTERRUPTS;
> static const target_ulong hip_writable_mask = MIP_VSSIP;
> static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP |
> MIP_VSEIP | LOCAL_INTERRUPTS;
> +static const target_ulong hvien_writable_mask = LOCAL_INTERRUPTS;
> +
> static const target_ulong vsip_writable_mask = MIP_VSSIP | LOCAL_INTERRUPTS;
>
> const bool valid_vm_1_10_32[16] = {
> @@ -2584,16 +2587,36 @@ static RISCVException rmw_vsie64(CPURISCVState *env, int csrno,
> uint64_t *ret_val,
> uint64_t new_val, uint64_t wr_mask)
> {
> + uint64_t alias_mask = (LOCAL_INTERRUPTS | VS_MODE_INTERRUPTS) &
> + env->hideleg;
> + uint64_t nalias_mask = LOCAL_INTERRUPTS & (~env->hideleg & env->hvien);
> + uint64_t rval, rval_vs, vsbits;
> + uint64_t wr_mask_vsie;
> + uint64_t wr_mask_mie;
> RISCVException ret;
> - uint64_t rval, mask = env->hideleg & VS_MODE_INTERRUPTS;
>
> /* Bring VS-level bits to correct position */
> - new_val = (new_val & (VS_MODE_INTERRUPTS >> 1)) << 1;
> - wr_mask = (wr_mask & (VS_MODE_INTERRUPTS >> 1)) << 1;
> + vsbits = new_val & (VS_MODE_INTERRUPTS >> 1);
> + new_val &= ~(VS_MODE_INTERRUPTS >> 1);
> + new_val |= vsbits << 1;
> +
> + vsbits = wr_mask & (VS_MODE_INTERRUPTS >> 1);
> + wr_mask &= ~(VS_MODE_INTERRUPTS >> 1);
> + wr_mask |= vsbits << 1;
> +
> + wr_mask_mie = wr_mask & alias_mask;
> + wr_mask_vsie = wr_mask & nalias_mask;
> +
> + ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask_mie);
> +
> + rval_vs = env->vsie & nalias_mask;
> + env->vsie = (env->vsie & ~wr_mask_vsie) | (new_val & wr_mask_vsie);
>
> - ret = rmw_mie64(env, csrno, &rval, new_val, wr_mask & mask);
> if (ret_val) {
> - *ret_val = (rval & mask) >> 1;
> + rval &= alias_mask;
> + vsbits = rval & VS_MODE_INTERRUPTS;
> + rval &= ~VS_MODE_INTERRUPTS;
> + *ret_val = rval | (vsbits >> 1) | rval_vs;
> }
>
> return ret;
> @@ -2806,21 +2829,36 @@ static RISCVException write_stval(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> +static RISCVException rmw_hvip64(CPURISCVState *env, int csrno,
> + uint64_t *ret_val,
> + uint64_t new_val, uint64_t wr_mask);
> +
> static RISCVException rmw_vsip64(CPURISCVState *env, int csrno,
> uint64_t *ret_val,
> uint64_t new_val, uint64_t wr_mask)
> {
> RISCVException ret;
> uint64_t rval, mask = env->hideleg & VS_MODE_INTERRUPTS;
> + uint64_t vsbits;
>
> - /* Bring VS-level bits to correct position */
> - new_val = (new_val & (VS_MODE_INTERRUPTS >> 1)) << 1;
> - wr_mask = (wr_mask & (VS_MODE_INTERRUPTS >> 1)) << 1;
> + /* Add virtualized bits into vsip mask. */
> + mask |= env->hvien & ~env->hideleg;
>
> - ret = rmw_mip64(env, csrno, &rval, new_val,
> - wr_mask & mask & vsip_writable_mask);
> + /* Bring VS-level bits to correct position */
> + vsbits = new_val & (VS_MODE_INTERRUPTS >> 1);
> + new_val &= ~(VS_MODE_INTERRUPTS >> 1);
> + new_val |= vsbits << 1;
> + vsbits = wr_mask & (VS_MODE_INTERRUPTS >> 1);
> + wr_mask &= ~(VS_MODE_INTERRUPTS >> 1);
> + wr_mask |= vsbits << 1;
> +
> + ret = rmw_hvip64(env, csrno, &rval, new_val,
> + wr_mask & mask & vsip_writable_mask);
> if (ret_val) {
> - *ret_val = (rval & mask) >> 1;
> + rval &= mask;
> + vsbits = rval & VS_MODE_INTERRUPTS;
> + rval &= ~VS_MODE_INTERRUPTS;
> + *ret_val = rval | (vsbits >> 1);
> }
>
> return ret;
> @@ -3112,6 +3150,52 @@ static RISCVException write_hedeleg(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> +static RISCVException rmw_hvien64(CPURISCVState *env, int csrno,
> + uint64_t *ret_val,
> + uint64_t new_val, uint64_t wr_mask)
> +{
> + uint64_t mask = wr_mask & hvien_writable_mask;
> +
> + if (ret_val) {
> + *ret_val = env->hvien;
> + }
> +
> + env->hvien = (env->hvien & ~mask) | (new_val & mask);
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static RISCVException rmw_hvien(CPURISCVState *env, int csrno,
> + target_ulong *ret_val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + uint64_t rval;
> + RISCVException ret;
> +
> + ret = rmw_hvien64(env, csrno, &rval, new_val, wr_mask);
> + if (ret_val) {
> + *ret_val = rval;
> + }
> +
> + return ret;
> +}
> +
> +static RISCVException rmw_hvienh(CPURISCVState *env, int csrno,
> + target_ulong *ret_val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + uint64_t rval;
> + RISCVException ret;
> +
> + ret = rmw_hvien64(env, csrno, &rval,
> + ((uint64_t)new_val) << 32, ((uint64_t)wr_mask) << 32);
> + if (ret_val) {
> + *ret_val = rval >> 32;
> + }
> +
> + return ret;
> +}
> +
> static RISCVException rmw_hideleg64(CPURISCVState *env, int csrno,
> uint64_t *ret_val,
> uint64_t new_val, uint64_t wr_mask)
> @@ -3157,16 +3241,94 @@ static RISCVException rmw_hidelegh(CPURISCVState *env, int csrno,
> return ret;
> }
>
> +/*
> + * The function is written for two use-cases:
> + * 1- To access the hvip csr as-is from HS-mode.
> + * 2- To access vsip as a combination of hvip and mip from VS-mode.
> + *
> + * Both report bits 2, 6, 10 and 13:63.
> + * vsip needs to be read-only zero when both hideleg[i] and
> + * hvien[i] are zero.
> + */
> static RISCVException rmw_hvip64(CPURISCVState *env, int csrno,
> uint64_t *ret_val,
> uint64_t new_val, uint64_t wr_mask)
> {
> RISCVException ret;
> + uint64_t old_hvip;
> + uint64_t ret_mip;
> +
> + /*
> + * For bits 10, 6 and 2, vsip[i] is an alias of hip[i]. These bits are
> + * present in hip, hvip and mip. Where mip[i] is alias of hip[i] and hvip[i]
> + * is OR'ed in hip[i] to inject virtual interrupts from hypervisor. These
> + * bits are actually being maintained in mip so we read them from there.
> + * This way we have a single source of truth and allows for easier
> + * implementation.
> + *
> + * For bits 13:63 we have:
> + *
> + * hideleg[i] hvien[i]
> + * 0 0 No delegation. vsip[i] readonly zero.
> + * 0 1 vsip[i] is alias of hvip[i], sip bypassed.
> + * 1 X vsip[i] is alias of sip[i], hvip bypassed.
> + *
> + * alias_mask denotes the bits that come from sip (mip here given we
> + * maintain all bits there). nalias_mask denotes bits that come from
> + * hvip.
> + */
> + uint64_t alias_mask = (env->hideleg | ~env->hvien) | VS_MODE_INTERRUPTS;
> + uint64_t nalias_mask = (~env->hideleg & env->hvien);
> + uint64_t wr_mask_hvip;
> + uint64_t wr_mask_mip;
> +
> + /*
> + * Both the alias and non-alias masks remain the same for vsip except:
> + * 1- For VS* bits if they are zero in hideleg.
> + * 2- For 13:63 bits if they are zero in both hideleg and hvien.
> + */
> + if (csrno == CSR_VSIP) {
> + /* zero-out VS* bits that are not delegated to VS mode. */
> + alias_mask &= (env->hideleg | ~VS_MODE_INTERRUPTS);
> +
> + /*
> + * zero-out 13:63 bits that are zero in both hideleg and hvien.
> + * The nalias_mask cannot contain any VS* bits, so only the second
> + * condition applies to it.
> + */
> + nalias_mask &= (env->hideleg | env->hvien);
> + alias_mask &= (env->hideleg | env->hvien);
> + }
> +
> + wr_mask_hvip = wr_mask & nalias_mask & hvip_writable_mask;
> + wr_mask_mip = wr_mask & alias_mask & hvip_writable_mask;
> +
> + /* Aliased bits, bits 10, 6, 2 need to come from mip. */
> + ret = rmw_mip64(env, csrno, &ret_mip, new_val, wr_mask_mip);
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> + old_hvip = env->hvip;
> +
> + if (wr_mask_hvip) {
> + env->hvip = (env->hvip & ~wr_mask_hvip) | (new_val & wr_mask_hvip);
> +
> + /*
> + * Given hvip is a separate source from mip, we need to trigger the
> + * interrupt from here separately. Normally this happens from
> + * riscv_cpu_update_mip.
> + */
> + riscv_cpu_interrupt(env);
> + }
>
> - ret = rmw_mip64(env, csrno, ret_val, new_val,
> - wr_mask & hvip_writable_mask);
> if (ret_val) {
> - *ret_val &= VS_MODE_INTERRUPTS;
> + /* Only take VS* bits from mip. */
> + ret_mip &= alias_mask;
> +
> + /* Take in non-delegated 13:63 bits from hvip. */
> + old_hvip &= nalias_mask;
> +
> + *ret_val = ret_mip | old_hvip;
> }
>
> return ret;
> @@ -4527,14 +4689,13 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> .min_priv_ver = PRIV_VERSION_1_12_0 },
>
> /* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
> - [CSR_HVIEN] = { "hvien", aia_hmode, read_zero, write_ignore },
> + [CSR_HVIEN] = { "hvien", aia_hmode, NULL, NULL, rmw_hvien },
> [CSR_HVICTL] = { "hvictl", aia_hmode, read_hvictl,
> write_hvictl },
> [CSR_HVIPRIO1] = { "hviprio1", aia_hmode, read_hviprio1,
> write_hviprio1 },
> [CSR_HVIPRIO2] = { "hviprio2", aia_hmode, read_hviprio2,
> write_hviprio2 },
> -
> /*
> * VS-Level Window to Indirectly Accessed Registers (H-extension with AIA)
> */
> @@ -4549,8 +4710,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> /* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
> [CSR_HIDELEGH] = { "hidelegh", aia_hmode32, NULL, NULL,
> rmw_hidelegh },
> - [CSR_HVIENH] = { "hvienh", aia_hmode32, read_zero,
> - write_ignore },
> + [CSR_HVIENH] = { "hvienh", aia_hmode32, NULL, NULL, rmw_hvienh },
> [CSR_HVIPH] = { "hviph", aia_hmode32, NULL, NULL, rmw_hviph },
> [CSR_HVIPRIO1H] = { "hviprio1h", aia_hmode32, read_hviprio1h,
> write_hviprio1h },
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index dd7bdbb691..3fff230a1c 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -92,6 +92,8 @@ static const VMStateDescription vmstate_hyper = {
> VMSTATE_UINTTL(env.hgatp, RISCVCPU),
> VMSTATE_UINTTL(env.hgeie, RISCVCPU),
> VMSTATE_UINTTL(env.hgeip, RISCVCPU),
> + VMSTATE_UINT64(env.hvien, RISCVCPU),
> + VMSTATE_UINT64(env.hvip, RISCVCPU),
> VMSTATE_UINT64(env.htimedelta, RISCVCPU),
> VMSTATE_UINT64(env.vstimecmp, RISCVCPU),
>
> @@ -106,6 +108,7 @@ static const VMStateDescription vmstate_hyper = {
> VMSTATE_UINTTL(env.vstval, RISCVCPU),
> VMSTATE_UINTTL(env.vsatp, RISCVCPU),
> VMSTATE_UINTTL(env.vsiselect, RISCVCPU),
> + VMSTATE_UINT64(env.vsie, RISCVCPU),
>
> VMSTATE_UINTTL(env.mtval2, RISCVCPU),
> VMSTATE_UINTTL(env.mtinst, RISCVCPU),
^ permalink raw reply [flat|nested] 15+ messages in thread
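The vsbits shuffling that rmw_vsie64() and rmw_vsip64() repeat above — moving the VS* bits between their S-level positions in the guest-visible vsip/vsie view and their VS-level positions in mip/mie — can be factored into a pair of helpers. A sketch under the patch's bit layout (helper names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* VSSIP | VSTIP | VSEIP at their mip positions (bits 2, 6, 10). */
#define VS_MODE_INTERRUPTS ((1ULL << 2) | (1ULL << 6) | (1ULL << 10))

/*
 * Move the S-level positions (bits 1, 5, 9) a guest writes through
 * vsip/vsie up to the VS-level positions (bits 2, 6, 10) used in mip/mie,
 * leaving all other bits in place.
 */
static inline uint64_t vs_to_hs_positions(uint64_t val)
{
    uint64_t vsbits = val & (VS_MODE_INTERRUPTS >> 1);

    val &= ~(VS_MODE_INTERRUPTS >> 1);
    return val | (vsbits << 1);
}

/* Inverse transform, used when presenting mip/mie bits back to the guest. */
static inline uint64_t hs_to_vs_positions(uint64_t val)
{
    uint64_t vsbits = val & VS_MODE_INTERRUPTS;

    val &= ~VS_MODE_INTERRUPTS;
    return val | (vsbits >> 1);
}
```

Round-tripping through both helpers is the identity on any value with bits 2, 6 and 10 clear, which is what makes the mip-backed storage transparent to the guest.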
* Re: [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support
2023-05-26 16:23 [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support Rajnesh Kanwal
` (5 preceding siblings ...)
2023-05-26 16:23 ` [PATCH v2 6/6] target/riscv: Add HS-mode " Rajnesh Kanwal
@ 2023-09-06 14:38 ` Daniel Henrique Barboza
2023-09-21 6:29 ` Rajnesh Kanwal
6 siblings, 1 reply; 15+ messages in thread
From: Daniel Henrique Barboza @ 2023-09-06 14:38 UTC (permalink / raw)
To: Rajnesh Kanwal, qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, liweiwei, zhiwei_liu, atishp, apatel
Hey,
What's the latest on this work? It seems that all patches are acked:
https://lore.kernel.org/qemu-riscv/20230526162308.22892-1-rkanwal@rivosinc.com/
It'll probably conflict with current Alistair's riscv-to-apply.next though, so
perhaps Rajnesh could gather the acks and send a rebased version.
Thanks,
Daniel
On 5/26/23 13:23, Rajnesh Kanwal wrote:
> This series adds M and HS-mode virtual interrupt and IRQ filtering support.
> This allows inserting virtual interrupts from M/HS-mode into S/VS-mode
> using mvien/hvien and mvip/hvip csrs. IRQ filtering is a use case of
> this change, i.e. M-mode can stop delegating an interrupt to S-mode and
> instead enable it in MIE and receive those interrupts in M-mode and then
> selectively inject the interrupt using mvien and mvip.
>
> Also, the spec doesn't mandate the interrupt to be actually supported
> in hardware, which allows M/HS-mode to assert virtual interrupts to
> S/VS-mode that have no connection to any real interrupt events.
>
> This is defined as part of the AIA specification [0], "5.3 Interrupt
> filtering and virtual interrupts for supervisor level" and "6.3.2 Virtual
> interrupts for VS level".
>
> Most of the testing is done by hacking around OpenSBI and linux host.
> The changes for those can be found at [1] and [2].
>
> This is my first foray into the RISC-V QEMU IRQ subsystem. Any feedback would
> be much appreciated.
>
> The change can also be found on github [3].
>
> TODO: This change doesn't support the hypervisor further delegating
> virtual interrupts injected by M-mode down to VS-mode. This applies to
> bits 13:63 only.
>
> Thanks
> Rajnesh
>
> [0]: https://github.com/riscv/riscv-aia/releases/download/1.0-RC4/riscv-interrupts-1.0-RC4.pdf
> [1]: https://github.com/rajnesh-kanwal/opensbi/tree/dev/rkanwal/irq_filter
> [2]: https://github.com/rajnesh-kanwal/linux/commits/dev/rkanwal/aia_irq_filter
> [3]: https://github.com/rajnesh-kanwal/qemu/tree/dev/rkanwal/riscv_irq_filter
>
> v2:
> * Move RISCV_EXCP_SEMIHOST to switch case and remove special handling.
> * Fix linux-user build.
>
> Rajnesh Kanwal (6):
> target/riscv: Without H-mode mask all HS mode inturrupts in mie.
> target/riscv: Check for async flag in case of RISCV_EXCP_SEMIHOST.
> target/riscv: Set VS* bits to one in mideleg when H-Ext is enabled
> target/riscv: Split interrupt logic from riscv_cpu_update_mip.
> target/riscv: Add M-mode virtual interrupt and IRQ filtering support.
> target/riscv: Add HS-mode virtual interrupt and IRQ filtering support.
>
> target/riscv/cpu.c | 9 +-
> target/riscv/cpu.h | 23 ++
> target/riscv/cpu_bits.h | 6 +
> target/riscv/cpu_helper.c | 99 +++++---
> target/riscv/csr.c | 477 ++++++++++++++++++++++++++++++++++----
> target/riscv/machine.c | 6 +
> 6 files changed, 546 insertions(+), 74 deletions(-)
>
^ permalink raw reply [flat|nested] 15+ messages in thread
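The filtering flow the cover letter describes — M-mode clearing mideleg[i], taking the interrupt itself, then selectively re-injecting it through mvien/mvip — comes down to a per-bit decision about what S-mode sees. A hypothetical model (not QEMU code; priority and global-enable plumbing omitted):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Does S-mode observe interrupt i as pending?
 *   mideleg[i]=1: yes, iff the real hardware bit mip[i] is pending.
 *   mideleg[i]=0: the real bit is filtered (taken in M-mode); S-mode only
 *                 sees it if M-mode virtually injects it via mvien/mvip.
 */
static inline bool s_mode_sees_irq(int i, uint64_t mip, uint64_t mideleg,
                                   uint64_t mvien, uint64_t mvip)
{
    uint64_t bit = 1ULL << i;

    if (mideleg & bit) {
        return (mip & bit) != 0;            /* delegated: real pending bit */
    }
    return (mvien & bit) && (mvip & bit);   /* filtered: virtual injection */
}
```

Note the virtual path needs no hardware event at all: M-mode can set mvip[i] for an interrupt that never fires in hardware, which is the "virtual interrupts" half of the feature.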
* Re: [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs and IRQ filtering support
2023-09-06 14:38 ` [PATCH v2 0/6] target/riscv: Add RISC-V Virtual IRQs " Daniel Henrique Barboza
@ 2023-09-21 6:29 ` Rajnesh Kanwal
0 siblings, 0 replies; 15+ messages in thread
From: Rajnesh Kanwal @ 2023-09-21 6:29 UTC (permalink / raw)
To: Daniel Henrique Barboza
Cc: qemu-riscv, qemu-devel, palmer, alistair.francis, bin.meng,
liweiwei, zhiwei_liu, atishp, apatel
Hey Daniel,
Sorry, I was on holiday. There is no new work on this AFAIK. I will
rebase and send a new version for this shortly.
Thanks
Rajnesh
On Wed, Sep 6, 2023 at 3:38 PM Daniel Henrique Barboza
<dbarboza@ventanamicro.com> wrote:
>
> Hey,
>
>
> What's the latest on this work? It seems that all patches are acked:
>
> https://lore.kernel.org/qemu-riscv/20230526162308.22892-1-rkanwal@rivosinc.com/
>
>
> It'll probably conflict with current Alistair's riscv-to-apply.next though, so
> perhaps Rajnesh could gather the acks and send a rebased version.
>
>
> Thanks,
>
> Daniel
>
>
> On 5/26/23 13:23, Rajnesh Kanwal wrote:
> > This series adds M and HS-mode virtual interrupt and IRQ filtering support.
> > This allows inserting virtual interrupts from M/HS-mode into S/VS-mode
> > using mvien/hvien and mvip/hvip csrs. IRQ filtering is a use case of
> > this change, i.e. M-mode can stop delegating an interrupt to S-mode and
> > instead enable it in MIE and receive those interrupts in M-mode and then
> > selectively inject the interrupt using mvien and mvip.
> >
> > Also, the spec doesn't mandate the interrupt to be actually supported
> > in hardware, which allows M/HS-mode to assert virtual interrupts to
> > S/VS-mode that have no connection to any real interrupt events.
> >
> > This is defined as part of the AIA specification [0], "5.3 Interrupt
> > filtering and virtual interrupts for supervisor level" and "6.3.2 Virtual
> > interrupts for VS level".
> >
> > Most of the testing is done by hacking around OpenSBI and a Linux host.
> > The changes for those can be found at [1] and [2].
> >
> > This is my first foray into the RISC-V QEMU IRQ subsystem. Any feedback would
> > be much appreciated.
> >
> > The change can also be found on github [3].
> >
> > TODO: This change doesn't support the hypervisor further delegating
> > virtual interrupts injected by M-mode down to VS-mode. This applies to
> > bits 13:63 only.
> >
> > Thanks
> > Rajnesh
> >
> > [0]: https://github.com/riscv/riscv-aia/releases/download/1.0-RC4/riscv-interrupts-1.0-RC4.pdf
> > [1]: https://github.com/rajnesh-kanwal/opensbi/tree/dev/rkanwal/irq_filter
> > [2]: https://github.com/rajnesh-kanwal/linux/commits/dev/rkanwal/aia_irq_filter
> > [3]: https://github.com/rajnesh-kanwal/qemu/tree/dev/rkanwal/riscv_irq_filter
> >
> > v2:
> > * Move RISCV_EXCP_SEMIHOST to switch case and remove special handling.
> > * Fix linux-user build.
> >
> > Rajnesh Kanwal (6):
> > target/riscv: Without H-mode mask all HS mode inturrupts in mie.
> > target/riscv: Check for async flag in case of RISCV_EXCP_SEMIHOST.
> > target/riscv: Set VS* bits to one in mideleg when H-Ext is enabled
> > target/riscv: Split interrupt logic from riscv_cpu_update_mip.
> > target/riscv: Add M-mode virtual interrupt and IRQ filtering support.
> > target/riscv: Add HS-mode virtual interrupt and IRQ filtering support.
> >
> > target/riscv/cpu.c | 9 +-
> > target/riscv/cpu.h | 23 ++
> > target/riscv/cpu_bits.h | 6 +
> > target/riscv/cpu_helper.c | 99 +++++---
> > target/riscv/csr.c | 477 ++++++++++++++++++++++++++++++++++----
> > target/riscv/machine.c | 6 +
> > 6 files changed, 546 insertions(+), 74 deletions(-)
> >
^ permalink raw reply [flat|nested] 15+ messages in thread
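For the VS-level analogue added in patch 6/6, riscv_cpu_vsirq_pending() combines delegated real interrupts with hvip-only virtual ones, the latter gated by the separately-maintained vsie copy. A simplified model (illustrative names; the VS*-bit repositioning is omitted for brevity):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the VS-level pending computation:
 *   real: hardware interrupts delegated twice, M -> HS -> VS
 *         (mip & mideleg & hideleg)
 *   virt: interrupts injected purely via hvip when hvien[i]=1 and
 *         hideleg[i]=0, enabled through the shadow vsie copy
 */
static inline uint64_t vs_pending_model(uint64_t mip, uint64_t mideleg,
                                        uint64_t hideleg, uint64_t hvip,
                                        uint64_t hvien, uint64_t vsie)
{
    uint64_t real = mip & mideleg & hideleg;
    uint64_t virt = hvip & hvien & ~hideleg & vsie;

    return real | virt;
}
```

When hideleg[i] is set, the hvip path drops out entirely — matching the "1 X: vsip[i] is alias of sip[i], hvip bypassed" row of the truth table in rmw_hvip64().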