* [PATCH v3 0/8] RISC-V KVM virtualize AIA CSRs
From: Anup Patel @ 2023-04-03  9:33 UTC
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

The RISC-V AIA specification is now frozen as per the RISC-V International
process. The latest frozen specification can be found at:
https://github.com/riscv/riscv-aia/releases/download/1.0-RC3/riscv-interrupts-1.0-RC3.pdf

This series implements the first phase of AIA virtualization, which
targets virtualizing the AIA CSRs. It also provides a foundation for the
second phase of AIA virtualization, which will target an in-kernel AIA
irqchip (including both IMSIC and APLIC).

The first two patches are shared with the "Linux RISC-V AIA Support"
series which adds AIA driver support.

To test this series, use the AIA drivers from the "Linux RISC-V AIA Support"
series and KVMTOOL from the riscv_aia_v1 branch at:
https://github.com/avpatel/kvmtool.git

These patches can also be found in the riscv_kvm_aia_csr_v3 branch at:
https://github.com/avpatel/linux.git

Changes since v2:
 - Rebased on Linux-6.3-rc5
 - Split PATCH5 into two separate patches as suggested by Atish.

Changes since v1:
 - Addressed comments from Drew and Conor in PATCH1
 - Use alphabetical ordering for the SMAIA and SSAIA enums in PATCH2
 - Use GENMASK() in PATCH3

Anup Patel (8):
  RISC-V: Add AIA related CSR defines
  RISC-V: Detect AIA CSRs from ISA string
  RISC-V: KVM: Drop the _MASK suffix from hgatp.VMID mask defines
  RISC-V: KVM: Initial skeletal support for AIA
  RISC-V: KVM: Implement subtype for CSR ONE_REG interface
  RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  RISC-V: KVM: Virtualize per-HART AIA CSRs
  RISC-V: KVM: Implement guest external interrupt line management

 arch/riscv/include/asm/csr.h      | 107 ++++-
 arch/riscv/include/asm/hwcap.h    |   8 +
 arch/riscv/include/asm/kvm_aia.h  | 137 +++++++
 arch/riscv/include/asm/kvm_host.h |  14 +-
 arch/riscv/include/uapi/asm/kvm.h |  18 +-
 arch/riscv/kernel/cpu.c           |   2 +
 arch/riscv/kernel/cpufeature.c    |   2 +
 arch/riscv/kvm/Makefile           |   1 +
 arch/riscv/kvm/aia.c              | 624 ++++++++++++++++++++++++++++++
 arch/riscv/kvm/main.c             |  23 +-
 arch/riscv/kvm/mmu.c              |   3 +-
 arch/riscv/kvm/vcpu.c             | 185 +++++++--
 arch/riscv/kvm/vcpu_insn.c        |   1 +
 arch/riscv/kvm/vm.c               |   4 +
 arch/riscv/kvm/vmid.c             |   4 +-
 15 files changed, 1077 insertions(+), 56 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_aia.h
 create mode 100644 arch/riscv/kvm/aia.c

-- 
2.34.1



* [PATCH v3 1/8] RISC-V: Add AIA related CSR defines
From: Anup Patel @ 2023-04-03  9:33 UTC
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel, Conor Dooley,
	Atish Patra, Palmer Dabbelt

The RISC-V AIA specification improves the handling of per-HART local
interrupts in a backward-compatible manner. This patch adds defines for
the new RISC-V AIA CSRs.
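
For illustration only (not part of this patch), a consumer of these
defines could decode the highest-priority pending interrupt reported
by the stopi CSR roughly as follows; the variable names are made up:

	unsigned long topi = csr_read(CSR_STOPI);
	/* Major identity of the highest-priority pending interrupt */
	unsigned long iid = (topi >> TOPI_IID_SHIFT) & TOPI_IID_MASK;
	/* Its priority; topi == 0 means nothing is pending */
	unsigned long iprio = topi & TOPI_IPRIO_MASK;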

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
---
 arch/riscv/include/asm/csr.h | 95 +++++++++++++++++++++++++++++++++++-
 1 file changed, 94 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 0e571f6483d9..3c8d68152bce 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -7,7 +7,7 @@
 #define _ASM_RISCV_CSR_H
 
 #include <asm/asm.h>
-#include <linux/const.h>
+#include <linux/bits.h>
 
 /* Status register flags */
 #define SR_SIE		_AC(0x00000002, UL) /* Supervisor Interrupt Enable */
@@ -73,7 +73,10 @@
 #define IRQ_S_EXT		9
 #define IRQ_VS_EXT		10
 #define IRQ_M_EXT		11
+#define IRQ_S_GEXT		12
 #define IRQ_PMU_OVF		13
+#define IRQ_LOCAL_MAX		(IRQ_PMU_OVF + 1)
+#define IRQ_LOCAL_MASK		GENMASK((IRQ_LOCAL_MAX - 1), 0)
 
 /* Exception causes */
 #define EXC_INST_MISALIGNED	0
@@ -156,6 +159,27 @@
 				 (_AC(1, UL) << IRQ_S_TIMER) | \
 				 (_AC(1, UL) << IRQ_S_EXT))
 
+/* AIA CSR bits */
+#define TOPI_IID_SHIFT		16
+#define TOPI_IID_MASK		GENMASK(11, 0)
+#define TOPI_IPRIO_MASK		GENMASK(7, 0)
+#define TOPI_IPRIO_BITS		8
+
+#define TOPEI_ID_SHIFT		16
+#define TOPEI_ID_MASK		GENMASK(10, 0)
+#define TOPEI_PRIO_MASK		GENMASK(10, 0)
+
+#define ISELECT_IPRIO0		0x30
+#define ISELECT_IPRIO15		0x3f
+#define ISELECT_MASK		GENMASK(8, 0)
+
+#define HVICTL_VTI		BIT(30)
+#define HVICTL_IID		GENMASK(27, 16)
+#define HVICTL_IID_SHIFT	16
+#define HVICTL_DPR		BIT(9)
+#define HVICTL_IPRIOM		BIT(8)
+#define HVICTL_IPRIO		GENMASK(7, 0)
+
 /* xENVCFG flags */
 #define ENVCFG_STCE			(_AC(1, ULL) << 63)
 #define ENVCFG_PBMTE			(_AC(1, ULL) << 62)
@@ -250,6 +274,18 @@
 #define CSR_STIMECMP		0x14D
 #define CSR_STIMECMPH		0x15D
 
+/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_SISELECT		0x150
+#define CSR_SIREG		0x151
+
+/* Supervisor-Level Interrupts (AIA) */
+#define CSR_STOPEI		0x15c
+#define CSR_STOPI		0xdb0
+
+/* Supervisor-Level High-Half CSRs (AIA) */
+#define CSR_SIEH		0x114
+#define CSR_SIPH		0x154
+
 #define CSR_VSSTATUS		0x200
 #define CSR_VSIE		0x204
 #define CSR_VSTVEC		0x205
@@ -279,8 +315,32 @@
 #define CSR_HGATP		0x680
 #define CSR_HGEIP		0xe12
 
+/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
+#define CSR_HVIEN		0x608
+#define CSR_HVICTL		0x609
+#define CSR_HVIPRIO1		0x646
+#define CSR_HVIPRIO2		0x647
+
+/* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA) */
+#define CSR_VSISELECT		0x250
+#define CSR_VSIREG		0x251
+
+/* VS-Level Interrupts (H-extension with AIA) */
+#define CSR_VSTOPEI		0x25c
+#define CSR_VSTOPI		0xeb0
+
+/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
+#define CSR_HIDELEGH		0x613
+#define CSR_HVIENH		0x618
+#define CSR_HVIPH		0x655
+#define CSR_HVIPRIO1H		0x656
+#define CSR_HVIPRIO2H		0x657
+#define CSR_VSIEH		0x214
+#define CSR_VSIPH		0x254
+
 #define CSR_MSTATUS		0x300
 #define CSR_MISA		0x301
+#define CSR_MIDELEG		0x303
 #define CSR_MIE			0x304
 #define CSR_MTVEC		0x305
 #define CSR_MENVCFG		0x30a
@@ -297,6 +357,25 @@
 #define CSR_MIMPID		0xf13
 #define CSR_MHARTID		0xf14
 
+/* Machine-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_MISELECT		0x350
+#define CSR_MIREG		0x351
+
+/* Machine-Level Interrupts (AIA) */
+#define CSR_MTOPEI		0x35c
+#define CSR_MTOPI		0xfb0
+
+/* Virtual Interrupts for Supervisor Level (AIA) */
+#define CSR_MVIEN		0x308
+#define CSR_MVIP		0x309
+
+/* Machine-Level High-Half CSRs (AIA) */
+#define CSR_MIDELEGH		0x313
+#define CSR_MIEH		0x314
+#define CSR_MVIENH		0x318
+#define CSR_MVIPH		0x319
+#define CSR_MIPH		0x354
+
 #ifdef CONFIG_RISCV_M_MODE
 # define CSR_STATUS	CSR_MSTATUS
 # define CSR_IE		CSR_MIE
@@ -307,6 +386,13 @@
 # define CSR_TVAL	CSR_MTVAL
 # define CSR_IP		CSR_MIP
 
+# define CSR_IEH		CSR_MIEH
+# define CSR_ISELECT	CSR_MISELECT
+# define CSR_IREG	CSR_MIREG
+# define CSR_IPH		CSR_MIPH
+# define CSR_TOPEI	CSR_MTOPEI
+# define CSR_TOPI	CSR_MTOPI
+
 # define SR_IE		SR_MIE
 # define SR_PIE		SR_MPIE
 # define SR_PP		SR_MPP
@@ -324,6 +410,13 @@
 # define CSR_TVAL	CSR_STVAL
 # define CSR_IP		CSR_SIP
 
+# define CSR_IEH		CSR_SIEH
+# define CSR_ISELECT	CSR_SISELECT
+# define CSR_IREG	CSR_SIREG
+# define CSR_IPH		CSR_SIPH
+# define CSR_TOPEI	CSR_STOPEI
+# define CSR_TOPI	CSR_STOPI
+
 # define SR_IE		SR_SIE
 # define SR_PIE		SR_SPIE
 # define SR_PP		SR_SPP
-- 
2.34.1



* [PATCH v3 2/8] RISC-V: Detect AIA CSRs from ISA string
From: Anup Patel @ 2023-04-03  9:33 UTC
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel, Atish Patra

We have two extension names for AIA ISA support: Smaia (M-mode AIA CSRs)
and Ssaia (S-mode AIA CSRs).

We extend the ISA string parsing to detect the Smaia and Ssaia extensions.
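
As an illustrative sketch (not part of this patch), kernel code can
then test for these extensions in the usual way once the parsing
below is in place:

	if (riscv_isa_extension_available(NULL, SSAIA))
		pr_info("Ssaia (S-mode AIA CSRs) available\n");
	if (riscv_isa_extension_available(NULL, SMAIA))
		pr_info("Smaia (M-mode AIA CSRs) available\n");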

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
---
 arch/riscv/include/asm/hwcap.h | 2 ++
 arch/riscv/kernel/cpu.c        | 2 ++
 arch/riscv/kernel/cpufeature.c | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 6263a0de1c6a..9c8ae4399565 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -42,6 +42,8 @@
 #define RISCV_ISA_EXT_ZBB		30
 #define RISCV_ISA_EXT_ZICBOM		31
 #define RISCV_ISA_EXT_ZIHINTPAUSE	32
+#define RISCV_ISA_EXT_SSAIA		33
+#define RISCV_ISA_EXT_SMAIA		34
 
 #define RISCV_ISA_EXT_MAX		64
 #define RISCV_ISA_EXT_NAME_LEN_MAX	32
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index 8400f0cc9704..7d20036bcc6c 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -188,8 +188,10 @@ static struct riscv_isa_ext_data isa_ext_arr[] = {
 	__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
 	__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
 	__RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB),
+	__RISCV_ISA_EXT_DATA(ssaia, RISCV_ISA_EXT_SSAIA),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
+	__RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA),
 	__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
 	__RISCV_ISA_EXT_DATA(svpbmt, RISCV_ISA_EXT_SVPBMT),
 	__RISCV_ISA_EXT_DATA("", RISCV_ISA_EXT_MAX),
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 59d58ee0f68d..1b13a5823b90 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -221,8 +221,10 @@ void __init riscv_fill_hwcap(void)
 				}
 			} else {
 				/* sorted alphabetically */
+				SET_ISA_EXT_MAP("ssaia", RISCV_ISA_EXT_SSAIA);
 				SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
 				SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
+				SET_ISA_EXT_MAP("smaia", RISCV_ISA_EXT_SMAIA);
 				SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL);
 				SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
 				SET_ISA_EXT_MAP("zbb", RISCV_ISA_EXT_ZBB);
-- 
2.34.1



* [PATCH v3 3/8] RISC-V: KVM: Drop the _MASK suffix from hgatp.VMID mask defines
From: Anup Patel @ 2023-04-03  9:33 UTC
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel, Atish Patra

The hgatp.VMID mask defines are applied before shifting when extracting
the VMID value from the hgatp CSR value, so, based on the convention
followed in the other parts of asm/csr.h, the hgatp.VMID mask defines
should not have a _MASK suffix.

While we are here, let's use GENMASK() for hgatp.VMID and hgatp.PPN.
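
For example, with the rename the VMID extraction reads naturally as
mask-then-shift (a sketch of the resulting usage):

	vmid = (csr_read(CSR_HGATP) & HGATP_VMID) >> HGATP_VMID_SHIFT;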

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
---
 arch/riscv/include/asm/csr.h | 12 ++++++------
 arch/riscv/kvm/mmu.c         |  3 +--
 arch/riscv/kvm/vmid.c        |  4 ++--
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 3c8d68152bce..3176355cf4e9 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -131,25 +131,25 @@
 
 #define HGATP32_MODE_SHIFT	31
 #define HGATP32_VMID_SHIFT	22
-#define HGATP32_VMID_MASK	_AC(0x1FC00000, UL)
-#define HGATP32_PPN		_AC(0x003FFFFF, UL)
+#define HGATP32_VMID		GENMASK(28, 22)
+#define HGATP32_PPN		GENMASK(21, 0)
 
 #define HGATP64_MODE_SHIFT	60
 #define HGATP64_VMID_SHIFT	44
-#define HGATP64_VMID_MASK	_AC(0x03FFF00000000000, UL)
-#define HGATP64_PPN		_AC(0x00000FFFFFFFFFFF, UL)
+#define HGATP64_VMID		GENMASK(57, 44)
+#define HGATP64_PPN		GENMASK(43, 0)
 
 #define HGATP_PAGE_SHIFT	12
 
 #ifdef CONFIG_64BIT
 #define HGATP_PPN		HGATP64_PPN
 #define HGATP_VMID_SHIFT	HGATP64_VMID_SHIFT
-#define HGATP_VMID_MASK		HGATP64_VMID_MASK
+#define HGATP_VMID		HGATP64_VMID
 #define HGATP_MODE_SHIFT	HGATP64_MODE_SHIFT
 #else
 #define HGATP_PPN		HGATP32_PPN
 #define HGATP_VMID_SHIFT	HGATP32_VMID_SHIFT
-#define HGATP_VMID_MASK		HGATP32_VMID_MASK
+#define HGATP_VMID		HGATP32_VMID
 #define HGATP_MODE_SHIFT	HGATP32_MODE_SHIFT
 #endif
 
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 46d692995830..f2eb47925806 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -755,8 +755,7 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu)
 	unsigned long hgatp = gstage_mode;
 	struct kvm_arch *k = &vcpu->kvm->arch;
 
-	hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) &
-		 HGATP_VMID_MASK;
+	hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID;
 	hgatp |= (k->pgd_phys >> PAGE_SHIFT) & HGATP_PPN;
 
 	csr_write(CSR_HGATP, hgatp);
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index 5246da1c9167..ddc98714ce8e 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -26,9 +26,9 @@ void __init kvm_riscv_gstage_vmid_detect(void)
 
 	/* Figure-out number of VMID bits in HW */
 	old = csr_read(CSR_HGATP);
-	csr_write(CSR_HGATP, old | HGATP_VMID_MASK);
+	csr_write(CSR_HGATP, old | HGATP_VMID);
 	vmid_bits = csr_read(CSR_HGATP);
-	vmid_bits = (vmid_bits & HGATP_VMID_MASK) >> HGATP_VMID_SHIFT;
+	vmid_bits = (vmid_bits & HGATP_VMID) >> HGATP_VMID_SHIFT;
 	vmid_bits = fls_long(vmid_bits);
 	csr_write(CSR_HGATP, old);
 
-- 
2.34.1



* [PATCH v3 4/8] RISC-V: KVM: Initial skeletal support for AIA
From: Anup Patel @ 2023-04-03  9:33 UTC
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel, Atish Patra

To incrementally implement AIA support, we first add minimal skeletal
support which only compiles and detects AIA hardware support at
boot time but does not provide any functionality.
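
The support is gated on a static key, so on non-AIA hardware every
hook quickly becomes a no-op. The pattern followed by the non-stub
hooks is (sketch):

	if (!kvm_riscv_aia_available())
		return;
	/* ... touch AIA state only beyond this point ... */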

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
---
 arch/riscv/include/asm/hwcap.h    |   6 ++
 arch/riscv/include/asm/kvm_aia.h  | 109 ++++++++++++++++++++++++++++++
 arch/riscv/include/asm/kvm_host.h |   7 ++
 arch/riscv/kvm/Makefile           |   1 +
 arch/riscv/kvm/aia.c              |  66 ++++++++++++++++++
 arch/riscv/kvm/main.c             |  22 +++++-
 arch/riscv/kvm/vcpu.c             |  40 ++++++++++-
 arch/riscv/kvm/vcpu_insn.c        |   1 +
 arch/riscv/kvm/vm.c               |   4 ++
 9 files changed, 252 insertions(+), 4 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_aia.h
 create mode 100644 arch/riscv/kvm/aia.c

diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index 9c8ae4399565..8087e11a5cf8 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -48,6 +48,12 @@
 #define RISCV_ISA_EXT_MAX		64
 #define RISCV_ISA_EXT_NAME_LEN_MAX	32
 
+#ifdef CONFIG_RISCV_M_MODE
+#define RISCV_ISA_EXT_SxAIA		RISCV_ISA_EXT_SMAIA
+#else
+#define RISCV_ISA_EXT_SxAIA		RISCV_ISA_EXT_SSAIA
+#endif
+
 #ifndef __ASSEMBLY__
 
 #include <linux/jump_label.h>
diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
new file mode 100644
index 000000000000..258a835d4c32
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (C) 2022 Ventana Micro Systems Inc.
+ *
+ * Authors:
+ *	Anup Patel <apatel@ventanamicro.com>
+ */
+
+#ifndef __KVM_RISCV_AIA_H
+#define __KVM_RISCV_AIA_H
+
+#include <linux/jump_label.h>
+#include <linux/kvm_types.h>
+
+struct kvm_aia {
+	/* In-kernel irqchip created */
+	bool		in_kernel;
+
+	/* In-kernel irqchip initialized */
+	bool		initialized;
+};
+
+struct kvm_vcpu_aia {
+};
+
+#define kvm_riscv_aia_initialized(k)	((k)->arch.aia.initialized)
+
+#define irqchip_in_kernel(k)		((k)->arch.aia.in_kernel)
+
+DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
+#define kvm_riscv_aia_available() \
+	static_branch_unlikely(&kvm_riscv_aia_available)
+
+static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
+{
+}
+
+static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
+{
+}
+
+static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu,
+						     u64 mask)
+{
+	return false;
+}
+
+static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
+{
+}
+
+static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
+{
+}
+
+static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
+{
+}
+
+static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
+					     unsigned long reg_num,
+					     unsigned long *out_val)
+{
+	*out_val = 0;
+	return 0;
+}
+
+static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
+					     unsigned long reg_num,
+					     unsigned long val)
+{
+	return 0;
+}
+
+#define KVM_RISCV_VCPU_AIA_CSR_FUNCS
+
+static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu)
+{
+	return 1;
+}
+
+static inline void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu)
+{
+}
+
+static inline int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
+static inline void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu)
+{
+}
+
+static inline void kvm_riscv_aia_init_vm(struct kvm *kvm)
+{
+}
+
+static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm)
+{
+}
+
+void kvm_riscv_aia_enable(void);
+void kvm_riscv_aia_disable(void);
+int kvm_riscv_aia_init(void);
+void kvm_riscv_aia_exit(void);
+
+#endif
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index cc7da66ee0c0..3157cf748df1 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -14,6 +14,7 @@
 #include <linux/kvm_types.h>
 #include <linux/spinlock.h>
 #include <asm/hwcap.h>
+#include <asm/kvm_aia.h>
 #include <asm/kvm_vcpu_fp.h>
 #include <asm/kvm_vcpu_insn.h>
 #include <asm/kvm_vcpu_sbi.h>
@@ -94,6 +95,9 @@ struct kvm_arch {
 
 	/* Guest Timer */
 	struct kvm_guest_timer timer;
+
+	/* AIA Guest/VM context */
+	struct kvm_aia aia;
 };
 
 struct kvm_cpu_trap {
@@ -221,6 +225,9 @@ struct kvm_vcpu_arch {
 	/* SBI context */
 	struct kvm_vcpu_sbi_context sbi_context;
 
+	/* AIA VCPU context */
+	struct kvm_vcpu_aia aia_context;
+
 	/* Cache pages needed to program page tables with spinlock held */
 	struct kvm_mmu_memory_cache mmu_page_cache;
 
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 278e97c06e0a..8031b8912a0d 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -26,3 +26,4 @@ kvm-y += vcpu_sbi_replace.o
 kvm-y += vcpu_sbi_hsm.o
 kvm-y += vcpu_timer.o
 kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o
+kvm-y += aia.o
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
new file mode 100644
index 000000000000..7a633331cd3e
--- /dev/null
+++ b/arch/riscv/kvm/aia.c
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (C) 2022 Ventana Micro Systems Inc.
+ *
+ * Authors:
+ *	Anup Patel <apatel@ventanamicro.com>
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/hwcap.h>
+
+DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
+
+static void aia_set_hvictl(bool ext_irq_pending)
+{
+	unsigned long hvictl;
+
+	/*
+	 * HVICTL.IID == 9 and HVICTL.IPRIO == 0 represents
+	 * no interrupt in HVICTL.
+	 */
+
+	hvictl = (IRQ_S_EXT << HVICTL_IID_SHIFT) & HVICTL_IID;
+	hvictl |= ext_irq_pending;
+	csr_write(CSR_HVICTL, hvictl);
+}
+
+void kvm_riscv_aia_enable(void)
+{
+	if (!kvm_riscv_aia_available())
+		return;
+
+	aia_set_hvictl(false);
+	csr_write(CSR_HVIPRIO1, 0x0);
+	csr_write(CSR_HVIPRIO2, 0x0);
+#ifdef CONFIG_32BIT
+	csr_write(CSR_HVIPH, 0x0);
+	csr_write(CSR_HIDELEGH, 0x0);
+	csr_write(CSR_HVIPRIO1H, 0x0);
+	csr_write(CSR_HVIPRIO2H, 0x0);
+#endif
+}
+
+void kvm_riscv_aia_disable(void)
+{
+	if (!kvm_riscv_aia_available())
+		return;
+
+	aia_set_hvictl(false);
+}
+
+int kvm_riscv_aia_init(void)
+{
+	if (!riscv_isa_extension_available(NULL, SxAIA))
+		return -ENODEV;
+
+	/* Enable KVM AIA support */
+	static_branch_enable(&kvm_riscv_aia_available);
+
+	return 0;
+}
+
+void kvm_riscv_aia_exit(void)
+{
+}
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 41ad7639a17b..6396352b4e4d 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -44,11 +44,15 @@ int kvm_arch_hardware_enable(void)
 
 	csr_write(CSR_HVIP, 0);
 
+	kvm_riscv_aia_enable();
+
 	return 0;
 }
 
 void kvm_arch_hardware_disable(void)
 {
+	kvm_riscv_aia_disable();
+
 	/*
 	 * After clearing the hideleg CSR, the host kernel will receive
 	 * spurious interrupts if hvip CSR has pending interrupts and the
@@ -63,6 +67,7 @@ void kvm_arch_hardware_disable(void)
 
 static int __init riscv_kvm_init(void)
 {
+	int rc;
 	const char *str;
 
 	if (!riscv_isa_extension_available(NULL, h)) {
@@ -84,6 +89,10 @@ static int __init riscv_kvm_init(void)
 
 	kvm_riscv_gstage_vmid_detect();
 
+	rc = kvm_riscv_aia_init();
+	if (rc && rc != -ENODEV)
+		return rc;
+
 	kvm_info("hypervisor extension available\n");
 
 	switch (kvm_riscv_gstage_mode()) {
@@ -106,12 +115,23 @@ static int __init riscv_kvm_init(void)
 
 	kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
 
-	return kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+	if (kvm_riscv_aia_available())
+		kvm_info("AIA available\n");
+
+	rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+	if (rc) {
+		kvm_riscv_aia_exit();
+		return rc;
+	}
+
+	return 0;
 }
 module_init(riscv_kvm_init);
 
 static void __exit riscv_kvm_exit(void)
 {
+	kvm_riscv_aia_exit();
+
 	kvm_exit();
 }
 module_exit(riscv_kvm_exit);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 02b49cb94561..1fd54ec15622 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -137,6 +137,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	kvm_riscv_vcpu_timer_reset(vcpu);
 
+	kvm_riscv_vcpu_aia_reset(vcpu);
+
 	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
 	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
 
@@ -159,6 +161,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
+	int rc;
 	struct kvm_cpu_context *cntx;
 	struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
 	unsigned long host_isa, i;
@@ -201,6 +204,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	/* setup performance monitoring */
 	kvm_riscv_vcpu_pmu_init(vcpu);
 
+	/* Setup VCPU AIA */
+	rc = kvm_riscv_vcpu_aia_init(vcpu);
+	if (rc)
+		return rc;
+
 	/* Reset VCPU */
 	kvm_riscv_reset_vcpu(vcpu);
 
@@ -220,6 +228,9 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
+	/* Cleanup VCPU AIA context */
+	kvm_riscv_vcpu_aia_deinit(vcpu);
+
 	/* Cleanup VCPU timer */
 	kvm_riscv_vcpu_timer_deinit(vcpu);
 
@@ -741,6 +752,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu)
 		csr->hvip &= ~mask;
 		csr->hvip |= val;
 	}
+
+	/* Flush AIA high interrupts */
+	kvm_riscv_vcpu_aia_flush_interrupts(vcpu);
 }
 
 void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
@@ -766,6 +780,9 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
 		}
 	}
 
+	/* Sync-up AIA high interrupts */
+	kvm_riscv_vcpu_aia_sync_interrupts(vcpu);
+
 	/* Sync-up timer CSRs */
 	kvm_riscv_vcpu_timer_sync(vcpu);
 }
@@ -802,10 +819,15 @@ int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
 
 bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask)
 {
-	unsigned long ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
-			    << VSIP_TO_HVIP_SHIFT) & mask;
+	unsigned long ie;
+
+	ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
+		<< VSIP_TO_HVIP_SHIFT) & mask;
+	if (READ_ONCE(vcpu->arch.irqs_pending) & ie)
+		return true;
 
-	return (READ_ONCE(vcpu->arch.irqs_pending) & ie) ? true : false;
+	/* Check AIA high interrupts */
+	return kvm_riscv_vcpu_aia_has_interrupts(vcpu, mask);
 }
 
 void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu)
@@ -901,6 +923,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context,
 					vcpu->arch.isa);
 
+	kvm_riscv_vcpu_aia_load(vcpu, cpu);
+
 	vcpu->cpu = cpu;
 }
 
@@ -910,6 +934,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 
 	vcpu->cpu = -1;
 
+	kvm_riscv_vcpu_aia_put(vcpu);
+
 	kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context,
 				     vcpu->arch.isa);
 	kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);
@@ -977,6 +1003,7 @@ static void kvm_riscv_update_hvip(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
 
 	csr_write(CSR_HVIP, csr->hvip);
+	kvm_riscv_vcpu_aia_update_hvip(vcpu);
 }
 
 /*
@@ -1051,6 +1078,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
 		local_irq_disable();
 
+		/* Update AIA HW state before entering guest */
+		ret = kvm_riscv_vcpu_aia_update(vcpu);
+		if (ret <= 0) {
+			local_irq_enable();
+			continue;
+		}
+
 		/*
 		 * Ensure we set mode to IN_GUEST_MODE after we disable
 		 * interrupts and before the final VCPU requests check.
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index f689337b78ff..7a6abed41bc1 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -214,6 +214,7 @@ struct csr_func {
 };
 
 static const struct csr_func csr_funcs[] = {
+	KVM_RISCV_VCPU_AIA_CSR_FUNCS
 	KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
 };
 
diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index 65a964d7e70d..bc03d2ddcb51 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -41,6 +41,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		return r;
 	}
 
+	kvm_riscv_aia_init_vm(kvm);
+
 	kvm_riscv_guest_timer_init(kvm);
 
 	return 0;
@@ -49,6 +51,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
 	kvm_destroy_vcpus(kvm);
+
+	kvm_riscv_aia_destroy_vm(kvm);
 }
 
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
-- 
2.34.1



* [PATCH v3 5/8] RISC-V: KVM: Implement subtype for CSR ONE_REG interface
From: Anup Patel @ 2023-04-03  9:33 UTC
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

To make the CSR ONE_REG interface extensible, we implement a subtype
for the CSR ONE_REG IDs. The existing CSR ONE_REG IDs are treated
as subtype = 0 (aka General CSRs).
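
For example, user space would compose the ONE_REG ID of a general CSR
as in the hypothetical sketch below; the subtype bits are zero for
general CSRs, so all existing IDs keep working unchanged:

	/* ID of the general CSR "sstatus" on a 64-bit host */
	__u64 id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
		   KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL |
		   KVM_REG_RISCV_CSR_REG(sstatus);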

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/uapi/asm/kvm.h |  3 +-
 arch/riscv/kvm/vcpu.c             | 88 +++++++++++++++++++++++--------
 2 files changed, 69 insertions(+), 22 deletions(-)

diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 47a7c3958229..182023dc9a51 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -65,7 +65,7 @@ struct kvm_riscv_core {
 #define KVM_RISCV_MODE_S	1
 #define KVM_RISCV_MODE_U	0
 
-/* CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
+/* General CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_csr {
 	unsigned long sstatus;
 	unsigned long sie;
@@ -152,6 +152,7 @@ enum KVM_RISCV_SBI_EXT_ID {
 
 /* Control and status registers are mapped as type 3 */
 #define KVM_REG_RISCV_CSR		(0x03 << KVM_REG_RISCV_TYPE_SHIFT)
+#define KVM_REG_RISCV_CSR_GENERAL	(0x0 << KVM_REG_RISCV_SUBTYPE_SHIFT)
 #define KVM_REG_RISCV_CSR_REG(name)	\
 		(offsetof(struct kvm_riscv_csr, name) / sizeof(unsigned long))
 
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 1fd54ec15622..aca6b4fb7519 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -460,27 +460,72 @@ static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu,
+					  unsigned long reg_num,
+					  unsigned long *out_val)
+{
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
+		return -EINVAL;
+
+	if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
+		kvm_riscv_vcpu_flush_interrupts(vcpu);
+		*out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
+	} else
+		*out_val = ((unsigned long *)csr)[reg_num];
+
+	return 0;
+}
+
+static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu,
+						 unsigned long reg_num,
+						 unsigned long reg_val)
+{
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
+		return -EINVAL;
+
+	if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
+		reg_val &= VSIP_VALID_MASK;
+		reg_val <<= VSIP_TO_HVIP_SHIFT;
+	}
+
+	((unsigned long *)csr)[reg_num] = reg_val;
+
+	if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
+		WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
+
+	return 0;
+}
+
 static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
 				      const struct kvm_one_reg *reg)
 {
-	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+	int rc;
 	unsigned long __user *uaddr =
 			(unsigned long __user *)(unsigned long)reg->addr;
 	unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
 					    KVM_REG_SIZE_MASK |
 					    KVM_REG_RISCV_CSR);
-	unsigned long reg_val;
+	unsigned long reg_val, reg_subtype;
 
 	if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
 		return -EINVAL;
-	if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
-		return -EINVAL;
 
-	if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
-		kvm_riscv_vcpu_flush_interrupts(vcpu);
-		reg_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
-	} else
-		reg_val = ((unsigned long *)csr)[reg_num];
+	reg_subtype = reg_num & KVM_REG_RISCV_SUBTYPE_MASK;
+	reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK;
+	switch (reg_subtype) {
+	case KVM_REG_RISCV_CSR_GENERAL:
+		rc = kvm_riscv_vcpu_general_get_csr(vcpu, reg_num, &reg_val);
+		break;
+	default:
+		rc = -EINVAL;
+		break;
+	}
+	if (rc)
+		return rc;
 
 	if (copy_to_user(uaddr, &reg_val, KVM_REG_SIZE(reg->id)))
 		return -EFAULT;
@@ -491,31 +536,32 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
 static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
 				      const struct kvm_one_reg *reg)
 {
-	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+	int rc;
 	unsigned long __user *uaddr =
 			(unsigned long __user *)(unsigned long)reg->addr;
 	unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
 					    KVM_REG_SIZE_MASK |
 					    KVM_REG_RISCV_CSR);
-	unsigned long reg_val;
+	unsigned long reg_val, reg_subtype;
 
 	if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
 		return -EINVAL;
-	if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
-		return -EINVAL;
 
 	if (copy_from_user(&reg_val, uaddr, KVM_REG_SIZE(reg->id)))
 		return -EFAULT;
 
-	if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
-		reg_val &= VSIP_VALID_MASK;
-		reg_val <<= VSIP_TO_HVIP_SHIFT;
+	reg_subtype = reg_num & KVM_REG_RISCV_SUBTYPE_MASK;
+	reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK;
+	switch (reg_subtype) {
+	case KVM_REG_RISCV_CSR_GENERAL:
+		rc = kvm_riscv_vcpu_general_set_csr(vcpu, reg_num, reg_val);
+		break;
+	default:
+		rc = -EINVAL;
+		break;
 	}
-
-	((unsigned long *)csr)[reg_num] = reg_val;
-
-	if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
-		WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
+	if (rc)
+		return rc;
 
 	return 0;
 }
-- 
2.34.1



* [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
From: Anup Patel @ 2023-04-03  9:33 UTC
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

We implement the ONE_REG interface for AIA CSRs as a separate subtype
under the CSR ONE_REG interface.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/uapi/asm/kvm.h | 8 ++++++++
 arch/riscv/kvm/vcpu.c             | 8 ++++++++
 2 files changed, 16 insertions(+)

diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 182023dc9a51..cbc3e74fa670 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -79,6 +79,10 @@ struct kvm_riscv_csr {
 	unsigned long scounteren;
 };
 
+/* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
+struct kvm_riscv_aia_csr {
+};
+
 /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_timer {
 	__u64 frequency;
@@ -107,6 +111,7 @@ enum KVM_RISCV_ISA_EXT_ID {
 	KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
 	KVM_RISCV_ISA_EXT_ZICBOM,
 	KVM_RISCV_ISA_EXT_ZBB,
+	KVM_RISCV_ISA_EXT_SSAIA,
 	KVM_RISCV_ISA_EXT_MAX,
 };
 
@@ -153,8 +158,11 @@ enum KVM_RISCV_SBI_EXT_ID {
 /* Control and status registers are mapped as type 3 */
 #define KVM_REG_RISCV_CSR		(0x03 << KVM_REG_RISCV_TYPE_SHIFT)
 #define KVM_REG_RISCV_CSR_GENERAL	(0x0 << KVM_REG_RISCV_SUBTYPE_SHIFT)
+#define KVM_REG_RISCV_CSR_AIA		(0x1 << KVM_REG_RISCV_SUBTYPE_SHIFT)
 #define KVM_REG_RISCV_CSR_REG(name)	\
 		(offsetof(struct kvm_riscv_csr, name) / sizeof(unsigned long))
+#define KVM_REG_RISCV_CSR_AIA_REG(name)	\
+	(offsetof(struct kvm_riscv_aia_csr, name) / sizeof(unsigned long))
 
 /* Timer registers are mapped as type 4 */
 #define KVM_REG_RISCV_TIMER		(0x04 << KVM_REG_RISCV_TYPE_SHIFT)
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index aca6b4fb7519..15507cd3a595 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -58,6 +58,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
 	[KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
 	[KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
 
+	KVM_ISA_EXT_ARR(SSAIA),
 	KVM_ISA_EXT_ARR(SSTC),
 	KVM_ISA_EXT_ARR(SVINVAL),
 	KVM_ISA_EXT_ARR(SVPBMT),
@@ -97,6 +98,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
 	case KVM_RISCV_ISA_EXT_C:
 	case KVM_RISCV_ISA_EXT_I:
 	case KVM_RISCV_ISA_EXT_M:
+	case KVM_RISCV_ISA_EXT_SSAIA:
 	case KVM_RISCV_ISA_EXT_SSTC:
 	case KVM_RISCV_ISA_EXT_SVINVAL:
 	case KVM_RISCV_ISA_EXT_ZIHINTPAUSE:
@@ -520,6 +522,9 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
 	case KVM_REG_RISCV_CSR_GENERAL:
 		rc = kvm_riscv_vcpu_general_get_csr(vcpu, reg_num, &reg_val);
 		break;
+	case KVM_REG_RISCV_CSR_AIA:
+		rc = kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, &reg_val);
+		break;
 	default:
 		rc = -EINVAL;
 		break;
@@ -556,6 +561,9 @@ static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
 	case KVM_REG_RISCV_CSR_GENERAL:
 		rc = kvm_riscv_vcpu_general_set_csr(vcpu, reg_num, reg_val);
 		break;
+	case KVM_REG_RISCV_CSR_AIA:
+		rc = kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val);
+		break;
 	default:
 		rc = -EINVAL;
 		break;
-- 
2.34.1



* [PATCH v3 7/8] RISC-V: KVM: Virtualize per-HART AIA CSRs
From: Anup Patel @ 2023-04-03  9:33 UTC
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

The AIA specification introduces per-HART AIA CSRs which primarily
support:
* 64 local interrupts on both RV64 and RV32
* priority for each of the 64 local interrupts
* interrupt filtering for local interrupts

This patch virtualizes the above mentioned AIA CSRs and also extends
the ONE_REG interface to allow user-space to save/restore the Guest/VM
view of these CSRs.
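
For instance, user space could save the guest's siselect through the
new subtype as in this hypothetical sketch (vcpu_fd is assumed to be
an open VCPU file descriptor):

	unsigned long val;
	struct kvm_one_reg reg = {
		.id   = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
			KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA |
			KVM_REG_RISCV_CSR_AIA_REG(siselect),
		.addr = (unsigned long)&val,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		perror("KVM_GET_ONE_REG");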

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/kvm_aia.h  |  88 +++++----
 arch/riscv/include/asm/kvm_host.h |   7 +-
 arch/riscv/include/uapi/asm/kvm.h |   7 +
 arch/riscv/kvm/aia.c              | 317 ++++++++++++++++++++++++++++++
 arch/riscv/kvm/vcpu.c             |  53 +++--
 5 files changed, 415 insertions(+), 57 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 258a835d4c32..1de0717112e5 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -12,6 +12,7 @@
 
 #include <linux/jump_label.h>
 #include <linux/kvm_types.h>
+#include <asm/csr.h>
 
 struct kvm_aia {
 	/* In-kernel irqchip created */
@@ -21,7 +22,22 @@ struct kvm_aia {
 	bool		initialized;
 };
 
+struct kvm_vcpu_aia_csr {
+	unsigned long vsiselect;
+	unsigned long hviprio1;
+	unsigned long hviprio2;
+	unsigned long vsieh;
+	unsigned long hviph;
+	unsigned long hviprio1h;
+	unsigned long hviprio2h;
+};
+
 struct kvm_vcpu_aia {
+	/* CPU AIA CSR context of Guest VCPU */
+	struct kvm_vcpu_aia_csr guest_csr;
+
+	/* CPU AIA CSR context upon Guest VCPU reset */
+	struct kvm_vcpu_aia_csr guest_reset_csr;
 };
 
 #define kvm_riscv_aia_initialized(k)	((k)->arch.aia.initialized)
@@ -32,48 +48,50 @@ DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 #define kvm_riscv_aia_available() \
 	static_branch_unlikely(&kvm_riscv_aia_available)
 
-static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
-{
-}
-
-static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
-{
-}
-
-static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu,
-						     u64 mask)
-{
-	return false;
-}
-
-static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
-{
-}
-
-static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
-{
-}
-
-static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
+#define KVM_RISCV_AIA_IMSIC_TOPEI	(ISELECT_MASK + 1)
+static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
+					       unsigned long isel,
+					       unsigned long *val,
+					       unsigned long new_val,
+					       unsigned long wr_mask)
 {
+	return 0;
 }
 
-static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
-					     unsigned long reg_num,
-					     unsigned long *out_val)
+#ifdef CONFIG_32BIT
+void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu);
+#else
+static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
 {
-	*out_val = 0;
-	return 0;
 }
-
-static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
-					     unsigned long reg_num,
-					     unsigned long val)
+static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
 {
-	return 0;
 }
-
-#define KVM_RISCV_VCPU_AIA_CSR_FUNCS
+#endif
+bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
+
+void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu);
+void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu);
+int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
+			       unsigned long reg_num,
+			       unsigned long *out_val);
+int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
+			       unsigned long reg_num,
+			       unsigned long val);
+
+int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
+				 unsigned int csr_num,
+				 unsigned long *val,
+				 unsigned long new_val,
+				 unsigned long wr_mask);
+int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
+				unsigned long *val, unsigned long new_val,
+				unsigned long wr_mask);
+#define KVM_RISCV_VCPU_AIA_CSR_FUNCS \
+{ .base = CSR_SIREG,      .count = 1, .func = kvm_riscv_vcpu_aia_rmw_ireg }, \
+{ .base = CSR_STOPEI,     .count = 1, .func = kvm_riscv_vcpu_aia_rmw_topei },
 
 static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 3157cf748df1..ee0acccb1d3b 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -204,8 +204,9 @@ struct kvm_vcpu_arch {
 	 * in irqs_pending. Our approach is modeled around multiple producer
 	 * and single consumer problem where the consumer is the VCPU itself.
 	 */
-	unsigned long irqs_pending;
-	unsigned long irqs_pending_mask;
+#define KVM_RISCV_VCPU_NR_IRQS	64
+	DECLARE_BITMAP(irqs_pending, KVM_RISCV_VCPU_NR_IRQS);
+	DECLARE_BITMAP(irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS);
 
 	/* VCPU Timer */
 	struct kvm_vcpu_timer timer;
@@ -334,7 +335,7 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu);
-bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask);
+bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
 void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu);
 
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index cbc3e74fa670..c517e70ddcd6 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -81,6 +81,13 @@ struct kvm_riscv_csr {
 
 /* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
 struct kvm_riscv_aia_csr {
+	unsigned long siselect;
+	unsigned long siprio1;
+	unsigned long siprio2;
+	unsigned long sieh;
+	unsigned long siph;
+	unsigned long siprio1h;
+	unsigned long siprio2h;
 };
 
 /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index 7a633331cd3e..d530912f28bc 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -26,6 +26,323 @@ static void aia_set_hvictl(bool ext_irq_pending)
 	csr_write(CSR_HVICTL, hvictl);
 }
 
+#ifdef CONFIG_32BIT
+void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
+	unsigned long mask, val;
+
+	if (!kvm_riscv_aia_available())
+		return;
+
+	if (READ_ONCE(vcpu->arch.irqs_pending_mask[1])) {
+		mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[1], 0);
+		val = READ_ONCE(vcpu->arch.irqs_pending[1]) & mask;
+
+		csr->hviph &= ~mask;
+		csr->hviph |= val;
+	}
+}
+
+void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
+
+	if (kvm_riscv_aia_available())
+		csr->vsieh = csr_read(CSR_VSIEH);
+}
+#endif
+
+bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
+{
+	unsigned long seip;
+
+	if (!kvm_riscv_aia_available())
+		return false;
+
+#ifdef CONFIG_32BIT
+	if (READ_ONCE(vcpu->arch.irqs_pending[1]) &
+	    (vcpu->arch.aia_context.guest_csr.vsieh & (unsigned long)(mask >> 32)))
+		return true;
+#endif
+
+	seip = vcpu->arch.guest_csr.vsie;
+	seip &= (unsigned long)mask;
+	seip &= BIT(IRQ_S_EXT);
+	if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
+		return false;
+
+	return false;
+}
+
+void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	if (!kvm_riscv_aia_available())
+		return;
+
+#ifdef CONFIG_32BIT
+	csr_write(CSR_HVIPH, vcpu->arch.aia_context.guest_csr.hviph);
+#endif
+	aia_set_hvictl((csr->hvip & BIT(IRQ_VS_EXT)) ? true : false);
+}
+
+void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
+
+	if (!kvm_riscv_aia_available())
+		return;
+
+	csr_write(CSR_VSISELECT, csr->vsiselect);
+	csr_write(CSR_HVIPRIO1, csr->hviprio1);
+	csr_write(CSR_HVIPRIO2, csr->hviprio2);
+#ifdef CONFIG_32BIT
+	csr_write(CSR_VSIEH, csr->vsieh);
+	csr_write(CSR_HVIPH, csr->hviph);
+	csr_write(CSR_HVIPRIO1H, csr->hviprio1h);
+	csr_write(CSR_HVIPRIO2H, csr->hviprio2h);
+#endif
+}
+
+void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
+
+	if (!kvm_riscv_aia_available())
+		return;
+
+	csr->vsiselect = csr_read(CSR_VSISELECT);
+	csr->hviprio1 = csr_read(CSR_HVIPRIO1);
+	csr->hviprio2 = csr_read(CSR_HVIPRIO2);
+#ifdef CONFIG_32BIT
+	csr->vsieh = csr_read(CSR_VSIEH);
+	csr->hviph = csr_read(CSR_HVIPH);
+	csr->hviprio1h = csr_read(CSR_HVIPRIO1H);
+	csr->hviprio2h = csr_read(CSR_HVIPRIO2H);
+#endif
+}
+
+int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
+			       unsigned long reg_num,
+			       unsigned long *out_val)
+{
+	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
+
+	if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
+		return -EINVAL;
+
+	*out_val = 0;
+	if (kvm_riscv_aia_available())
+		*out_val = ((unsigned long *)csr)[reg_num];
+
+	return 0;
+}
+
+int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
+			       unsigned long reg_num,
+			       unsigned long val)
+{
+	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
+
+	if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
+		return -EINVAL;
+
+	if (kvm_riscv_aia_available()) {
+		((unsigned long *)csr)[reg_num] = val;
+
+#ifdef CONFIG_32BIT
+		if (reg_num == KVM_REG_RISCV_CSR_AIA_REG(siph))
+			WRITE_ONCE(vcpu->arch.irqs_pending_mask[1], 0);
+#endif
+	}
+
+	return 0;
+}
+
+int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
+				 unsigned int csr_num,
+				 unsigned long *val,
+				 unsigned long new_val,
+				 unsigned long wr_mask)
+{
+	/* If AIA not available then redirect trap */
+	if (!kvm_riscv_aia_available())
+		return KVM_INSN_ILLEGAL_TRAP;
+
+	/* If AIA not initialized then forward to user space */
+	if (!kvm_riscv_aia_initialized(vcpu->kvm))
+		return KVM_INSN_EXIT_TO_USER_SPACE;
+
+	return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, KVM_RISCV_AIA_IMSIC_TOPEI,
+					    val, new_val, wr_mask);
+}
+
+/*
+ * External IRQ priority is always read-only zero. This means the
+ * default priority order is always preferred for external IRQs unless
+ * HVICTL.IID == 9 and HVICTL.IPRIO != 0.
+ */
+static int aia_irq2bitpos[] = {
+0,     8,   -1,   -1,   16,   24,   -1,   -1, /* 0 - 7 */
+32,   -1,   -1,   -1,   -1,   40,   48,   56, /* 8 - 15 */
+64,   72,   80,   88,   96,  104,  112,  120, /* 16 - 23 */
+-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 24 - 31 */
+-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 32 - 39 */
+-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 40 - 47 */
+-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 48 - 55 */
+-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 56 - 63 */
+};
+
+static u8 aia_get_iprio8(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	unsigned long hviprio;
+	int bitpos = aia_irq2bitpos[irq];
+
+	if (bitpos < 0)
+		return 0;
+
+	switch (bitpos / BITS_PER_LONG) {
+	case 0:
+		hviprio = csr_read(CSR_HVIPRIO1);
+		break;
+	case 1:
+#ifndef CONFIG_32BIT
+		hviprio = csr_read(CSR_HVIPRIO2);
+		break;
+#else
+		hviprio = csr_read(CSR_HVIPRIO1H);
+		break;
+	case 2:
+		hviprio = csr_read(CSR_HVIPRIO2);
+		break;
+	case 3:
+		hviprio = csr_read(CSR_HVIPRIO2H);
+		break;
+#endif
+	default:
+		return 0;
+	};
+
+	return (hviprio >> (bitpos % BITS_PER_LONG)) & TOPI_IPRIO_MASK;
+}
+
+static void aia_set_iprio8(struct kvm_vcpu *vcpu, unsigned int irq, u8 prio)
+{
+	unsigned long hviprio;
+	int bitpos = aia_irq2bitpos[irq];
+
+	if (bitpos < 0)
+		return;
+
+	switch (bitpos / BITS_PER_LONG) {
+	case 0:
+		hviprio = csr_read(CSR_HVIPRIO1);
+		break;
+	case 1:
+#ifndef CONFIG_32BIT
+		hviprio = csr_read(CSR_HVIPRIO2);
+		break;
+#else
+		hviprio = csr_read(CSR_HVIPRIO1H);
+		break;
+	case 2:
+		hviprio = csr_read(CSR_HVIPRIO2);
+		break;
+	case 3:
+		hviprio = csr_read(CSR_HVIPRIO2H);
+		break;
+#endif
+	default:
+		return;
+	};
+
+	hviprio &= ~((unsigned long)TOPI_IPRIO_MASK <<
+		     (bitpos % BITS_PER_LONG));
+	hviprio |= (unsigned long)prio << (bitpos % BITS_PER_LONG);
+
+	switch (bitpos / BITS_PER_LONG) {
+	case 0:
+		csr_write(CSR_HVIPRIO1, hviprio);
+		break;
+	case 1:
+#ifndef CONFIG_32BIT
+		csr_write(CSR_HVIPRIO2, hviprio);
+		break;
+#else
+		csr_write(CSR_HVIPRIO1H, hviprio);
+		break;
+	case 2:
+		csr_write(CSR_HVIPRIO2, hviprio);
+		break;
+	case 3:
+		csr_write(CSR_HVIPRIO2H, hviprio);
+		break;
+#endif
+	default:
+		return;
+	};
+}
+
+static int aia_rmw_iprio(struct kvm_vcpu *vcpu, unsigned int isel,
+			 unsigned long *val, unsigned long new_val,
+			 unsigned long wr_mask)
+{
+	int i, firq, nirqs;
+	unsigned long old_val;
+
+#ifndef CONFIG_32BIT
+	if (isel & 0x1)
+		return KVM_INSN_ILLEGAL_TRAP;
+#endif
+
+	nirqs = 4 * (BITS_PER_LONG / 32);
+	firq = ((isel - ISELECT_IPRIO0) / (BITS_PER_LONG / 32)) * (nirqs);
+
+	old_val = 0;
+	for (i = 0; i < nirqs; i++)
+		old_val |= (unsigned long)aia_get_iprio8(vcpu, firq + i) <<
+			   (TOPI_IPRIO_BITS * i);
+
+	if (val)
+		*val = old_val;
+
+	if (wr_mask) {
+		new_val = (old_val & ~wr_mask) | (new_val & wr_mask);
+		for (i = 0; i < nirqs; i++)
+			aia_set_iprio8(vcpu, firq + i,
+			(new_val >> (TOPI_IPRIO_BITS * i)) & TOPI_IPRIO_MASK);
+	}
+
+	return KVM_INSN_CONTINUE_NEXT_SEPC;
+}
+
+#define IMSIC_FIRST	0x70
+#define IMSIC_LAST	0xff
+int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
+				unsigned long *val, unsigned long new_val,
+				unsigned long wr_mask)
+{
+	unsigned int isel;
+
+	/* If AIA not available then redirect trap */
+	if (!kvm_riscv_aia_available())
+		return KVM_INSN_ILLEGAL_TRAP;
+
+	/* First try to emulate in kernel space */
+	isel = csr_read(CSR_VSISELECT) & ISELECT_MASK;
+	if (isel >= ISELECT_IPRIO0 && isel <= ISELECT_IPRIO15)
+		return aia_rmw_iprio(vcpu, isel, val, new_val, wr_mask);
+	else if (isel >= IMSIC_FIRST && isel <= IMSIC_LAST &&
+		 kvm_riscv_aia_initialized(vcpu->kvm))
+		return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, isel, val, new_val,
+						    wr_mask);
+
+	/* We can't handle it here so redirect to user space */
+	return KVM_INSN_EXIT_TO_USER_SPACE;
+}
+
 void kvm_riscv_aia_enable(void)
 {
 	if (!kvm_riscv_aia_available())
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 15507cd3a595..30acf3ebdc3d 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -141,8 +141,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	kvm_riscv_vcpu_aia_reset(vcpu);
 
-	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
-	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
+	bitmap_zero(vcpu->arch.irqs_pending, KVM_RISCV_VCPU_NR_IRQS);
+	bitmap_zero(vcpu->arch.irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS);
 
 	kvm_riscv_vcpu_pmu_reset(vcpu);
 
@@ -474,6 +474,7 @@ static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu,
 	if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
 		kvm_riscv_vcpu_flush_interrupts(vcpu);
 		*out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
+		*out_val |= csr->hvip & ~IRQ_LOCAL_MASK;
 	} else
 		*out_val = ((unsigned long *)csr)[reg_num];
 
@@ -497,7 +498,7 @@ static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu,
 	((unsigned long *)csr)[reg_num] = reg_val;
 
 	if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
-		WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
+		WRITE_ONCE(vcpu->arch.irqs_pending_mask[0], 0);
 
 	return 0;
 }
@@ -799,9 +800,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
 	unsigned long mask, val;
 
-	if (READ_ONCE(vcpu->arch.irqs_pending_mask)) {
-		mask = xchg_acquire(&vcpu->arch.irqs_pending_mask, 0);
-		val = READ_ONCE(vcpu->arch.irqs_pending) & mask;
+	if (READ_ONCE(vcpu->arch.irqs_pending_mask[0])) {
+		mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[0], 0);
+		val = READ_ONCE(vcpu->arch.irqs_pending[0]) & mask;
 
 		csr->hvip &= ~mask;
 		csr->hvip |= val;
@@ -825,12 +826,12 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
 	if ((csr->hvip ^ hvip) & (1UL << IRQ_VS_SOFT)) {
 		if (hvip & (1UL << IRQ_VS_SOFT)) {
 			if (!test_and_set_bit(IRQ_VS_SOFT,
-					      &v->irqs_pending_mask))
-				set_bit(IRQ_VS_SOFT, &v->irqs_pending);
+					      v->irqs_pending_mask))
+				set_bit(IRQ_VS_SOFT, v->irqs_pending);
 		} else {
 			if (!test_and_set_bit(IRQ_VS_SOFT,
-					      &v->irqs_pending_mask))
-				clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
+					      v->irqs_pending_mask))
+				clear_bit(IRQ_VS_SOFT, v->irqs_pending);
 		}
 	}
 
@@ -843,14 +844,20 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
 
 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
 {
-	if (irq != IRQ_VS_SOFT &&
+	/*
+	 * We only allow VS-mode software, timer, and external
+	 * interrupts when irq is one of the local interrupts
+	 * defined by RISC-V privilege specification.
+	 */
+	if (irq < IRQ_LOCAL_MAX &&
+	    irq != IRQ_VS_SOFT &&
 	    irq != IRQ_VS_TIMER &&
 	    irq != IRQ_VS_EXT)
 		return -EINVAL;
 
-	set_bit(irq, &vcpu->arch.irqs_pending);
+	set_bit(irq, vcpu->arch.irqs_pending);
 	smp_mb__before_atomic();
-	set_bit(irq, &vcpu->arch.irqs_pending_mask);
+	set_bit(irq, vcpu->arch.irqs_pending_mask);
 
 	kvm_vcpu_kick(vcpu);
 
@@ -859,25 +866,33 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
 
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
 {
-	if (irq != IRQ_VS_SOFT &&
+	/*
+	 * We only allow VS-mode software, timer, and external
+	 * interrupts when irq is one of the local interrupts
+	 * defined by RISC-V privilege specification.
+	 */
+	if (irq < IRQ_LOCAL_MAX &&
+	    irq != IRQ_VS_SOFT &&
 	    irq != IRQ_VS_TIMER &&
 	    irq != IRQ_VS_EXT)
 		return -EINVAL;
 
-	clear_bit(irq, &vcpu->arch.irqs_pending);
+	clear_bit(irq, vcpu->arch.irqs_pending);
 	smp_mb__before_atomic();
-	set_bit(irq, &vcpu->arch.irqs_pending_mask);
+	set_bit(irq, vcpu->arch.irqs_pending_mask);
 
 	return 0;
 }
 
-bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask)
+bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
 {
 	unsigned long ie;
 
 	ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
-		<< VSIP_TO_HVIP_SHIFT) & mask;
-	if (READ_ONCE(vcpu->arch.irqs_pending) & ie)
+		<< VSIP_TO_HVIP_SHIFT) & (unsigned long)mask;
+	ie |= vcpu->arch.guest_csr.vsie & ~IRQ_LOCAL_MASK &
+		(unsigned long)mask;
+	if (READ_ONCE(vcpu->arch.irqs_pending[0]) & ie)
 		return true;
 
 	/* Check AIA high interrupts */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v3 8/8] RISC-V: KVM: Implement guest external interrupt line management
  2023-04-03  9:33 [PATCH v3 0/8] RISC-V KVM virtualize AIA CSRs Anup Patel
                   ` (6 preceding siblings ...)
  2023-04-03  9:33 ` [PATCH v3 7/8] RISC-V: KVM: Virtualize per-HART " Anup Patel
@ 2023-04-03  9:33 ` Anup Patel
  2023-04-04 12:45   ` Andrew Jones
  7 siblings, 1 reply; 30+ messages in thread
From: Anup Patel @ 2023-04-03  9:33 UTC (permalink / raw)
  To: Paolo Bonzini, Atish Patra
  Cc: Palmer Dabbelt, Paul Walmsley, Andrew Jones, Anup Patel, kvm,
	kvm-riscv, linux-riscv, linux-kernel, Anup Patel

The RISC-V host will have one guest external interrupt line for each
VS-level IMSIC associated with a HART. The guest external interrupt
lines are per-HART resources, and the hypervisor can use the HGEIE,
HGEIP, and HIE CSRs to manage them.
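
A minimal sketch (not part of this patch; 'hgei' here is a hypothetical
line number in [1, kvm_riscv_aia_nr_hgei]) of how these CSRs drive a
single guest external interrupt line:

	/*
	 * Sketch only: enable wakeup on line 'hgei' and report whether
	 * that line is currently pending for the owning VCPU.
	 */
	static bool hgei_pending_and_wakeon(int hgei)
	{
		csr_set(CSR_HGEIE, BIT(hgei));	/* wake on this line */
		return !!(csr_read(CSR_HGEIP) & BIT(hgei));
	}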

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/kvm_aia.h |  10 ++
 arch/riscv/kvm/aia.c             | 241 +++++++++++++++++++++++++++++++
 arch/riscv/kvm/main.c            |   3 +-
 arch/riscv/kvm/vcpu.c            |   2 +
 4 files changed, 255 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 1de0717112e5..0938e0cadf80 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -44,10 +44,15 @@ struct kvm_vcpu_aia {
 
 #define irqchip_in_kernel(k)		((k)->arch.aia.in_kernel)
 
+extern unsigned int kvm_riscv_aia_nr_hgei;
 DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 #define kvm_riscv_aia_available() \
 	static_branch_unlikely(&kvm_riscv_aia_available)
 
+static inline void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
+{
+}
+
 #define KVM_RISCV_AIA_IMSIC_TOPEI	(ISELECT_MASK + 1)
 static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
 					       unsigned long isel,
@@ -119,6 +124,11 @@ static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm)
 {
 }
 
+int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
+			     void __iomem **hgei_va, phys_addr_t *hgei_pa);
+void kvm_riscv_aia_free_hgei(int cpu, int hgei);
+void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable);
+
 void kvm_riscv_aia_enable(void);
 void kvm_riscv_aia_disable(void);
 int kvm_riscv_aia_init(void);
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index d530912f28bc..1264783e7c4d 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -7,11 +7,46 @@
  *	Anup Patel <apatel@ventanamicro.com>
  */
 
+#include <linux/bitops.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
 #include <linux/kvm_host.h>
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
 #include <asm/hwcap.h>
 
+struct aia_hgei_control {
+	raw_spinlock_t lock;
+	unsigned long free_bitmap;
+	struct kvm_vcpu *owners[BITS_PER_LONG];
+};
+static DEFINE_PER_CPU(struct aia_hgei_control, aia_hgei);
+static int hgei_parent_irq;
+
+unsigned int kvm_riscv_aia_nr_hgei;
 DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
 
+static int aia_find_hgei(struct kvm_vcpu *owner)
+{
+	int i, hgei;
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	hgei = -1;
+	for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) {
+		if (hgctrl->owners[i] == owner) {
+			hgei = i;
+			break;
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	return hgei;
+}
+
 static void aia_set_hvictl(bool ext_irq_pending)
 {
 	unsigned long hvictl;
@@ -55,6 +90,7 @@ void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
 
 bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
 {
+	int hgei;
 	unsigned long seip;
 
 	if (!kvm_riscv_aia_available())
@@ -72,6 +108,10 @@ bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
 	if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
 		return false;
 
+	hgei = aia_find_hgei(vcpu);
+	if (hgei > 0)
+		return (csr_read(CSR_HGEIP) & BIT(hgei)) ? true : false;
+
 	return false;
 }
 
@@ -343,6 +383,144 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 	return KVM_INSN_EXIT_TO_USER_SPACE;
 }
 
+int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
+			     void __iomem **hgei_va, phys_addr_t *hgei_pa)
+{
+	int ret = -ENOENT;
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+
+	if (!kvm_riscv_aia_available())
+		return -ENODEV;
+	if (!hgctrl)
+		return -ENODEV;
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	if (hgctrl->free_bitmap) {
+		ret = __ffs(hgctrl->free_bitmap);
+		hgctrl->free_bitmap &= ~BIT(ret);
+		hgctrl->owners[ret] = owner;
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	/* TODO: To be updated later by AIA in-kernel irqchip support */
+	if (hgei_va)
+		*hgei_va = NULL;
+	if (hgei_pa)
+		*hgei_pa = 0;
+
+	return ret;
+}
+
+void kvm_riscv_aia_free_hgei(int cpu, int hgei)
+{
+	unsigned long flags;
+	struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+
+	if (!kvm_riscv_aia_available() || !hgctrl)
+		return;
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	if (hgei > 0 && hgei <= kvm_riscv_aia_nr_hgei) {
+		if (!(hgctrl->free_bitmap & BIT(hgei))) {
+			hgctrl->free_bitmap |= BIT(hgei);
+			hgctrl->owners[hgei] = NULL;
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+}
+
+void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable)
+{
+	int hgei;
+
+	if (!kvm_riscv_aia_available())
+		return;
+
+	hgei = aia_find_hgei(owner);
+	if (hgei > 0) {
+		if (enable)
+			csr_set(CSR_HGEIE, BIT(hgei));
+		else
+			csr_clear(CSR_HGEIE, BIT(hgei));
+	}
+}
+
+static irqreturn_t hgei_interrupt(int irq, void *dev_id)
+{
+	int i;
+	unsigned long hgei_mask, flags;
+	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
+
+	hgei_mask = csr_read(CSR_HGEIP) & csr_read(CSR_HGEIE);
+	csr_clear(CSR_HGEIE, hgei_mask);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	for_each_set_bit(i, &hgei_mask, BITS_PER_LONG) {
+		if (hgctrl->owners[i])
+			kvm_vcpu_kick(hgctrl->owners[i]);
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+	return IRQ_HANDLED;
+}
+
+static int aia_hgei_init(void)
+{
+	int cpu, rc;
+	struct irq_domain *domain;
+	struct aia_hgei_control *hgctrl;
+
+	/* Initialize per-CPU guest external interrupt line management */
+	for_each_possible_cpu(cpu) {
+		hgctrl = per_cpu_ptr(&aia_hgei, cpu);
+		raw_spin_lock_init(&hgctrl->lock);
+		if (kvm_riscv_aia_nr_hgei) {
+			hgctrl->free_bitmap =
+				BIT(kvm_riscv_aia_nr_hgei + 1) - 1;
+			hgctrl->free_bitmap &= ~BIT(0);
+		} else
+			hgctrl->free_bitmap = 0;
+	}
+
+	/* Find INTC irq domain */
+	domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(),
+					  DOMAIN_BUS_ANY);
+	if (!domain) {
+		kvm_err("unable to find INTC domain\n");
+		return -ENOENT;
+	}
+
+	/* Map per-CPU SGEI interrupt from INTC domain */
+	hgei_parent_irq = irq_create_mapping(domain, IRQ_S_GEXT);
+	if (!hgei_parent_irq) {
+		kvm_err("unable to map SGEI IRQ\n");
+		return -ENOMEM;
+	}
+
+	/* Request per-CPU SGEI interrupt */
+	rc = request_percpu_irq(hgei_parent_irq, hgei_interrupt,
+				"riscv-kvm", &aia_hgei);
+	if (rc) {
+		kvm_err("failed to request SGEI IRQ\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+static void aia_hgei_exit(void)
+{
+	/* Free per-CPU SGEI interrupt */
+	free_percpu_irq(hgei_parent_irq, &aia_hgei);
+}
+
 void kvm_riscv_aia_enable(void)
 {
 	if (!kvm_riscv_aia_available())
@@ -357,21 +535,79 @@ void kvm_riscv_aia_enable(void)
 	csr_write(CSR_HVIPRIO1H, 0x0);
 	csr_write(CSR_HVIPRIO2H, 0x0);
 #endif
+
+	/* Enable per-CPU SGEI interrupt */
+	enable_percpu_irq(hgei_parent_irq,
+			  irq_get_trigger_type(hgei_parent_irq));
+	csr_set(CSR_HIE, BIT(IRQ_S_GEXT));
 }
 
 void kvm_riscv_aia_disable(void)
 {
+	int i;
+	unsigned long flags;
+	struct kvm_vcpu *vcpu;
+	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
+
 	if (!kvm_riscv_aia_available())
 		return;
 
+	/* Disable per-CPU SGEI interrupt */
+	csr_clear(CSR_HIE, BIT(IRQ_S_GEXT));
+	disable_percpu_irq(hgei_parent_irq);
+
 	aia_set_hvictl(false);
+
+	raw_spin_lock_irqsave(&hgctrl->lock, flags);
+
+	for (i = 0; i <= kvm_riscv_aia_nr_hgei; i++) {
+		vcpu = hgctrl->owners[i];
+		if (!vcpu)
+			continue;
+
+		/*
+		 * We release hgctrl->lock before notifying IMSIC
+		 * so that we don't have lock ordering issues.
+		 */
+		raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
+
+		/* Notify IMSIC */
+		kvm_riscv_vcpu_aia_imsic_release(vcpu);
+
+		/*
+		 * Wakeup VCPU if it was blocked so that it can
+		 * run on other HARTs
+		 */
+		if (csr_read(CSR_HGEIE) & BIT(i)) {
+			csr_clear(CSR_HGEIE, BIT(i));
+			kvm_vcpu_kick(vcpu);
+		}
+
+		raw_spin_lock_irqsave(&hgctrl->lock, flags);
+	}
+
+	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
 }
 
 int kvm_riscv_aia_init(void)
 {
+	int rc;
+
 	if (!riscv_isa_extension_available(NULL, SxAIA))
 		return -ENODEV;
 
+	/* Figure-out number of bits in HGEIE */
+	csr_write(CSR_HGEIE, -1UL);
+	kvm_riscv_aia_nr_hgei = fls_long(csr_read(CSR_HGEIE));
+	csr_write(CSR_HGEIE, 0);
+	if (kvm_riscv_aia_nr_hgei)
+		kvm_riscv_aia_nr_hgei--;
+
+	/* Initialize guest external interrupt line management */
+	rc = aia_hgei_init();
+	if (rc)
+		return rc;
+
 	/* Enable KVM AIA support */
 	static_branch_enable(&kvm_riscv_aia_available);
 
@@ -380,4 +616,9 @@ int kvm_riscv_aia_init(void)
 
 void kvm_riscv_aia_exit(void)
 {
+	if (!kvm_riscv_aia_available())
+		return;
+
+	/* Cleanup the HGEI state */
+	aia_hgei_exit();
 }
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 6396352b4e4d..b0b46f48f31e 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -116,7 +116,8 @@ static int __init riscv_kvm_init(void)
 	kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
 
 	if (kvm_riscv_aia_available())
-		kvm_info("AIA available\n");
+		kvm_info("AIA available with %d guest external interrupts\n",
+			 kvm_riscv_aia_nr_hgei);
 
 	rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
 	if (rc) {
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 30acf3ebdc3d..eace51dd896f 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -249,10 +249,12 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
+	kvm_riscv_aia_wakeon_hgei(vcpu, true);
 }
 
 void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 {
+	kvm_riscv_aia_wakeon_hgei(vcpu, false);
 }
 
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/8] RISC-V: Detect AIA CSRs from ISA string
  2023-04-03  9:33 ` [PATCH v3 2/8] RISC-V: Detect AIA CSRs from ISA string Anup Patel
@ 2023-04-03  9:39   ` Conor Dooley
  2023-04-03 12:05     ` Anup Patel
  0 siblings, 1 reply; 30+ messages in thread
From: Conor Dooley @ 2023-04-03  9:39 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
	Andrew Jones, Anup Patel, kvm, kvm-riscv, linux-riscv,
	linux-kernel, Atish Patra

On Mon, Apr 03, 2023 at 03:03:04PM +0530, Anup Patel wrote:

> diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> index 59d58ee0f68d..1b13a5823b90 100644
> --- a/arch/riscv/kernel/cpufeature.c
> +++ b/arch/riscv/kernel/cpufeature.c
> @@ -221,8 +221,10 @@ void __init riscv_fill_hwcap(void)
>  				}
>  			} else {
>  				/* sorted alphabetically */
                                   ^^^^^^^^^^^^^^^^^^^^^

> +				SET_ISA_EXT_MAP("ssaia", RISCV_ISA_EXT_SSAIA);
>  				SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
>  				SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
> +				SET_ISA_EXT_MAP("smaia", RISCV_ISA_EXT_SMAIA);

This entry has been added in an incorrect order chief :/
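
i.e. "smaia" sorts before "ssaia", so presumably (untested):

				SET_ISA_EXT_MAP("smaia", RISCV_ISA_EXT_SMAIA);
				SET_ISA_EXT_MAP("ssaia", RISCV_ISA_EXT_SSAIA);
				SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
				SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);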

>  				SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL);
>  				SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
>  				SET_ISA_EXT_MAP("zbb", RISCV_ISA_EXT_ZBB);


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-03  9:33 ` [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs Anup Patel
@ 2023-04-03 11:31   ` Andrew Jones
  2023-04-03 12:04     ` Anup Patel
  2023-04-03 12:27   ` Andrew Jones
  2023-04-04  0:55   ` Atish Patra
  2 siblings, 1 reply; 30+ messages in thread
From: Andrew Jones @ 2023-04-03 11:31 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 03, 2023 at 03:03:08PM +0530, Anup Patel wrote:
> We implement the ONE_REG interface for AIA CSRs as a separate subtype
> under the CSR ONE_REG interface.
> 
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/uapi/asm/kvm.h | 8 ++++++++
>  arch/riscv/kvm/vcpu.c             | 8 ++++++++
>  2 files changed, 16 insertions(+)
> 
> diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> index 182023dc9a51..cbc3e74fa670 100644
> --- a/arch/riscv/include/uapi/asm/kvm.h
> +++ b/arch/riscv/include/uapi/asm/kvm.h
> @@ -79,6 +79,10 @@ struct kvm_riscv_csr {
>  	unsigned long scounteren;
>  };
>  
> +/* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> +struct kvm_riscv_aia_csr {
> +};
> +
>  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
>  struct kvm_riscv_timer {
>  	__u64 frequency;
> @@ -107,6 +111,7 @@ enum KVM_RISCV_ISA_EXT_ID {
>  	KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
>  	KVM_RISCV_ISA_EXT_ZICBOM,
>  	KVM_RISCV_ISA_EXT_ZBB,

Looks like this patch is also based on "[PATCH] RISC-V: KVM: Allow Zbb
extension for Guest/VM"

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 4/8] RISC-V: KVM: Initial skeletal support for AIA
  2023-04-03  9:33 ` [PATCH v3 4/8] RISC-V: KVM: Initial skeletal support for AIA Anup Patel
@ 2023-04-03 12:00   ` Andrew Jones
  2023-04-03 23:49   ` Atish Patra
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2023-04-03 12:00 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel,
	Atish Patra

On Mon, Apr 03, 2023 at 03:03:06PM +0530, Anup Patel wrote:
> To incrementally implement AIA support, we first add minimal skeletal
> support which only compiles and detects AIA hardware support at boot
> time but does not provide any functionality.
> 
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> Reviewed-by: Atish Patra <atishp@rivosinc.com>
> ---
>  arch/riscv/include/asm/hwcap.h    |   6 ++
>  arch/riscv/include/asm/kvm_aia.h  | 109 ++++++++++++++++++++++++++++++
>  arch/riscv/include/asm/kvm_host.h |   7 ++
>  arch/riscv/kvm/Makefile           |   1 +
>  arch/riscv/kvm/aia.c              |  66 ++++++++++++++++++
>  arch/riscv/kvm/main.c             |  22 +++++-
>  arch/riscv/kvm/vcpu.c             |  40 ++++++++++-
>  arch/riscv/kvm/vcpu_insn.c        |   1 +
>  arch/riscv/kvm/vm.c               |   4 ++
>  9 files changed, 252 insertions(+), 4 deletions(-)
>  create mode 100644 arch/riscv/include/asm/kvm_aia.h
>  create mode 100644 arch/riscv/kvm/aia.c
>

Reviewed-by: Andrew Jones <ajones@ventanamicro.com>

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-03 11:31   ` Andrew Jones
@ 2023-04-03 12:04     ` Anup Patel
  2023-04-03 12:23       ` Andrew Jones
  0 siblings, 1 reply; 30+ messages in thread
From: Anup Patel @ 2023-04-03 12:04 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
	Paul Walmsley, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 3, 2023 at 5:01 PM Andrew Jones <ajones@ventanamicro.com> wrote:
>
> On Mon, Apr 03, 2023 at 03:03:08PM +0530, Anup Patel wrote:
> > We implement the ONE_REG interface for AIA CSRs as a separate subtype
> > under the CSR ONE_REG interface.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> >  arch/riscv/include/uapi/asm/kvm.h | 8 ++++++++
> >  arch/riscv/kvm/vcpu.c             | 8 ++++++++
> >  2 files changed, 16 insertions(+)
> >
> > diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> > index 182023dc9a51..cbc3e74fa670 100644
> > --- a/arch/riscv/include/uapi/asm/kvm.h
> > +++ b/arch/riscv/include/uapi/asm/kvm.h
> > @@ -79,6 +79,10 @@ struct kvm_riscv_csr {
> >       unsigned long scounteren;
> >  };
> >
> > +/* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > +struct kvm_riscv_aia_csr {
> > +};
> > +
> >  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> >  struct kvm_riscv_timer {
> >       __u64 frequency;
> > @@ -107,6 +111,7 @@ enum KVM_RISCV_ISA_EXT_ID {
> >       KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
> >       KVM_RISCV_ISA_EXT_ZICBOM,
> >       KVM_RISCV_ISA_EXT_ZBB,
>
> Looks like this patch is also based on "[PATCH] RISC-V: KVM: Allow Zbb
> extension for Guest/VM"

Yes, do you want me to change the order of dependency?

Regards,
Anup

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/8] RISC-V: Detect AIA CSRs from ISA string
  2023-04-03  9:39   ` Conor Dooley
@ 2023-04-03 12:05     ` Anup Patel
  0 siblings, 0 replies; 30+ messages in thread
From: Anup Patel @ 2023-04-03 12:05 UTC (permalink / raw)
  To: Conor Dooley
  Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
	Paul Walmsley, Andrew Jones, kvm, kvm-riscv, linux-riscv,
	linux-kernel, Atish Patra

On Mon, Apr 3, 2023 at 3:10 PM Conor Dooley <conor.dooley@microchip.com> wrote:
>
> On Mon, Apr 03, 2023 at 03:03:04PM +0530, Anup Patel wrote:
>
> > diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> > index 59d58ee0f68d..1b13a5823b90 100644
> > --- a/arch/riscv/kernel/cpufeature.c
> > +++ b/arch/riscv/kernel/cpufeature.c
> > @@ -221,8 +221,10 @@ void __init riscv_fill_hwcap(void)
> >                               }
> >                       } else {
> >                               /* sorted alphabetically */
>                                    ^^^^^^^^^^^^^^^^^^^^^
>
> > +                             SET_ISA_EXT_MAP("ssaia", RISCV_ISA_EXT_SSAIA);
> >                               SET_ISA_EXT_MAP("sscofpmf", RISCV_ISA_EXT_SSCOFPMF);
> >                               SET_ISA_EXT_MAP("sstc", RISCV_ISA_EXT_SSTC);
> > +                             SET_ISA_EXT_MAP("smaia", RISCV_ISA_EXT_SMAIA);
>
> This entry has been added in an incorrect order chief :/

Okay, I will update in the next revision.

>
> >                               SET_ISA_EXT_MAP("svinval", RISCV_ISA_EXT_SVINVAL);
> >                               SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
> >                               SET_ISA_EXT_MAP("zbb", RISCV_ISA_EXT_ZBB);

Regards,
Anup

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 5/8] RISC-V: KVM: Implement subtype for CSR ONE_REG interface
  2023-04-03  9:33 ` [PATCH v3 5/8] RISC-V: KVM: Implement subtype for CSR ONE_REG interface Anup Patel
@ 2023-04-03 12:18   ` Andrew Jones
  2023-04-04  0:54   ` Atish Patra
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2023-04-03 12:18 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 03, 2023 at 03:03:07PM +0530, Anup Patel wrote:
> To make the CSR ONE_REG interface extensible, we implement a subtype
> for the CSR ONE_REG IDs. The existing CSR ONE_REG IDs are treated
> as subtype = 0 (aka General CSRs).
> 
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/uapi/asm/kvm.h |  3 +-
>  arch/riscv/kvm/vcpu.c             | 88 +++++++++++++++++++++++--------
>  2 files changed, 69 insertions(+), 22 deletions(-)
>

Reviewed-by: Andrew Jones <ajones@ventanamicro.com>

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-03 12:04     ` Anup Patel
@ 2023-04-03 12:23       ` Andrew Jones
  2023-04-04 11:52         ` Andrew Jones
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Jones @ 2023-04-03 12:23 UTC (permalink / raw)
  To: Anup Patel
  Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
	Paul Walmsley, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 03, 2023 at 05:34:57PM +0530, Anup Patel wrote:
> On Mon, Apr 3, 2023 at 5:01 PM Andrew Jones <ajones@ventanamicro.com> wrote:
> >
> > On Mon, Apr 03, 2023 at 03:03:08PM +0530, Anup Patel wrote:
> > > We implement the ONE_REG interface for AIA CSRs as a separate subtype
> > > under the CSR ONE_REG interface.
> > >
> > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > ---
> > >  arch/riscv/include/uapi/asm/kvm.h | 8 ++++++++
> > >  arch/riscv/kvm/vcpu.c             | 8 ++++++++
> > >  2 files changed, 16 insertions(+)
> > >
> > > diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> > > index 182023dc9a51..cbc3e74fa670 100644
> > > --- a/arch/riscv/include/uapi/asm/kvm.h
> > > +++ b/arch/riscv/include/uapi/asm/kvm.h
> > > @@ -79,6 +79,10 @@ struct kvm_riscv_csr {
> > >       unsigned long scounteren;
> > >  };
> > >
> > > +/* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > > +struct kvm_riscv_aia_csr {
> > > +};
> > > +
> > >  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > >  struct kvm_riscv_timer {
> > >       __u64 frequency;
> > > @@ -107,6 +111,7 @@ enum KVM_RISCV_ISA_EXT_ID {
> > >       KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
> > >       KVM_RISCV_ISA_EXT_ZICBOM,
> > >       KVM_RISCV_ISA_EXT_ZBB,
> >
> > Looks like this patch is also based on "[PATCH] RISC-V: KVM: Allow Zbb
> > extension for Guest/VM"
> 
> Yes, do you want me to change the order of dependency?

It's probably best if neither depends on the other, since they're
independent, but otherwise the order doesn't matter. It'd be nice to call
the order out in the cover letter to give patchwork a chance at automatic
build testing, though. To call it out, I believe adding

Based-on: 20230401112730.2105240-1-apatel@ventanamicro.com

to the cover letter should work.

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-03  9:33 ` [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs Anup Patel
  2023-04-03 11:31   ` Andrew Jones
@ 2023-04-03 12:27   ` Andrew Jones
  2023-04-04  0:55   ` Atish Patra
  2 siblings, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2023-04-03 12:27 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 03, 2023 at 03:03:08PM +0530, Anup Patel wrote:
> We implement the ONE_REG interface for AIA CSRs as a separate subtype
> under the CSR ONE_REG interface.
> 
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/uapi/asm/kvm.h | 8 ++++++++
>  arch/riscv/kvm/vcpu.c             | 8 ++++++++
>  2 files changed, 16 insertions(+)
>

Reviewed-by: Andrew Jones <ajones@ventanamicro.com>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 7/8] RISC-V: KVM: Virtualize per-HART AIA CSRs
  2023-04-03  9:33 ` [PATCH v3 7/8] RISC-V: KVM: Virtualize per-HART " Anup Patel
@ 2023-04-03 16:37   ` Andrew Jones
  2023-04-04 13:31     ` Anup Patel
  2023-04-04 13:54     ` Anup Patel
  0 siblings, 2 replies; 30+ messages in thread
From: Andrew Jones @ 2023-04-03 16:37 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 03, 2023 at 03:03:09PM +0530, Anup Patel wrote:
> The AIA specification introduces per-HART AIA CSRs which primarily
> support:
> * 64 local interrupts on both RV64 and RV32
> * priority for each of the 64 local interrupts
> * interrupt filtering for local interrupts
> 
> This patch virtualizes the above mentioned AIA CSRs and also extends
> the ONE_REG interface to allow user space to save/restore the
> Guest/VM view of these CSRs.
> 
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/asm/kvm_aia.h  |  88 +++++----
>  arch/riscv/include/asm/kvm_host.h |   7 +-
>  arch/riscv/include/uapi/asm/kvm.h |   7 +
>  arch/riscv/kvm/aia.c              | 317 ++++++++++++++++++++++++++++++
>  arch/riscv/kvm/vcpu.c             |  53 +++--
>  5 files changed, 415 insertions(+), 57 deletions(-)
> 
> diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
> index 258a835d4c32..1de0717112e5 100644
> --- a/arch/riscv/include/asm/kvm_aia.h
> +++ b/arch/riscv/include/asm/kvm_aia.h

nit: Generating the diff with --patience makes this a bit easier to read,
and/or several of the stub functions could have been directly put in
arch/riscv/kvm/aia.c in the skeleton patch to avoid so many changes in
this one.
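
(For the former: passing --patience to git-format-patch when
regenerating the series should be enough.)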

> @@ -12,6 +12,7 @@
>  
>  #include <linux/jump_label.h>
>  #include <linux/kvm_types.h>
> +#include <asm/csr.h>
>  
>  struct kvm_aia {
>  	/* In-kernel irqchip created */
> @@ -21,7 +22,22 @@ struct kvm_aia {
>  	bool		initialized;
>  };
>  
> +struct kvm_vcpu_aia_csr {
> +	unsigned long vsiselect;
> +	unsigned long hviprio1;
> +	unsigned long hviprio2;
> +	unsigned long vsieh;
> +	unsigned long hviph;
> +	unsigned long hviprio1h;
> +	unsigned long hviprio2h;
> +};
> +
>  struct kvm_vcpu_aia {
> +	/* CPU AIA CSR context of Guest VCPU */
> +	struct kvm_vcpu_aia_csr guest_csr;
> +
> +	/* CPU AIA CSR context upon Guest VCPU reset */
> +	struct kvm_vcpu_aia_csr guest_reset_csr;
>  };
>  
>  #define kvm_riscv_aia_initialized(k)	((k)->arch.aia.initialized)
> @@ -32,48 +48,50 @@ DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
>  #define kvm_riscv_aia_available() \
>  	static_branch_unlikely(&kvm_riscv_aia_available)
>  
> -static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> -{
> -}
> -
> -static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> -{
> -}
> -
> -static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu,
> -						     u64 mask)
> -{
> -	return false;
> -}
> -
> -static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
> -{
> -}
> -
> -static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
> -{
> -}
> -
> -static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
> +#define KVM_RISCV_AIA_IMSIC_TOPEI	(ISELECT_MASK + 1)
> +static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
> +					       unsigned long isel,
> +					       unsigned long *val,
> +					       unsigned long new_val,
> +					       unsigned long wr_mask)
>  {
> +	return 0;
>  }
>  
> -static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> -					     unsigned long reg_num,
> -					     unsigned long *out_val)
> +#ifdef CONFIG_32BIT
> +void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu);
> +void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu);
> +#else
> +static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
>  {
> -	*out_val = 0;
> -	return 0;
>  }
> -
> -static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> -					     unsigned long reg_num,
> -					     unsigned long val)
> +static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
>  {
> -	return 0;
>  }
> -
> -#define KVM_RISCV_VCPU_AIA_CSR_FUNCS
> +#endif
> +bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
> +
> +void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu);
> +void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu);
> +void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu);
> +int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> +			       unsigned long reg_num,
> +			       unsigned long *out_val);
> +int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> +			       unsigned long reg_num,
> +			       unsigned long val);
> +
> +int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
> +				 unsigned int csr_num,
> +				 unsigned long *val,
> +				 unsigned long new_val,
> +				 unsigned long wr_mask);
> +int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
> +				unsigned long *val, unsigned long new_val,
> +				unsigned long wr_mask);
> +#define KVM_RISCV_VCPU_AIA_CSR_FUNCS \
> +{ .base = CSR_SIREG,      .count = 1, .func = kvm_riscv_vcpu_aia_rmw_ireg }, \
> +{ .base = CSR_STOPEI,     .count = 1, .func = kvm_riscv_vcpu_aia_rmw_topei },
>  
>  static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu)
>  {
> diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> index 3157cf748df1..ee0acccb1d3b 100644
> --- a/arch/riscv/include/asm/kvm_host.h
> +++ b/arch/riscv/include/asm/kvm_host.h
> @@ -204,8 +204,9 @@ struct kvm_vcpu_arch {
>  	 * in irqs_pending. Our approach is modeled around multiple producer
>  	 * and single consumer problem where the consumer is the VCPU itself.
>  	 */
> -	unsigned long irqs_pending;
> -	unsigned long irqs_pending_mask;
> +#define KVM_RISCV_VCPU_NR_IRQS	64
> +	DECLARE_BITMAP(irqs_pending, KVM_RISCV_VCPU_NR_IRQS);
> +	DECLARE_BITMAP(irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS);

I'd prefer this ulong to bitmap change, and all its repercussions, be done
in a separate patch.

>  
>  	/* VCPU Timer */
>  	struct kvm_vcpu_timer timer;
> @@ -334,7 +335,7 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
>  int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
>  void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu);
>  void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu);
> -bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask);
> +bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
>  void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu);
>  void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu);
>  
> diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> index cbc3e74fa670..c517e70ddcd6 100644
> --- a/arch/riscv/include/uapi/asm/kvm.h
> +++ b/arch/riscv/include/uapi/asm/kvm.h
> @@ -81,6 +81,13 @@ struct kvm_riscv_csr {
>  
>  /* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
>  struct kvm_riscv_aia_csr {
> +	unsigned long siselect;
> +	unsigned long siprio1;
> +	unsigned long siprio2;
> +	unsigned long sieh;
> +	unsigned long siph;
> +	unsigned long siprio1h;
> +	unsigned long siprio2h;
>  };
>  
>  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
> index 7a633331cd3e..d530912f28bc 100644
> --- a/arch/riscv/kvm/aia.c
> +++ b/arch/riscv/kvm/aia.c
> @@ -26,6 +26,323 @@ static void aia_set_hvictl(bool ext_irq_pending)
>  	csr_write(CSR_HVICTL, hvictl);
>  }
>  
> +#ifdef CONFIG_32BIT
> +void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> +	unsigned long mask, val;
> +
> +	if (!kvm_riscv_aia_available())
> +		return;
> +
> +	if (READ_ONCE(vcpu->arch.irqs_pending_mask[1])) {
> +		mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[1], 0);
> +		val = READ_ONCE(vcpu->arch.irqs_pending[1]) & mask;
> +
> +		csr->hviph &= ~mask;
> +		csr->hviph |= val;
> +	}
> +}
> +
> +void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> +
> +	if (kvm_riscv_aia_available())
> +		csr->vsieh = csr_read(CSR_VSIEH);
> +}
> +#endif
> +
> +bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
> +{
> +	unsigned long seip;
> +
> +	if (!kvm_riscv_aia_available())
> +		return false;
> +
> +#ifdef CONFIG_32BIT
> +	if (READ_ONCE(vcpu->arch.irqs_pending[1]) &
> +	    (vcpu->arch.aia_context.guest_csr.vsieh & (unsigned long)(mask >> 32)))

upper_32_bits()
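
i.e. something like (untested):

	if (READ_ONCE(vcpu->arch.irqs_pending[1]) &
	    (vcpu->arch.aia_context.guest_csr.vsieh & upper_32_bits(mask)))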

> +		return true;
> +#endif
> +
> +	seip = vcpu->arch.guest_csr.vsie;
> +	seip &= (unsigned long)mask;
> +	seip &= BIT(IRQ_S_EXT);

Please add a blank line above the if-statement.

> +	if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)

Shouldn't we check kvm_riscv_aia_initialized() at the top of this
function?

> +		return false;
> +
> +	return false;

return true

But if we move kvm_riscv_aia_initialized() up, then we instead can do

 return !!seip;

> +}
> +
> +void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> +
> +	if (!kvm_riscv_aia_available())
> +		return;
> +
> +#ifdef CONFIG_32BIT
> +	csr_write(CSR_HVIPH, vcpu->arch.aia_context.guest_csr.hviph);
> +#endif
> +	aia_set_hvictl((csr->hvip & BIT(IRQ_VS_EXT)) ? true : false);

The compiler will manage the conversion of csr->hvip & BIT(IRQ_VS_EXT)
to a 1 or 0 since it's getting passed in as a boolean parameter.
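
i.e. this can simply be:

	aia_set_hvictl(csr->hvip & BIT(IRQ_VS_EXT));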

> +}
> +
> +void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
> +{
> +	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> +
> +	if (!kvm_riscv_aia_available())
> +		return;
> +
> +	csr_write(CSR_VSISELECT, csr->vsiselect);
> +	csr_write(CSR_HVIPRIO1, csr->hviprio1);
> +	csr_write(CSR_HVIPRIO2, csr->hviprio2);
> +#ifdef CONFIG_32BIT
> +	csr_write(CSR_VSIEH, csr->vsieh);
> +	csr_write(CSR_HVIPH, csr->hviph);
> +	csr_write(CSR_HVIPRIO1H, csr->hviprio1h);
> +	csr_write(CSR_HVIPRIO2H, csr->hviprio2h);
> +#endif
> +}
> +
> +void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> +
> +	if (!kvm_riscv_aia_available())
> +		return;
> +
> +	csr->vsiselect = csr_read(CSR_VSISELECT);
> +	csr->hviprio1 = csr_read(CSR_HVIPRIO1);
> +	csr->hviprio2 = csr_read(CSR_HVIPRIO2);
> +#ifdef CONFIG_32BIT
> +	csr->vsieh = csr_read(CSR_VSIEH);
> +	csr->hviph = csr_read(CSR_HVIPH);
> +	csr->hviprio1h = csr_read(CSR_HVIPRIO1H);
> +	csr->hviprio2h = csr_read(CSR_HVIPRIO2H);
> +#endif
> +}
> +
> +int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> +			       unsigned long reg_num,
> +			       unsigned long *out_val)
> +{
> +	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> +
> +	if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
> +		return -EINVAL;
> +
> +	*out_val = 0;
> +	if (kvm_riscv_aia_available())
> +		*out_val = ((unsigned long *)csr)[reg_num];
> +
> +	return 0;
> +}
> +
> +int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> +			       unsigned long reg_num,
> +			       unsigned long val)
> +{
> +	struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> +
> +	if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
> +		return -EINVAL;
> +
> +	if (kvm_riscv_aia_available()) {
> +		((unsigned long *)csr)[reg_num] = val;
> +
> +#ifdef CONFIG_32BIT
> +		if (reg_num == KVM_REG_RISCV_CSR_AIA_REG(siph))
> +			WRITE_ONCE(vcpu->arch.irqs_pending_mask[1], 0);
> +#endif
> +	}
> +
> +	return 0;
> +}
> +
> +int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
> +				 unsigned int csr_num,
> +				 unsigned long *val,
> +				 unsigned long new_val,
> +				 unsigned long wr_mask)
> +{
> +	/* If AIA not available then redirect trap */
> +	if (!kvm_riscv_aia_available())
> +		return KVM_INSN_ILLEGAL_TRAP;
> +
> +	/* If AIA not initialized then forward to user space */
> +	if (!kvm_riscv_aia_initialized(vcpu->kvm))
> +		return KVM_INSN_EXIT_TO_USER_SPACE;
> +
> +	return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, KVM_RISCV_AIA_IMSIC_TOPEI,
> +					    val, new_val, wr_mask);
> +}
> +
> +/*
> + * External IRQ priority always read-only zero. This means default
> + * priority order  is always preferred for external IRQs unless
> + * HVICTL.IID == 9 and HVICTL.IPRIO != 0
> + */
> +static int aia_irq2bitpos[] = {
> +0,     8,   -1,   -1,   16,   24,   -1,   -1, /* 0 - 7 */
> +32,   -1,   -1,   -1,   -1,   40,   48,   56, /* 8 - 15 */
> +64,   72,   80,   88,   96,  104,  112,  120, /* 16 - 23 */
> +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 24 - 31 */
> +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 32 - 39 */
> +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 40 - 47 */
> +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 48 - 55 */
> +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 56 - 63 */
> +};
> +
> +static u8 aia_get_iprio8(struct kvm_vcpu *vcpu, unsigned int irq)
> +{
> +	unsigned long hviprio;
> +	int bitpos = aia_irq2bitpos[irq];
> +
> +	if (bitpos < 0)
> +		return 0;
> +
> +	switch (bitpos / BITS_PER_LONG) {
> +	case 0:
> +		hviprio = csr_read(CSR_HVIPRIO1);
> +		break;
> +	case 1:
> +#ifndef CONFIG_32BIT
> +		hviprio = csr_read(CSR_HVIPRIO2);
> +		break;
> +#else
> +		hviprio = csr_read(CSR_HVIPRIO1H);
> +		break;
> +	case 2:
> +		hviprio = csr_read(CSR_HVIPRIO2);
> +		break;
> +	case 3:
> +		hviprio = csr_read(CSR_HVIPRIO2H);
> +		break;
> +#endif
> +	default:
> +		return 0;
> +	};
         ^ unnecessary ;
> +
> +	return (hviprio >> (bitpos % BITS_PER_LONG)) & TOPI_IPRIO_MASK;
> +}
> +
> +static void aia_set_iprio8(struct kvm_vcpu *vcpu, unsigned int irq, u8 prio)
> +{
> +	unsigned long hviprio;
> +	int bitpos = aia_irq2bitpos[irq];
> +
> +	if (bitpos < 0)
> +		return;
> +
> +	switch (bitpos / BITS_PER_LONG) {
> +	case 0:
> +		hviprio = csr_read(CSR_HVIPRIO1);
> +		break;
> +	case 1:
> +#ifndef CONFIG_32BIT
> +		hviprio = csr_read(CSR_HVIPRIO2);
> +		break;
> +#else
> +		hviprio = csr_read(CSR_HVIPRIO1H);
> +		break;
> +	case 2:
> +		hviprio = csr_read(CSR_HVIPRIO2);
> +		break;
> +	case 3:
> +		hviprio = csr_read(CSR_HVIPRIO2H);
> +		break;
> +#endif
> +	default:
> +		return;
> +	};
         ^ unnecessary ;

The csr read switch could be put in a helper and shared between the get
and set functions.
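
e.g. a sketch of such a helper (untested; the name is made up here):

	static unsigned long aia_hviprio_read(int word)
	{
		switch (word) {
		case 0:
			return csr_read(CSR_HVIPRIO1);
	#ifndef CONFIG_32BIT
		case 1:
			return csr_read(CSR_HVIPRIO2);
	#else
		case 1:
			return csr_read(CSR_HVIPRIO1H);
		case 2:
			return csr_read(CSR_HVIPRIO2);
		case 3:
			return csr_read(CSR_HVIPRIO2H);
	#endif
		default:
			return 0;
		}
	}

Both aia_get_iprio8() and the read-modify-write in aia_set_iprio8()
could then call it with bitpos / BITS_PER_LONG; only the write switch
would remain open-coded.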

> +
> +	hviprio &= ~((unsigned long)TOPI_IPRIO_MASK <<

I don't think the (unsigned long) cast is necessary, as I believe
TOPI_IPRIO_MASK is already an unsigned long.

> +		     (bitpos % BITS_PER_LONG));
> +	hviprio |= (unsigned long)prio << (bitpos % BITS_PER_LONG);
> +
> +	switch (bitpos / BITS_PER_LONG) {
> +	case 0:
> +		csr_write(CSR_HVIPRIO1, hviprio);
> +		break;
> +	case 1:
> +#ifndef CONFIG_32BIT
> +		csr_write(CSR_HVIPRIO2, hviprio);
> +		break;
> +#else
> +		csr_write(CSR_HVIPRIO1H, hviprio);
> +		break;
> +	case 2:
> +		csr_write(CSR_HVIPRIO2, hviprio);
> +		break;
> +	case 3:
> +		csr_write(CSR_HVIPRIO2H, hviprio);
> +		break;
> +#endif
> +	default:
> +		return;
> +	};
         ^ unnecessary ;

> +}
> +
> +static int aia_rmw_iprio(struct kvm_vcpu *vcpu, unsigned int isel,
> +			 unsigned long *val, unsigned long new_val,
> +			 unsigned long wr_mask)
> +{
> +	int i, firq, nirqs;

nit: I guessed 'f' is for 'first', but 'first_irq' would make that more
clear from the start.

> +	unsigned long old_val;
> +
> +#ifndef CONFIG_32BIT
> +	if (isel & 0x1)
> +		return KVM_INSN_ILLEGAL_TRAP;
> +#endif
> +
> +	nirqs = 4 * (BITS_PER_LONG / 32);
> +	firq = ((isel - ISELECT_IPRIO0) / (BITS_PER_LONG / 32)) * (nirqs);

This is just firq = 4 * (isel - ISELECT_IPRIO0); the BITS_PER_LONG / 32
divisor cancels against nirqs, and odd isel values were already rejected
above on RV64, so the flooring division loses nothing.

> +
> +	old_val = 0;
> +	for (i = 0; i < nirqs; i++)
> +		old_val |= (unsigned long)aia_get_iprio8(vcpu, firq + i) <<
> +			   (TOPI_IPRIO_BITS * i);

nit: normally would indent to under the (

> +
> +	if (val)
> +		*val = old_val;
> +
> +	if (wr_mask) {
> +		new_val = (old_val & ~wr_mask) | (new_val & wr_mask);
> +		for (i = 0; i < nirqs; i++)
> +			aia_set_iprio8(vcpu, firq + i,
> +			(new_val >> (TOPI_IPRIO_BITS * i)) & TOPI_IPRIO_MASK);

nit: normally would indent to under the (

> +	}
> +
> +	return KVM_INSN_CONTINUE_NEXT_SEPC;
> +}
> +
> +#define IMSIC_FIRST	0x70
> +#define IMSIC_LAST	0xff
> +int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
> +				unsigned long *val, unsigned long new_val,
> +				unsigned long wr_mask)
> +{
> +	unsigned int isel;
> +
> +	/* If AIA not available then redirect trap */
> +	if (!kvm_riscv_aia_available())
> +		return KVM_INSN_ILLEGAL_TRAP;
> +
> +	/* First try to emulate in kernel space */
> +	isel = csr_read(CSR_VSISELECT) & ISELECT_MASK;
> +	if (isel >= ISELECT_IPRIO0 && isel <= ISELECT_IPRIO15)
> +		return aia_rmw_iprio(vcpu, isel, val, new_val, wr_mask);
> +	else if (isel >= IMSIC_FIRST && isel <= IMSIC_LAST &&
> +		 kvm_riscv_aia_initialized(vcpu->kvm))
> +		return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, isel, val, new_val,
> +						    wr_mask);
> +
> +	/* We can't handle it here so redirect to user space */
> +	return KVM_INSN_EXIT_TO_USER_SPACE;
> +}
> +
>  void kvm_riscv_aia_enable(void)
>  {
>  	if (!kvm_riscv_aia_available())
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index 15507cd3a595..30acf3ebdc3d 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -141,8 +141,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
>  
>  	kvm_riscv_vcpu_aia_reset(vcpu);
>  
> -	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
> -	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> +	bitmap_zero(vcpu->arch.irqs_pending, KVM_RISCV_VCPU_NR_IRQS);
> +	bitmap_zero(vcpu->arch.irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS);
>  
>  	kvm_riscv_vcpu_pmu_reset(vcpu);
>  
> @@ -474,6 +474,7 @@ static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu,
>  	if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
>  		kvm_riscv_vcpu_flush_interrupts(vcpu);
>  		*out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
> +		*out_val |= csr->hvip & ~IRQ_LOCAL_MASK;
>  	} else
>  		*out_val = ((unsigned long *)csr)[reg_num];
>  
> @@ -497,7 +498,7 @@ static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu,
>  	((unsigned long *)csr)[reg_num] = reg_val;
>  
>  	if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
> -		WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> +		WRITE_ONCE(vcpu->arch.irqs_pending_mask[0], 0);
>  
>  	return 0;
>  }
> @@ -799,9 +800,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu)
>  	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
>  	unsigned long mask, val;
>  
> -	if (READ_ONCE(vcpu->arch.irqs_pending_mask)) {
> -		mask = xchg_acquire(&vcpu->arch.irqs_pending_mask, 0);
> -		val = READ_ONCE(vcpu->arch.irqs_pending) & mask;
> +	if (READ_ONCE(vcpu->arch.irqs_pending_mask[0])) {
> +		mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[0], 0);
> +		val = READ_ONCE(vcpu->arch.irqs_pending[0]) & mask;
>  
>  		csr->hvip &= ~mask;
>  		csr->hvip |= val;
> @@ -825,12 +826,12 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
>  	if ((csr->hvip ^ hvip) & (1UL << IRQ_VS_SOFT)) {
>  		if (hvip & (1UL << IRQ_VS_SOFT)) {
>  			if (!test_and_set_bit(IRQ_VS_SOFT,
> -					      &v->irqs_pending_mask))
> -				set_bit(IRQ_VS_SOFT, &v->irqs_pending);
> +					      v->irqs_pending_mask))
> +				set_bit(IRQ_VS_SOFT, v->irqs_pending);
>  		} else {
>  			if (!test_and_set_bit(IRQ_VS_SOFT,
> -					      &v->irqs_pending_mask))
> -				clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
> +					      v->irqs_pending_mask))
> +				clear_bit(IRQ_VS_SOFT, v->irqs_pending);
>  		}
>  	}
>  
> @@ -843,14 +844,20 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
>  
>  int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
>  {
> -	if (irq != IRQ_VS_SOFT &&
> +	/*
> +	 * We only allow VS-mode software, timer, and external
> +	 * interrupts when irq is one of the local interrupts
> +	 * defined by RISC-V privilege specification.
> +	 */
> +	if (irq < IRQ_LOCAL_MAX &&
> +	    irq != IRQ_VS_SOFT &&
>  	    irq != IRQ_VS_TIMER &&
>  	    irq != IRQ_VS_EXT)
>  		return -EINVAL;
>  
> -	set_bit(irq, &vcpu->arch.irqs_pending);
> +	set_bit(irq, vcpu->arch.irqs_pending);
>  	smp_mb__before_atomic();
> -	set_bit(irq, &vcpu->arch.irqs_pending_mask);
> +	set_bit(irq, vcpu->arch.irqs_pending_mask);
>  
>  	kvm_vcpu_kick(vcpu);
>  
> @@ -859,25 +866,33 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
>  
>  int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
>  {
> -	if (irq != IRQ_VS_SOFT &&
> +	/*
> +	 * We only allow VS-mode software, timer, and external
> +	 * interrupts when irq is one of the local interrupts
> +	 * defined by RISC-V privilege specification.
> +	 */
> +	if (irq < IRQ_LOCAL_MAX &&
> +	    irq != IRQ_VS_SOFT &&
>  	    irq != IRQ_VS_TIMER &&
>  	    irq != IRQ_VS_EXT)
>  		return -EINVAL;
>  
> -	clear_bit(irq, &vcpu->arch.irqs_pending);
> +	clear_bit(irq, vcpu->arch.irqs_pending);
>  	smp_mb__before_atomic();
> -	set_bit(irq, &vcpu->arch.irqs_pending_mask);
> +	set_bit(irq, vcpu->arch.irqs_pending_mask);
>  
>  	return 0;
>  }
>  
> -bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask)
> +bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
>  {
>  	unsigned long ie;
>  
>  	ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
> -		<< VSIP_TO_HVIP_SHIFT) & mask;
> -	if (READ_ONCE(vcpu->arch.irqs_pending) & ie)
> +		<< VSIP_TO_HVIP_SHIFT) & (unsigned long)mask;
> +	ie |= vcpu->arch.guest_csr.vsie & ~IRQ_LOCAL_MASK &
> +		(unsigned long)mask;
> +	if (READ_ONCE(vcpu->arch.irqs_pending[0]) & ie)
>  		return true;
>  
>  	/* Check AIA high interrupts */
> -- 
> 2.34.1
>

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 4/8] RISC-V: KVM: Initial skeletal support for AIA
  2023-04-03  9:33 ` [PATCH v3 4/8] RISC-V: KVM: Initial skeletal support for AIA Anup Patel
  2023-04-03 12:00   ` Andrew Jones
@ 2023-04-03 23:49   ` Atish Patra
  2023-04-04  3:22     ` Anup Patel
  1 sibling, 1 reply; 30+ messages in thread
From: Atish Patra @ 2023-04-03 23:49 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Palmer Dabbelt, Paul Walmsley, Andrew Jones,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel,
	Atish Patra

On Mon, Apr 3, 2023 at 3:03 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> To incrementally implement AIA support, we first add minimal skeletal
> support which only compiles and detects AIA hardware support at boot
> time but does not provide any functionality.
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> Reviewed-by: Atish Patra <atishp@rivosinc.com>
> ---
>  arch/riscv/include/asm/hwcap.h    |   6 ++
>  arch/riscv/include/asm/kvm_aia.h  | 109 ++++++++++++++++++++++++++++++
>  arch/riscv/include/asm/kvm_host.h |   7 ++
>  arch/riscv/kvm/Makefile           |   1 +
>  arch/riscv/kvm/aia.c              |  66 ++++++++++++++++++
>  arch/riscv/kvm/main.c             |  22 +++++-
>  arch/riscv/kvm/vcpu.c             |  40 ++++++++++-
>  arch/riscv/kvm/vcpu_insn.c        |   1 +
>  arch/riscv/kvm/vm.c               |   4 ++
>  9 files changed, 252 insertions(+), 4 deletions(-)
>  create mode 100644 arch/riscv/include/asm/kvm_aia.h
>  create mode 100644 arch/riscv/kvm/aia.c
>
> diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
> index 9c8ae4399565..8087e11a5cf8 100644
> --- a/arch/riscv/include/asm/hwcap.h
> +++ b/arch/riscv/include/asm/hwcap.h
> @@ -48,6 +48,12 @@
>  #define RISCV_ISA_EXT_MAX              64
>  #define RISCV_ISA_EXT_NAME_LEN_MAX     32
>
> +#ifdef CONFIG_RISCV_M_MODE
> +#define RISCV_ISA_EXT_SxAIA            RISCV_ISA_EXT_SMAIA
> +#else
> +#define RISCV_ISA_EXT_SxAIA            RISCV_ISA_EXT_SSAIA
> +#endif
> +
>  #ifndef __ASSEMBLY__
>
>  #include <linux/jump_label.h>
> diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
> new file mode 100644
> index 000000000000..258a835d4c32
> --- /dev/null
> +++ b/arch/riscv/include/asm/kvm_aia.h
> @@ -0,0 +1,109 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2021 Western Digital Corporation or its affiliates.
> + * Copyright (C) 2022 Ventana Micro Systems Inc.
> + *
> + * Authors:
> + *     Anup Patel <apatel@ventanamicro.com>
> + */
> +
> +#ifndef __KVM_RISCV_AIA_H
> +#define __KVM_RISCV_AIA_H
> +
> +#include <linux/jump_label.h>
> +#include <linux/kvm_types.h>
> +
> +struct kvm_aia {
> +       /* In-kernel irqchip created */
> +       bool            in_kernel;
> +
> +       /* In-kernel irqchip initialized */
> +       bool            initialized;
> +};
> +
> +struct kvm_vcpu_aia {
> +};
> +
> +#define kvm_riscv_aia_initialized(k)   ((k)->arch.aia.initialized)
> +
> +#define irqchip_in_kernel(k)           ((k)->arch.aia.in_kernel)
> +
> +DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
> +#define kvm_riscv_aia_available() \
> +       static_branch_unlikely(&kvm_riscv_aia_available)
> +
> +static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> +{
> +}
> +
> +static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> +{
> +}
> +
> +static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu,
> +                                                    u64 mask)
> +{
> +       return false;
> +}
> +
> +static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
> +{
> +}
> +
> +static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
> +{
> +}
> +
> +static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
> +{
> +}
> +
> +static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> +                                            unsigned long reg_num,
> +                                            unsigned long *out_val)
> +{
> +       *out_val = 0;
> +       return 0;
> +}
> +
> +static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> +                                            unsigned long reg_num,
> +                                            unsigned long val)
> +{
> +       return 0;
> +}
> +
> +#define KVM_RISCV_VCPU_AIA_CSR_FUNCS
> +
> +static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu)
> +{
> +       return 1;
> +}
> +
> +static inline void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu)
> +{
> +}
> +
> +static inline int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
> +{
> +       return 0;
> +}
> +
> +static inline void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu)
> +{
> +}
> +
> +static inline void kvm_riscv_aia_init_vm(struct kvm *kvm)
> +{
> +}
> +
> +static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm)
> +{
> +}
> +
> +void kvm_riscv_aia_enable(void);
> +void kvm_riscv_aia_disable(void);
> +int kvm_riscv_aia_init(void);
> +void kvm_riscv_aia_exit(void);
> +
> +#endif
> diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> index cc7da66ee0c0..3157cf748df1 100644
> --- a/arch/riscv/include/asm/kvm_host.h
> +++ b/arch/riscv/include/asm/kvm_host.h
> @@ -14,6 +14,7 @@
>  #include <linux/kvm_types.h>
>  #include <linux/spinlock.h>
>  #include <asm/hwcap.h>
> +#include <asm/kvm_aia.h>
>  #include <asm/kvm_vcpu_fp.h>
>  #include <asm/kvm_vcpu_insn.h>
>  #include <asm/kvm_vcpu_sbi.h>
> @@ -94,6 +95,9 @@ struct kvm_arch {
>
>         /* Guest Timer */
>         struct kvm_guest_timer timer;
> +
> +       /* AIA Guest/VM context */
> +       struct kvm_aia aia;
>  };
>
>  struct kvm_cpu_trap {
> @@ -221,6 +225,9 @@ struct kvm_vcpu_arch {
>         /* SBI context */
>         struct kvm_vcpu_sbi_context sbi_context;
>
> +       /* AIA VCPU context */
> +       struct kvm_vcpu_aia aia_context;
> +
>         /* Cache pages needed to program page tables with spinlock held */
>         struct kvm_mmu_memory_cache mmu_page_cache;
>
> diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
> index 278e97c06e0a..8031b8912a0d 100644
> --- a/arch/riscv/kvm/Makefile
> +++ b/arch/riscv/kvm/Makefile
> @@ -26,3 +26,4 @@ kvm-y += vcpu_sbi_replace.o
>  kvm-y += vcpu_sbi_hsm.o
>  kvm-y += vcpu_timer.o
>  kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o
> +kvm-y += aia.o
> diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
> new file mode 100644
> index 000000000000..7a633331cd3e
> --- /dev/null
> +++ b/arch/riscv/kvm/aia.c
> @@ -0,0 +1,66 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2021 Western Digital Corporation or its affiliates.
> + * Copyright (C) 2022 Ventana Micro Systems Inc.
> + *
> + * Authors:
> + *     Anup Patel <apatel@ventanamicro.com>
> + */
> +
> +#include <linux/kvm_host.h>
> +#include <asm/hwcap.h>
> +
> +DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
> +
> +static void aia_set_hvictl(bool ext_irq_pending)
> +{
> +       unsigned long hvictl;
> +
> +       /*
> +        * HVICTL.IID == 9 and HVICTL.IPRIO == 0 represents
> +        * no interrupt in HVICTL.
> +        */
> +
> +       hvictl = (IRQ_S_EXT << HVICTL_IID_SHIFT) & HVICTL_IID;
> +       hvictl |= ext_irq_pending;
> +       csr_write(CSR_HVICTL, hvictl);
> +}
> +
> +void kvm_riscv_aia_enable(void)
> +{
> +       if (!kvm_riscv_aia_available())
> +               return;
> +
> +       aia_set_hvictl(false);
> +       csr_write(CSR_HVIPRIO1, 0x0);
> +       csr_write(CSR_HVIPRIO2, 0x0);
> +#ifdef CONFIG_32BIT
> +       csr_write(CSR_HVIPH, 0x0);
> +       csr_write(CSR_HIDELEGH, 0x0);
> +       csr_write(CSR_HVIPRIO1H, 0x0);
> +       csr_write(CSR_HVIPRIO2H, 0x0);
> +#endif
> +}
> +
> +void kvm_riscv_aia_disable(void)
> +{
> +       if (!kvm_riscv_aia_available())
> +               return;
> +
> +       aia_set_hvictl(false);
> +}
> +
> +int kvm_riscv_aia_init(void)
> +{
> +       if (!riscv_isa_extension_available(NULL, SxAIA))
> +               return -ENODEV;
> +
> +       /* Enable KVM AIA support */
> +       static_branch_enable(&kvm_riscv_aia_available);
> +
> +       return 0;
> +}
> +
> +void kvm_riscv_aia_exit(void)
> +{
> +}
> diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
> index 41ad7639a17b..6396352b4e4d 100644
> --- a/arch/riscv/kvm/main.c
> +++ b/arch/riscv/kvm/main.c
> @@ -44,11 +44,15 @@ int kvm_arch_hardware_enable(void)
>
>         csr_write(CSR_HVIP, 0);
>
> +       kvm_riscv_aia_enable();
> +
>         return 0;
>  }
>
>  void kvm_arch_hardware_disable(void)
>  {
> +       kvm_riscv_aia_disable();
> +
>         /*
>          * After clearing the hideleg CSR, the host kernel will receive
>          * spurious interrupts if hvip CSR has pending interrupts and the
> @@ -63,6 +67,7 @@ void kvm_arch_hardware_disable(void)
>
>  static int __init riscv_kvm_init(void)
>  {
> +       int rc;
>         const char *str;
>
>         if (!riscv_isa_extension_available(NULL, h)) {
> @@ -84,6 +89,10 @@ static int __init riscv_kvm_init(void)
>
>         kvm_riscv_gstage_vmid_detect();
>
> +       rc = kvm_riscv_aia_init();
> +       if (rc && rc != -ENODEV)
> +               return rc;
> +
>         kvm_info("hypervisor extension available\n");
>
>         switch (kvm_riscv_gstage_mode()) {
> @@ -106,12 +115,23 @@ static int __init riscv_kvm_init(void)
>
>         kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
>
> -       return kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> +       if (kvm_riscv_aia_available())
> +               kvm_info("AIA available\n");
> +
> +       rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> +       if (rc) {
> +               kvm_riscv_aia_exit();
> +               return rc;
> +       }
> +
> +       return 0;
>  }
>  module_init(riscv_kvm_init);
>
>  static void __exit riscv_kvm_exit(void)
>  {
> +       kvm_riscv_aia_exit();
> +
>         kvm_exit();
>  }
>  module_exit(riscv_kvm_exit);
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index 02b49cb94561..1fd54ec15622 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -137,6 +137,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
>
>         kvm_riscv_vcpu_timer_reset(vcpu);
>
> +       kvm_riscv_vcpu_aia_reset(vcpu);
> +
>         WRITE_ONCE(vcpu->arch.irqs_pending, 0);
>         WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
>
> @@ -159,6 +161,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
>
>  int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  {
> +       int rc;
>         struct kvm_cpu_context *cntx;
>         struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
>         unsigned long host_isa, i;
> @@ -201,6 +204,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>         /* setup performance monitoring */
>         kvm_riscv_vcpu_pmu_init(vcpu);
>
> +       /* Setup VCPU AIA */
> +       rc = kvm_riscv_vcpu_aia_init(vcpu);
> +       if (rc)
> +               return rc;
> +
>         /* Reset VCPU */
>         kvm_riscv_reset_vcpu(vcpu);
>
> @@ -220,6 +228,9 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
>
>  void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
>  {
> +       /* Cleanup VCPU AIA context */
> +       kvm_riscv_vcpu_aia_deinit(vcpu);
> +
>         /* Cleanup VCPU timer */
>         kvm_riscv_vcpu_timer_deinit(vcpu);
>
> @@ -741,6 +752,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu)
>                 csr->hvip &= ~mask;
>                 csr->hvip |= val;
>         }
> +
> +       /* Flush AIA high interrupts */
> +       kvm_riscv_vcpu_aia_flush_interrupts(vcpu);
>  }
>
>  void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
> @@ -766,6 +780,9 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
>                 }
>         }
>
> +       /* Sync-up AIA high interrupts */
> +       kvm_riscv_vcpu_aia_sync_interrupts(vcpu);
> +
>         /* Sync-up timer CSRs */
>         kvm_riscv_vcpu_timer_sync(vcpu);
>  }
> @@ -802,10 +819,15 @@ int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
>
>  bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask)
>  {
> -       unsigned long ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
> -                           << VSIP_TO_HVIP_SHIFT) & mask;
> +       unsigned long ie;
> +
> +       ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
> +               << VSIP_TO_HVIP_SHIFT) & mask;
> +       if (READ_ONCE(vcpu->arch.irqs_pending) & ie)
> +               return true;
>
> -       return (READ_ONCE(vcpu->arch.irqs_pending) & ie) ? true : false;
> +       /* Check AIA high interrupts */
> +       return kvm_riscv_vcpu_aia_has_interrupts(vcpu, mask);
>  }
>
>  void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu)
> @@ -901,6 +923,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>         kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context,
>                                         vcpu->arch.isa);
>
> +       kvm_riscv_vcpu_aia_load(vcpu, cpu);
> +
>         vcpu->cpu = cpu;
>  }
>
> @@ -910,6 +934,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>
>         vcpu->cpu = -1;
>
> +       kvm_riscv_vcpu_aia_put(vcpu);
> +
>         kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context,
>                                      vcpu->arch.isa);
>         kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);
> @@ -977,6 +1003,7 @@ static void kvm_riscv_update_hvip(struct kvm_vcpu *vcpu)
>         struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
>
>         csr_write(CSR_HVIP, csr->hvip);
> +       kvm_riscv_vcpu_aia_update_hvip(vcpu);
>  }
>
>  /*
> @@ -1051,6 +1078,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>
>                 local_irq_disable();
>
> +               /* Update AIA HW state before entering guest */
> +               ret = kvm_riscv_vcpu_aia_update(vcpu);
> +               if (ret <= 0) {
> +                       local_irq_enable();
> +                       continue;
> +               }
> +

Can we update the AIA HW state with only preemption disabled?
For CoVE (aka AP-TEE), we need interrupts enabled to issue IPIs in
multiple scenarios.
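
A minimal sketch of what I have in mind (illustrative only, and
assuming kvm_riscv_vcpu_aia_update() itself stays safe to call with
just preemption disabled):

		preempt_disable();

		/* Update AIA HW state before entering guest,
		 * with interrupts still enabled */
		ret = kvm_riscv_vcpu_aia_update(vcpu);
		if (ret <= 0) {
			preempt_enable();
			continue;
		}

		local_irq_disable();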

>                 /*
>                  * Ensure we set mode to IN_GUEST_MODE after we disable
>                  * interrupts and before the final VCPU requests check.
> diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
> index f689337b78ff..7a6abed41bc1 100644
> --- a/arch/riscv/kvm/vcpu_insn.c
> +++ b/arch/riscv/kvm/vcpu_insn.c
> @@ -214,6 +214,7 @@ struct csr_func {
>  };
>
>  static const struct csr_func csr_funcs[] = {
> +       KVM_RISCV_VCPU_AIA_CSR_FUNCS
>         KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
>  };
>
> diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
> index 65a964d7e70d..bc03d2ddcb51 100644
> --- a/arch/riscv/kvm/vm.c
> +++ b/arch/riscv/kvm/vm.c
> @@ -41,6 +41,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>                 return r;
>         }
>
> +       kvm_riscv_aia_init_vm(kvm);
> +
>         kvm_riscv_guest_timer_init(kvm);
>
>         return 0;
> @@ -49,6 +51,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>  void kvm_arch_destroy_vm(struct kvm *kvm)
>  {
>         kvm_destroy_vcpus(kvm);
> +
> +       kvm_riscv_aia_destroy_vm(kvm);
>  }
>
>  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> --
> 2.34.1
>


-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 5/8] RISC-V: KVM: Implement subtype for CSR ONE_REG interface
  2023-04-03  9:33 ` [PATCH v3 5/8] RISC-V: KVM: Implement subtype for CSR ONE_REG interface Anup Patel
  2023-04-03 12:18   ` Andrew Jones
@ 2023-04-04  0:54   ` Atish Patra
  1 sibling, 0 replies; 30+ messages in thread
From: Atish Patra @ 2023-04-04  0:54 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Palmer Dabbelt, Paul Walmsley, Andrew Jones,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 3, 2023 at 3:03 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> To make the CSR ONE_REG interface extensible, we implement subtype
> for the CSR ONE_REG IDs. The existing CSR ONE_REG IDs are treated
> as subtype = 0 (aka General CSRs).
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/uapi/asm/kvm.h |  3 +-
>  arch/riscv/kvm/vcpu.c             | 88 +++++++++++++++++++++++--------
>  2 files changed, 69 insertions(+), 22 deletions(-)
>
> diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> index 47a7c3958229..182023dc9a51 100644
> --- a/arch/riscv/include/uapi/asm/kvm.h
> +++ b/arch/riscv/include/uapi/asm/kvm.h
> @@ -65,7 +65,7 @@ struct kvm_riscv_core {
>  #define KVM_RISCV_MODE_S       1
>  #define KVM_RISCV_MODE_U       0
>
> -/* CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> +/* General CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
>  struct kvm_riscv_csr {
>         unsigned long sstatus;
>         unsigned long sie;
> @@ -152,6 +152,7 @@ enum KVM_RISCV_SBI_EXT_ID {
>
>  /* Control and status registers are mapped as type 3 */
>  #define KVM_REG_RISCV_CSR              (0x03 << KVM_REG_RISCV_TYPE_SHIFT)
> +#define KVM_REG_RISCV_CSR_GENERAL      (0x0 << KVM_REG_RISCV_SUBTYPE_SHIFT)
>  #define KVM_REG_RISCV_CSR_REG(name)    \
>                 (offsetof(struct kvm_riscv_csr, name) / sizeof(unsigned long))
>
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index 1fd54ec15622..aca6b4fb7519 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -460,27 +460,72 @@ static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu,
>         return 0;
>  }
>
> +static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu,
> +                                         unsigned long reg_num,
> +                                         unsigned long *out_val)
> +{
> +       struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> +
> +       if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
> +               return -EINVAL;
> +
> +       if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
> +               kvm_riscv_vcpu_flush_interrupts(vcpu);
> +               *out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
> +       } else
> +               *out_val = ((unsigned long *)csr)[reg_num];
> +
> +       return 0;
> +}
> +
> +static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu,
> +                                                unsigned long reg_num,
> +                                                unsigned long reg_val)
> +{
> +       struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> +
> +       if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
> +               return -EINVAL;
> +
> +       if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
> +               reg_val &= VSIP_VALID_MASK;
> +               reg_val <<= VSIP_TO_HVIP_SHIFT;
> +       }
> +
> +       ((unsigned long *)csr)[reg_num] = reg_val;
> +
> +       if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
> +               WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> +
> +       return 0;
> +}
> +
>  static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
>                                       const struct kvm_one_reg *reg)
>  {
> -       struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> +       int rc;
>         unsigned long __user *uaddr =
>                         (unsigned long __user *)(unsigned long)reg->addr;
>         unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
>                                             KVM_REG_SIZE_MASK |
>                                             KVM_REG_RISCV_CSR);
> -       unsigned long reg_val;
> +       unsigned long reg_val, reg_subtype;
>
>         if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
>                 return -EINVAL;
> -       if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
> -               return -EINVAL;
>
> -       if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
> -               kvm_riscv_vcpu_flush_interrupts(vcpu);
> -               reg_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
> -       } else
> -               reg_val = ((unsigned long *)csr)[reg_num];
> +       reg_subtype = reg_num & KVM_REG_RISCV_SUBTYPE_MASK;
> +       reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK;
> +       switch (reg_subtype) {
> +       case KVM_REG_RISCV_CSR_GENERAL:
> +               rc = kvm_riscv_vcpu_general_get_csr(vcpu, reg_num, &reg_val);
> +               break;
> +       default:
> +               rc = -EINVAL;
> +               break;
> +       }
> +       if (rc)
> +               return rc;
>
>         if (copy_to_user(uaddr, &reg_val, KVM_REG_SIZE(reg->id)))
>                 return -EFAULT;
> @@ -491,31 +536,32 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
>  static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
>                                       const struct kvm_one_reg *reg)
>  {
> -       struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> +       int rc;
>         unsigned long __user *uaddr =
>                         (unsigned long __user *)(unsigned long)reg->addr;
>         unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
>                                             KVM_REG_SIZE_MASK |
>                                             KVM_REG_RISCV_CSR);
> -       unsigned long reg_val;
> +       unsigned long reg_val, reg_subtype;
>
>         if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
>                 return -EINVAL;
> -       if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
> -               return -EINVAL;
>
>         if (copy_from_user(&reg_val, uaddr, KVM_REG_SIZE(reg->id)))
>                 return -EFAULT;
>
> -       if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
> -               reg_val &= VSIP_VALID_MASK;
> -               reg_val <<= VSIP_TO_HVIP_SHIFT;
> +       reg_subtype = reg_num & KVM_REG_RISCV_SUBTYPE_MASK;
> +       reg_num &= ~KVM_REG_RISCV_SUBTYPE_MASK;
> +       switch (reg_subtype) {
> +       case KVM_REG_RISCV_CSR_GENERAL:
> +               rc = kvm_riscv_vcpu_general_set_csr(vcpu, reg_num, reg_val);
> +               break;
> +       default:
> +               rc = -EINVAL;
> +               break;
>         }
> -
> -       ((unsigned long *)csr)[reg_num] = reg_val;
> -
> -       if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
> -               WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> +       if (rc)
> +               return rc;
>
>         return 0;
>  }
> --
> 2.34.1
>

Reviewed-by: Atish Patra <atishp@rivosinc.com>

-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-03  9:33 ` [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs Anup Patel
  2023-04-03 11:31   ` Andrew Jones
  2023-04-03 12:27   ` Andrew Jones
@ 2023-04-04  0:55   ` Atish Patra
  2 siblings, 0 replies; 30+ messages in thread
From: Atish Patra @ 2023-04-04  0:55 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Palmer Dabbelt, Paul Walmsley, Andrew Jones,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 3, 2023 at 3:03 PM Anup Patel <apatel@ventanamicro.com> wrote:
>
> We implement ONE_REG interface for AIA CSRs as a separate subtype
> under the CSR ONE_REG interface.
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/uapi/asm/kvm.h | 8 ++++++++
>  arch/riscv/kvm/vcpu.c             | 8 ++++++++
>  2 files changed, 16 insertions(+)
>
> diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> index 182023dc9a51..cbc3e74fa670 100644
> --- a/arch/riscv/include/uapi/asm/kvm.h
> +++ b/arch/riscv/include/uapi/asm/kvm.h
> @@ -79,6 +79,10 @@ struct kvm_riscv_csr {
>         unsigned long scounteren;
>  };
>
> +/* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> +struct kvm_riscv_aia_csr {
> +};
> +
>  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
>  struct kvm_riscv_timer {
>         __u64 frequency;
> @@ -107,6 +111,7 @@ enum KVM_RISCV_ISA_EXT_ID {
>         KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
>         KVM_RISCV_ISA_EXT_ZICBOM,
>         KVM_RISCV_ISA_EXT_ZBB,
> +       KVM_RISCV_ISA_EXT_SSAIA,
>         KVM_RISCV_ISA_EXT_MAX,
>  };
>
> @@ -153,8 +158,11 @@ enum KVM_RISCV_SBI_EXT_ID {
>  /* Control and status registers are mapped as type 3 */
>  #define KVM_REG_RISCV_CSR              (0x03 << KVM_REG_RISCV_TYPE_SHIFT)
>  #define KVM_REG_RISCV_CSR_GENERAL      (0x0 << KVM_REG_RISCV_SUBTYPE_SHIFT)
> +#define KVM_REG_RISCV_CSR_AIA          (0x1 << KVM_REG_RISCV_SUBTYPE_SHIFT)
>  #define KVM_REG_RISCV_CSR_REG(name)    \
>                 (offsetof(struct kvm_riscv_csr, name) / sizeof(unsigned long))
> +#define KVM_REG_RISCV_CSR_AIA_REG(name)        \
> +       (offsetof(struct kvm_riscv_aia_csr, name) / sizeof(unsigned long))
>
>  /* Timer registers are mapped as type 4 */
>  #define KVM_REG_RISCV_TIMER            (0x04 << KVM_REG_RISCV_TYPE_SHIFT)
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index aca6b4fb7519..15507cd3a595 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -58,6 +58,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
>         [KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
>         [KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
>
> +       KVM_ISA_EXT_ARR(SSAIA),
>         KVM_ISA_EXT_ARR(SSTC),
>         KVM_ISA_EXT_ARR(SVINVAL),
>         KVM_ISA_EXT_ARR(SVPBMT),
> @@ -97,6 +98,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
>         case KVM_RISCV_ISA_EXT_C:
>         case KVM_RISCV_ISA_EXT_I:
>         case KVM_RISCV_ISA_EXT_M:
> +       case KVM_RISCV_ISA_EXT_SSAIA:
>         case KVM_RISCV_ISA_EXT_SSTC:
>         case KVM_RISCV_ISA_EXT_SVINVAL:
>         case KVM_RISCV_ISA_EXT_ZIHINTPAUSE:
> @@ -520,6 +522,9 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
>         case KVM_REG_RISCV_CSR_GENERAL:
>                 rc = kvm_riscv_vcpu_general_get_csr(vcpu, reg_num, &reg_val);
>                 break;
> +       case KVM_REG_RISCV_CSR_AIA:
> +               rc = kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, &reg_val);
> +               break;
>         default:
>                 rc = -EINVAL;
>                 break;
> @@ -556,6 +561,9 @@ static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
>         case KVM_REG_RISCV_CSR_GENERAL:
>                 rc = kvm_riscv_vcpu_general_set_csr(vcpu, reg_num, reg_val);
>                 break;
> +       case KVM_REG_RISCV_CSR_AIA:
> +               rc = kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val);
> +               break;
>         default:
>                 rc = -EINVAL;
>                 break;
> --
> 2.34.1
>


Reviewed-by: Atish Patra <atishp@rivosinc.com>
-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 4/8] RISC-V: KVM: Initial skeletal support for AIA
  2023-04-03 23:49   ` Atish Patra
@ 2023-04-04  3:22     ` Anup Patel
  0 siblings, 0 replies; 30+ messages in thread
From: Anup Patel @ 2023-04-04  3:22 UTC (permalink / raw)
  To: Atish Patra
  Cc: Anup Patel, Paolo Bonzini, Palmer Dabbelt, Paul Walmsley,
	Andrew Jones, kvm, kvm-riscv, linux-riscv, linux-kernel,
	Atish Patra

On Tue, Apr 4, 2023 at 5:20 AM Atish Patra <atishp@atishpatra.org> wrote:
>
> On Mon, Apr 3, 2023 at 3:03 PM Anup Patel <apatel@ventanamicro.com> wrote:
> >
> > To incrementally implement AIA support, we first add minimal skeletal
> > support which only compiles and detects AIA hardware support at
> > boot-time but does not provide any functionality.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > Reviewed-by: Atish Patra <atishp@rivosinc.com>
> > ---
> >  arch/riscv/include/asm/hwcap.h    |   6 ++
> >  arch/riscv/include/asm/kvm_aia.h  | 109 ++++++++++++++++++++++++++++++
> >  arch/riscv/include/asm/kvm_host.h |   7 ++
> >  arch/riscv/kvm/Makefile           |   1 +
> >  arch/riscv/kvm/aia.c              |  66 ++++++++++++++++++
> >  arch/riscv/kvm/main.c             |  22 +++++-
> >  arch/riscv/kvm/vcpu.c             |  40 ++++++++++-
> >  arch/riscv/kvm/vcpu_insn.c        |   1 +
> >  arch/riscv/kvm/vm.c               |   4 ++
> >  9 files changed, 252 insertions(+), 4 deletions(-)
> >  create mode 100644 arch/riscv/include/asm/kvm_aia.h
> >  create mode 100644 arch/riscv/kvm/aia.c
> >
> > diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
> > index 9c8ae4399565..8087e11a5cf8 100644
> > --- a/arch/riscv/include/asm/hwcap.h
> > +++ b/arch/riscv/include/asm/hwcap.h
> > @@ -48,6 +48,12 @@
> >  #define RISCV_ISA_EXT_MAX              64
> >  #define RISCV_ISA_EXT_NAME_LEN_MAX     32
> >
> > +#ifdef CONFIG_RISCV_M_MODE
> > +#define RISCV_ISA_EXT_SxAIA            RISCV_ISA_EXT_SMAIA
> > +#else
> > +#define RISCV_ISA_EXT_SxAIA            RISCV_ISA_EXT_SSAIA
> > +#endif
> > +
> >  #ifndef __ASSEMBLY__
> >
> >  #include <linux/jump_label.h>
> > diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
> > new file mode 100644
> > index 000000000000..258a835d4c32
> > --- /dev/null
> > +++ b/arch/riscv/include/asm/kvm_aia.h
> > @@ -0,0 +1,109 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/*
> > + * Copyright (C) 2021 Western Digital Corporation or its affiliates.
> > + * Copyright (C) 2022 Ventana Micro Systems Inc.
> > + *
> > + * Authors:
> > + *     Anup Patel <apatel@ventanamicro.com>
> > + */
> > +
> > +#ifndef __KVM_RISCV_AIA_H
> > +#define __KVM_RISCV_AIA_H
> > +
> > +#include <linux/jump_label.h>
> > +#include <linux/kvm_types.h>
> > +
> > +struct kvm_aia {
> > +       /* In-kernel irqchip created */
> > +       bool            in_kernel;
> > +
> > +       /* In-kernel irqchip initialized */
> > +       bool            initialized;
> > +};
> > +
> > +struct kvm_vcpu_aia {
> > +};
> > +
> > +#define kvm_riscv_aia_initialized(k)   ((k)->arch.aia.initialized)
> > +
> > +#define irqchip_in_kernel(k)           ((k)->arch.aia.in_kernel)
> > +
> > +DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
> > +#define kvm_riscv_aia_available() \
> > +       static_branch_unlikely(&kvm_riscv_aia_available)
> > +
> > +static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> > +{
> > +}
> > +
> > +static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> > +{
> > +}
> > +
> > +static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu,
> > +                                                    u64 mask)
> > +{
> > +       return false;
> > +}
> > +
> > +static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
> > +{
> > +}
> > +
> > +static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
> > +{
> > +}
> > +
> > +static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
> > +{
> > +}
> > +
> > +static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> > +                                            unsigned long reg_num,
> > +                                            unsigned long *out_val)
> > +{
> > +       *out_val = 0;
> > +       return 0;
> > +}
> > +
> > +static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> > +                                            unsigned long reg_num,
> > +                                            unsigned long val)
> > +{
> > +       return 0;
> > +}
> > +
> > +#define KVM_RISCV_VCPU_AIA_CSR_FUNCS
> > +
> > +static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu)
> > +{
> > +       return 1;
> > +}
> > +
> > +static inline void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu)
> > +{
> > +}
> > +
> > +static inline int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
> > +{
> > +       return 0;
> > +}
> > +
> > +static inline void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu)
> > +{
> > +}
> > +
> > +static inline void kvm_riscv_aia_init_vm(struct kvm *kvm)
> > +{
> > +}
> > +
> > +static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm)
> > +{
> > +}
> > +
> > +void kvm_riscv_aia_enable(void);
> > +void kvm_riscv_aia_disable(void);
> > +int kvm_riscv_aia_init(void);
> > +void kvm_riscv_aia_exit(void);
> > +
> > +#endif
> > diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> > index cc7da66ee0c0..3157cf748df1 100644
> > --- a/arch/riscv/include/asm/kvm_host.h
> > +++ b/arch/riscv/include/asm/kvm_host.h
> > @@ -14,6 +14,7 @@
> >  #include <linux/kvm_types.h>
> >  #include <linux/spinlock.h>
> >  #include <asm/hwcap.h>
> > +#include <asm/kvm_aia.h>
> >  #include <asm/kvm_vcpu_fp.h>
> >  #include <asm/kvm_vcpu_insn.h>
> >  #include <asm/kvm_vcpu_sbi.h>
> > @@ -94,6 +95,9 @@ struct kvm_arch {
> >
> >         /* Guest Timer */
> >         struct kvm_guest_timer timer;
> > +
> > +       /* AIA Guest/VM context */
> > +       struct kvm_aia aia;
> >  };
> >
> >  struct kvm_cpu_trap {
> > @@ -221,6 +225,9 @@ struct kvm_vcpu_arch {
> >         /* SBI context */
> >         struct kvm_vcpu_sbi_context sbi_context;
> >
> > +       /* AIA VCPU context */
> > +       struct kvm_vcpu_aia aia_context;
> > +
> >         /* Cache pages needed to program page tables with spinlock held */
> >         struct kvm_mmu_memory_cache mmu_page_cache;
> >
> > diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
> > index 278e97c06e0a..8031b8912a0d 100644
> > --- a/arch/riscv/kvm/Makefile
> > +++ b/arch/riscv/kvm/Makefile
> > @@ -26,3 +26,4 @@ kvm-y += vcpu_sbi_replace.o
> >  kvm-y += vcpu_sbi_hsm.o
> >  kvm-y += vcpu_timer.o
> >  kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o vcpu_sbi_pmu.o
> > +kvm-y += aia.o
> > diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
> > new file mode 100644
> > index 000000000000..7a633331cd3e
> > --- /dev/null
> > +++ b/arch/riscv/kvm/aia.c
> > @@ -0,0 +1,66 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (C) 2021 Western Digital Corporation or its affiliates.
> > + * Copyright (C) 2022 Ventana Micro Systems Inc.
> > + *
> > + * Authors:
> > + *     Anup Patel <apatel@ventanamicro.com>
> > + */
> > +
> > +#include <linux/kvm_host.h>
> > +#include <asm/hwcap.h>
> > +
> > +DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
> > +
> > +static void aia_set_hvictl(bool ext_irq_pending)
> > +{
> > +       unsigned long hvictl;
> > +
> > +       /*
> > +        * HVICTL.IID == 9 and HVICTL.IPRIO == 0 represents
> > +        * no interrupt in HVICTL.
> > +        */
> > +
> > +       hvictl = (IRQ_S_EXT << HVICTL_IID_SHIFT) & HVICTL_IID;
> > +       hvictl |= ext_irq_pending;
> > +       csr_write(CSR_HVICTL, hvictl);
> > +}
> > +
> > +void kvm_riscv_aia_enable(void)
> > +{
> > +       if (!kvm_riscv_aia_available())
> > +               return;
> > +
> > +       aia_set_hvictl(false);
> > +       csr_write(CSR_HVIPRIO1, 0x0);
> > +       csr_write(CSR_HVIPRIO2, 0x0);
> > +#ifdef CONFIG_32BIT
> > +       csr_write(CSR_HVIPH, 0x0);
> > +       csr_write(CSR_HIDELEGH, 0x0);
> > +       csr_write(CSR_HVIPRIO1H, 0x0);
> > +       csr_write(CSR_HVIPRIO2H, 0x0);
> > +#endif
> > +}
> > +
> > +void kvm_riscv_aia_disable(void)
> > +{
> > +       if (!kvm_riscv_aia_available())
> > +               return;
> > +
> > +       aia_set_hvictl(false);
> > +}
> > +
> > +int kvm_riscv_aia_init(void)
> > +{
> > +       if (!riscv_isa_extension_available(NULL, SxAIA))
> > +               return -ENODEV;
> > +
> > +       /* Enable KVM AIA support */
> > +       static_branch_enable(&kvm_riscv_aia_available);
> > +
> > +       return 0;
> > +}
> > +
> > +void kvm_riscv_aia_exit(void)
> > +{
> > +}
> > diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
> > index 41ad7639a17b..6396352b4e4d 100644
> > --- a/arch/riscv/kvm/main.c
> > +++ b/arch/riscv/kvm/main.c
> > @@ -44,11 +44,15 @@ int kvm_arch_hardware_enable(void)
> >
> >         csr_write(CSR_HVIP, 0);
> >
> > +       kvm_riscv_aia_enable();
> > +
> >         return 0;
> >  }
> >
> >  void kvm_arch_hardware_disable(void)
> >  {
> > +       kvm_riscv_aia_disable();
> > +
> >         /*
> >          * After clearing the hideleg CSR, the host kernel will receive
> >          * spurious interrupts if hvip CSR has pending interrupts and the
> > @@ -63,6 +67,7 @@ void kvm_arch_hardware_disable(void)
> >
> >  static int __init riscv_kvm_init(void)
> >  {
> > +       int rc;
> >         const char *str;
> >
> >         if (!riscv_isa_extension_available(NULL, h)) {
> > @@ -84,6 +89,10 @@ static int __init riscv_kvm_init(void)
> >
> >         kvm_riscv_gstage_vmid_detect();
> >
> > +       rc = kvm_riscv_aia_init();
> > +       if (rc && rc != -ENODEV)
> > +               return rc;
> > +
> >         kvm_info("hypervisor extension available\n");
> >
> >         switch (kvm_riscv_gstage_mode()) {
> > @@ -106,12 +115,23 @@ static int __init riscv_kvm_init(void)
> >
> >         kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
> >
> > -       return kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> > +       if (kvm_riscv_aia_available())
> > +               kvm_info("AIA available\n");
> > +
> > +       rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> > +       if (rc) {
> > +               kvm_riscv_aia_exit();
> > +               return rc;
> > +       }
> > +
> > +       return 0;
> >  }
> >  module_init(riscv_kvm_init);
> >
> >  static void __exit riscv_kvm_exit(void)
> >  {
> > +       kvm_riscv_aia_exit();
> > +
> >         kvm_exit();
> >  }
> >  module_exit(riscv_kvm_exit);
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index 02b49cb94561..1fd54ec15622 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -137,6 +137,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
> >
> >         kvm_riscv_vcpu_timer_reset(vcpu);
> >
> > +       kvm_riscv_vcpu_aia_reset(vcpu);
> > +
> >         WRITE_ONCE(vcpu->arch.irqs_pending, 0);
> >         WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> >
> > @@ -159,6 +161,7 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
> >
> >  int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >  {
> > +       int rc;
> >         struct kvm_cpu_context *cntx;
> >         struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
> >         unsigned long host_isa, i;
> > @@ -201,6 +204,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> >         /* setup performance monitoring */
> >         kvm_riscv_vcpu_pmu_init(vcpu);
> >
> > +       /* Setup VCPU AIA */
> > +       rc = kvm_riscv_vcpu_aia_init(vcpu);
> > +       if (rc)
> > +               return rc;
> > +
> >         /* Reset VCPU */
> >         kvm_riscv_reset_vcpu(vcpu);
> >
> > @@ -220,6 +228,9 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
> >
> >  void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
> >  {
> > +       /* Cleanup VCPU AIA context */
> > +       kvm_riscv_vcpu_aia_deinit(vcpu);
> > +
> >         /* Cleanup VCPU timer */
> >         kvm_riscv_vcpu_timer_deinit(vcpu);
> >
> > @@ -741,6 +752,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu)
> >                 csr->hvip &= ~mask;
> >                 csr->hvip |= val;
> >         }
> > +
> > +       /* Flush AIA high interrupts */
> > +       kvm_riscv_vcpu_aia_flush_interrupts(vcpu);
> >  }
> >
> >  void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
> > @@ -766,6 +780,9 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
> >                 }
> >         }
> >
> > +       /* Sync-up AIA high interrupts */
> > +       kvm_riscv_vcpu_aia_sync_interrupts(vcpu);
> > +
> >         /* Sync-up timer CSRs */
> >         kvm_riscv_vcpu_timer_sync(vcpu);
> >  }
> > @@ -802,10 +819,15 @@ int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
> >
> >  bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask)
> >  {
> > -       unsigned long ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
> > -                           << VSIP_TO_HVIP_SHIFT) & mask;
> > +       unsigned long ie;
> > +
> > +       ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
> > +               << VSIP_TO_HVIP_SHIFT) & mask;
> > +       if (READ_ONCE(vcpu->arch.irqs_pending) & ie)
> > +               return true;
> >
> > -       return (READ_ONCE(vcpu->arch.irqs_pending) & ie) ? true : false;
> > +       /* Check AIA high interrupts */
> > +       return kvm_riscv_vcpu_aia_has_interrupts(vcpu, mask);
> >  }
> >
> >  void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu)
> > @@ -901,6 +923,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >         kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context,
> >                                         vcpu->arch.isa);
> >
> > +       kvm_riscv_vcpu_aia_load(vcpu, cpu);
> > +
> >         vcpu->cpu = cpu;
> >  }
> >
> > @@ -910,6 +934,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> >
> >         vcpu->cpu = -1;
> >
> > +       kvm_riscv_vcpu_aia_put(vcpu);
> > +
> >         kvm_riscv_vcpu_guest_fp_save(&vcpu->arch.guest_context,
> >                                      vcpu->arch.isa);
> >         kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);
> > @@ -977,6 +1003,7 @@ static void kvm_riscv_update_hvip(struct kvm_vcpu *vcpu)
> >         struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> >
> >         csr_write(CSR_HVIP, csr->hvip);
> > +       kvm_riscv_vcpu_aia_update_hvip(vcpu);
> >  }
> >
> >  /*
> > @@ -1051,6 +1078,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> >
> >                 local_irq_disable();
> >
> > +               /* Update AIA HW state before entering guest */
> > +               ret = kvm_riscv_vcpu_aia_update(vcpu);
> > +               if (ret <= 0) {
> > +                       local_irq_enable();
> > +                       continue;
> > +               }
> > +
>
> Can we update the AIA HW state with only preemption disabled?
> For CoVE (aka AP-TEE), we need interrupts enabled to issue IPIs in
> multiple scenarios.

Okay, I will update in the next revision.

Regards,
Anup

>
> >                 /*
> >                  * Ensure we set mode to IN_GUEST_MODE after we disable
> >                  * interrupts and before the final VCPU requests check.
> > diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
> > index f689337b78ff..7a6abed41bc1 100644
> > --- a/arch/riscv/kvm/vcpu_insn.c
> > +++ b/arch/riscv/kvm/vcpu_insn.c
> > @@ -214,6 +214,7 @@ struct csr_func {
> >  };
> >
> >  static const struct csr_func csr_funcs[] = {
> > +       KVM_RISCV_VCPU_AIA_CSR_FUNCS
> >         KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
> >  };
> >
> > diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
> > index 65a964d7e70d..bc03d2ddcb51 100644
> > --- a/arch/riscv/kvm/vm.c
> > +++ b/arch/riscv/kvm/vm.c
> > @@ -41,6 +41,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >                 return r;
> >         }
> >
> > +       kvm_riscv_aia_init_vm(kvm);
> > +
> >         kvm_riscv_guest_timer_init(kvm);
> >
> >         return 0;
> > @@ -49,6 +51,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >  void kvm_arch_destroy_vm(struct kvm *kvm)
> >  {
> >         kvm_destroy_vcpus(kvm);
> > +
> > +       kvm_riscv_aia_destroy_vm(kvm);
> >  }
> >
> >  int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> > --
> > 2.34.1
> >
>
>
> --
> Regards,
> Atish

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-03 12:23       ` Andrew Jones
@ 2023-04-04 11:52         ` Andrew Jones
  2023-04-04 11:58           ` Conor Dooley
  2023-04-04 12:03           ` Andrew Jones
  0 siblings, 2 replies; 30+ messages in thread
From: Andrew Jones @ 2023-04-04 11:52 UTC (permalink / raw)
  To: Anup Patel
  Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
	Paul Walmsley, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 03, 2023 at 02:23:01PM +0200, Andrew Jones wrote:
> On Mon, Apr 03, 2023 at 05:34:57PM +0530, Anup Patel wrote:
> > On Mon, Apr 3, 2023 at 5:01 PM Andrew Jones <ajones@ventanamicro.com> wrote:
> > >
> > > On Mon, Apr 03, 2023 at 03:03:08PM +0530, Anup Patel wrote:
> > > > We implement ONE_REG interface for AIA CSRs as a separate subtype
> > > > under the CSR ONE_REG interface.
> > > >
> > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > ---
> > > >  arch/riscv/include/uapi/asm/kvm.h | 8 ++++++++
> > > >  arch/riscv/kvm/vcpu.c             | 8 ++++++++
> > > >  2 files changed, 16 insertions(+)
> > > >
> > > > diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> > > > index 182023dc9a51..cbc3e74fa670 100644
> > > > --- a/arch/riscv/include/uapi/asm/kvm.h
> > > > +++ b/arch/riscv/include/uapi/asm/kvm.h
> > > > @@ -79,6 +79,10 @@ struct kvm_riscv_csr {
> > > >       unsigned long scounteren;
> > > >  };
> > > >
> > > > +/* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > > > +struct kvm_riscv_aia_csr {
> > > > +};
> > > > +
> > > >  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > > >  struct kvm_riscv_timer {
> > > >       __u64 frequency;
> > > > @@ -107,6 +111,7 @@ enum KVM_RISCV_ISA_EXT_ID {
> > > >       KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
> > > >       KVM_RISCV_ISA_EXT_ZICBOM,
> > > >       KVM_RISCV_ISA_EXT_ZBB,
> > >
> > > Looks like this patch is also based on "[PATCH] RISC-V: KVM: Allow Zbb
> > > extension for Guest/VM"
> > 
> > Yes, do you want me to change the order of dependency?
> 
> It's probably best if neither depend on each other, since they're
> independent, but otherwise the order doesn't matter. It'd be nice to call
> the order out in the cover letter to give patchwork a chance at automatic
> build testing, though. To call it out, I believe adding
> 
> Based-on: 20230401112730.2105240-1-apatel@ventanamicro.com
> 
> to the cover letter should work.

I also just noticed that this is based on "RISC-V: KVM: Add ONE_REG
interface to enable/disable SBI extensions"[1], and it needs to be,
in order to pick up the KVM_REG_RISCV_SUBTYPE_MASK and
KVM_REG_RISCV_SUBTYPE_SHIFT defines. It'd be good to call that
patch out with Based-on.

[1]: 20230331174542.2067560-2-apatel@ventanamicro.com

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-04 11:52         ` Andrew Jones
@ 2023-04-04 11:58           ` Conor Dooley
  2023-04-05  9:28             ` Conor Dooley
  2023-04-04 12:03           ` Andrew Jones
  1 sibling, 1 reply; 30+ messages in thread
From: Conor Dooley @ 2023-04-04 11:58 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Anup Patel, Anup Patel, Paolo Bonzini, Atish Patra,
	Palmer Dabbelt, Paul Walmsley, kvm, kvm-riscv, linux-riscv,
	linux-kernel, bjorn

On Tue, Apr 04, 2023 at 01:52:43PM +0200, Andrew Jones wrote:
> On Mon, Apr 03, 2023 at 02:23:01PM +0200, Andrew Jones wrote:

> > It's probably best if neither depends on the other, since they're
> > independent, but otherwise the order doesn't matter. It'd be nice to call
> > the order out in the cover letter to give patchwork a chance at automatic
> > build testing, though. To call it out, I believe adding
> > 
> > Based-on: 20230401112730.2105240-1-apatel@ventanamicro.com
> > 
> > to the cover letter should work.
> 
> I also just noticed that this is based on "RISC-V: KVM: Add ONE_REG
> interface to enable/disable SBI extensions"[1], and it needs to be,
> in order to pick up the KVM_REG_RISCV_SUBTYPE_MASK and
> KVM_REG_RISCV_SUBTYPE_SHIFT defines. It'd be good to call that
> patch out with Based-on.
> 
> [1]: 20230331174542.2067560-2-apatel@ventanamicro.com

I've been waiting for a review on that for a while... It's been 3
weeks, so just gonna merge it and see what breaks!

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-04 11:52         ` Andrew Jones
  2023-04-04 11:58           ` Conor Dooley
@ 2023-04-04 12:03           ` Andrew Jones
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2023-04-04 12:03 UTC (permalink / raw)
  To: Anup Patel
  Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
	Paul Walmsley, kvm, kvm-riscv, linux-riscv, linux-kernel

On Tue, Apr 04, 2023 at 01:52:43PM +0200, Andrew Jones wrote:
> On Mon, Apr 03, 2023 at 02:23:01PM +0200, Andrew Jones wrote:
> > On Mon, Apr 03, 2023 at 05:34:57PM +0530, Anup Patel wrote:
> > > On Mon, Apr 3, 2023 at 5:01 PM Andrew Jones <ajones@ventanamicro.com> wrote:
> > > >
> > > > On Mon, Apr 03, 2023 at 03:03:08PM +0530, Anup Patel wrote:
> > > > > We implement ONE_REG interface for AIA CSRs as a separate subtype
> > > > > under the CSR ONE_REG interface.
> > > > >
> > > > > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > > > > ---
> > > > >  arch/riscv/include/uapi/asm/kvm.h | 8 ++++++++
> > > > >  arch/riscv/kvm/vcpu.c             | 8 ++++++++
> > > > >  2 files changed, 16 insertions(+)
> > > > >
> > > > > diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> > > > > index 182023dc9a51..cbc3e74fa670 100644
> > > > > --- a/arch/riscv/include/uapi/asm/kvm.h
> > > > > +++ b/arch/riscv/include/uapi/asm/kvm.h
> > > > > @@ -79,6 +79,10 @@ struct kvm_riscv_csr {
> > > > >       unsigned long scounteren;
> > > > >  };
> > > > >
> > > > > +/* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > > > > +struct kvm_riscv_aia_csr {
> > > > > +};
> > > > > +
> > > > >  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > > > >  struct kvm_riscv_timer {
> > > > >       __u64 frequency;
> > > > > @@ -107,6 +111,7 @@ enum KVM_RISCV_ISA_EXT_ID {
> > > > >       KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
> > > > >       KVM_RISCV_ISA_EXT_ZICBOM,
> > > > >       KVM_RISCV_ISA_EXT_ZBB,
> > > >
> > > > Looks like this patch is also based on "[PATCH] RISC-V: KVM: Allow Zbb
> > > > extension for Guest/VM"
> > > 
> > > Yes, do you want me to change the order of dependency?
> > 
> > It's probably best if neither depends on the other, since they're
> > independent, but otherwise the order doesn't matter. It'd be nice to call
> > the order out in the cover letter to give patchwork a chance at automatic
> > build testing, though. To call it out, I believe adding
> > 
> > Based-on: 20230401112730.2105240-1-apatel@ventanamicro.com
> > 
> > to the cover letter should work.
> 
> I also just noticed that this is based on "RISC-V: KVM: Add ONE_REG
> interface to enable/disable SBI extensions"[1], and it needs to be,
> in order to pick up the KVM_REG_RISCV_SUBTYPE_MASK and
> KVM_REG_RISCV_SUBTYPE_SHIFT defines. It'd be good to call that
> patch out with Based-on.
> 
> [1]: 20230331174542.2067560-2-apatel@ventanamicro.com

And "RISC-V IPI Improvements",
20230328035223.1480939-1-apatel@ventanamicro.com, which is required
for riscv_get_intc_hwnode()
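
So, for v4, I believe calling out all three in the cover letter should
work, e.g. (message-IDs taken from the links above):

  Based-on: 20230401112730.2105240-1-apatel@ventanamicro.com
  Based-on: 20230331174542.2067560-2-apatel@ventanamicro.com
  Based-on: 20230328035223.1480939-1-apatel@ventanamicro.com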

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 8/8] RISC-V: KVM: Implement guest external interrupt line management
  2023-04-03  9:33 ` [PATCH v3 8/8] RISC-V: KVM: Implement guest external interrupt line management Anup Patel
@ 2023-04-04 12:45   ` Andrew Jones
  2023-04-04 13:52     ` Anup Patel
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Jones @ 2023-04-04 12:45 UTC (permalink / raw)
  To: Anup Patel
  Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
	Anup Patel, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 03, 2023 at 03:03:10PM +0530, Anup Patel wrote:
> The RISC-V host will have one guest external interrupt line for each
> VS-level IMSIC associated with a HART. The guest external interrupt
> lines are per-HART resources, and the hypervisor can use the HGEIE,
> HGEIP, and HIE CSRs to manage these guest external interrupt lines.
> 
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
>  arch/riscv/include/asm/kvm_aia.h |  10 ++
>  arch/riscv/kvm/aia.c             | 241 +++++++++++++++++++++++++++++++
>  arch/riscv/kvm/main.c            |   3 +-
>  arch/riscv/kvm/vcpu.c            |   2 +
>  4 files changed, 255 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
> index 1de0717112e5..0938e0cadf80 100644
> --- a/arch/riscv/include/asm/kvm_aia.h
> +++ b/arch/riscv/include/asm/kvm_aia.h
> @@ -44,10 +44,15 @@ struct kvm_vcpu_aia {
>  
>  #define irqchip_in_kernel(k)		((k)->arch.aia.in_kernel)
>  
> +extern unsigned int kvm_riscv_aia_nr_hgei;
>  DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
>  #define kvm_riscv_aia_available() \
>  	static_branch_unlikely(&kvm_riscv_aia_available)
>  
> +static inline void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
> +{
> +}
> +
>  #define KVM_RISCV_AIA_IMSIC_TOPEI	(ISELECT_MASK + 1)
>  static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
>  					       unsigned long isel,
> @@ -119,6 +124,11 @@ static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm)
>  {
>  }
>  
> +int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
> +			     void __iomem **hgei_va, phys_addr_t *hgei_pa);
> +void kvm_riscv_aia_free_hgei(int cpu, int hgei);
> +void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable);
> +
>  void kvm_riscv_aia_enable(void);
>  void kvm_riscv_aia_disable(void);
>  int kvm_riscv_aia_init(void);
> diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
> index d530912f28bc..1264783e7c4d 100644
> --- a/arch/riscv/kvm/aia.c
> +++ b/arch/riscv/kvm/aia.c
> @@ -7,11 +7,46 @@
>   *	Anup Patel <apatel@ventanamicro.com>
>   */
>  
> +#include <linux/bitops.h>
> +#include <linux/irq.h>
> +#include <linux/irqdomain.h>
>  #include <linux/kvm_host.h>
> +#include <linux/percpu.h>
> +#include <linux/spinlock.h>
>  #include <asm/hwcap.h>
>  
> +struct aia_hgei_control {
> +	raw_spinlock_t lock;
> +	unsigned long free_bitmap;
> +	struct kvm_vcpu *owners[BITS_PER_LONG];
> +};
> +static DEFINE_PER_CPU(struct aia_hgei_control, aia_hgei);
> +static int hgei_parent_irq;
> +
> +unsigned int kvm_riscv_aia_nr_hgei;
>  DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
>  
> +static int aia_find_hgei(struct kvm_vcpu *owner)
> +{
> +	int i, hgei;
> +	unsigned long flags;
> +	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
> +
> +	raw_spin_lock_irqsave(&hgctrl->lock, flags);
> +
> +	hgei = -1;
> +	for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) {
> +		if (hgctrl->owners[i] == owner) {
> +			hgei = i;
> +			break;
> +		}
> +	}
> +
> +	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> +
> +	return hgei;
> +}
> +
>  static void aia_set_hvictl(bool ext_irq_pending)
>  {
>  	unsigned long hvictl;
> @@ -55,6 +90,7 @@ void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
>  
>  bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
>  {
> +	int hgei;
>  	unsigned long seip;
>  
>  	if (!kvm_riscv_aia_available())
> @@ -72,6 +108,10 @@ bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
>  	if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
>  		return false;
>  
> +	hgei = aia_find_hgei(vcpu);
> +	if (hgei > 0)
> +		return (csr_read(CSR_HGEIP) & BIT(hgei)) ? true : false;

nit: return !!(csr_read(CSR_HGEIP) & BIT(hgei)) is a bit less verbose.

> +
>  	return false;
>  }
>  
> @@ -343,6 +383,144 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
>  	return KVM_INSN_EXIT_TO_USER_SPACE;
>  }
>  
> +int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
> +			     void __iomem **hgei_va, phys_addr_t *hgei_pa)
> +{
> +	int ret = -ENOENT;
> +	unsigned long flags;
> +	struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
> +
> +	if (!kvm_riscv_aia_available())
> +		return -ENODEV;
> +	if (!hgctrl)
> +		return -ENODEV;

nit:

if (!kvm_riscv_aia_available() || !hgctrl)
   return -ENODEV;

> +
> +	raw_spin_lock_irqsave(&hgctrl->lock, flags);
> +
> +	if (hgctrl->free_bitmap) {
> +		ret = __ffs(hgctrl->free_bitmap);
> +		hgctrl->free_bitmap &= ~BIT(ret);
> +		hgctrl->owners[ret] = owner;
> +	}
> +
> +	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> +
> +	/* TODO: To be updated later by AIA in-kernel irqchip support */
> +	if (hgei_va)
> +		*hgei_va = NULL;
> +	if (hgei_pa)
> +		*hgei_pa = 0;
> +
> +	return ret;
> +}
> +
> +void kvm_riscv_aia_free_hgei(int cpu, int hgei)
> +{
> +	unsigned long flags;
> +	struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
> +
> +	if (!kvm_riscv_aia_available() || !hgctrl)
> +		return;
> +
> +	raw_spin_lock_irqsave(&hgctrl->lock, flags);
> +
> +	if (hgei > 0 && hgei <= kvm_riscv_aia_nr_hgei) {
> +		if (!(hgctrl->free_bitmap & BIT(hgei))) {
> +			hgctrl->free_bitmap |= BIT(hgei);
> +			hgctrl->owners[hgei] = NULL;
> +		}
> +	}
> +
> +	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> +}
> +
> +void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable)
> +{
> +	int hgei;
> +
> +	if (!kvm_riscv_aia_available())
> +		return;
> +
> +	hgei = aia_find_hgei(owner);
> +	if (hgei > 0) {
> +		if (enable)
> +			csr_set(CSR_HGEIE, BIT(hgei));
> +		else
> +			csr_clear(CSR_HGEIE, BIT(hgei));
> +	}
> +}
> +
> +static irqreturn_t hgei_interrupt(int irq, void *dev_id)
> +{
> +	int i;
> +	unsigned long hgei_mask, flags;
> +	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
> +
> +	hgei_mask = csr_read(CSR_HGEIP) & csr_read(CSR_HGEIE);
> +	csr_clear(CSR_HGEIE, hgei_mask);
> +
> +	raw_spin_lock_irqsave(&hgctrl->lock, flags);
> +
> +	for_each_set_bit(i, &hgei_mask, BITS_PER_LONG) {
> +		if (hgctrl->owners[i])
> +			kvm_vcpu_kick(hgctrl->owners[i]);
> +	}
> +
> +	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static int aia_hgei_init(void)
> +{
> +	int cpu, rc;
> +	struct irq_domain *domain;
> +	struct aia_hgei_control *hgctrl;
> +
> +	/* Initialize per-CPU guest external interrupt line management */
> +	for_each_possible_cpu(cpu) {
> +		hgctrl = per_cpu_ptr(&aia_hgei, cpu);
> +		raw_spin_lock_init(&hgctrl->lock);
> +		if (kvm_riscv_aia_nr_hgei) {
> +			hgctrl->free_bitmap =
> +				BIT(kvm_riscv_aia_nr_hgei + 1) - 1;
> +			hgctrl->free_bitmap &= ~BIT(0);
> +		} else
> +			hgctrl->free_bitmap = 0;
> +	}
> +
> +	/* Find INTC irq domain */
> +	domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(),
> +					  DOMAIN_BUS_ANY);
> +	if (!domain) {
> +		kvm_err("unable to find INTC domain\n");
> +		return -ENOENT;
> +	}
> +
> +	/* Map per-CPU SGEI interrupt from INTC domain */
> +	hgei_parent_irq = irq_create_mapping(domain, IRQ_S_GEXT);
> +	if (!hgei_parent_irq) {
> +		kvm_err("unable to map SGEI IRQ\n");
> +		return -ENOMEM;
> +	}
> +
> +	/* Request per-CPU SGEI interrupt */
> +	rc = request_percpu_irq(hgei_parent_irq, hgei_interrupt,
> +				"riscv-kvm", &aia_hgei);
> +	if (rc) {
> +		kvm_err("failed to request SGEI IRQ\n");
> +		return rc;
> +	}
> +
> +	return 0;
> +}
> +
> +static void aia_hgei_exit(void)
> +{
> +	/* Free per-CPU SGEI interrupt */
> +	free_percpu_irq(hgei_parent_irq, &aia_hgei);
> +}
> +
>  void kvm_riscv_aia_enable(void)
>  {
>  	if (!kvm_riscv_aia_available())
> @@ -357,21 +535,79 @@ void kvm_riscv_aia_enable(void)
>  	csr_write(CSR_HVIPRIO1H, 0x0);
>  	csr_write(CSR_HVIPRIO2H, 0x0);
>  #endif
> +
> +	/* Enable per-CPU SGEI interrupt */
> +	enable_percpu_irq(hgei_parent_irq,
> +			  irq_get_trigger_type(hgei_parent_irq));
> +	csr_set(CSR_HIE, BIT(IRQ_S_GEXT));
>  }
>  
>  void kvm_riscv_aia_disable(void)
>  {
> +	int i;
> +	unsigned long flags;
> +	struct kvm_vcpu *vcpu;
> +	struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
> +
>  	if (!kvm_riscv_aia_available())
>  		return;
>  
> +	/* Disable per-CPU SGEI interrupt */
> +	csr_clear(CSR_HIE, BIT(IRQ_S_GEXT));
> +	disable_percpu_irq(hgei_parent_irq);
> +
>  	aia_set_hvictl(false);
> +
> +	raw_spin_lock_irqsave(&hgctrl->lock, flags);
> +
> +	for (i = 0; i <= kvm_riscv_aia_nr_hgei; i++) {

I guess this should start at i = 1, but in this case it doesn't
matter since hgctrl->owners[0] should always be NULL.

> +		vcpu = hgctrl->owners[i];
> +		if (!vcpu)
> +			continue;
> +
> +		/*
> +		 * We release hgctrl->lock before notifying IMSIC
> +		 * so that we don't have lock ordering issues.
> +		 */
> +		raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> +
> +		/* Notify IMSIC */
> +		kvm_riscv_vcpu_aia_imsic_release(vcpu);
> +
> +		/*
> +		 * Wakeup VCPU if it was blocked so that it can
> +		 * run on other HARTs
> +		 */
> +		if (csr_read(CSR_HGEIE) & BIT(i)) {
> +			csr_clear(CSR_HGEIE, BIT(i));
> +			kvm_vcpu_kick(vcpu);

Doing all this outside the lock makes me wonder what happens when 'vcpu'
is no longer the owner at the time of kvm_riscv_vcpu_aia_imsic_release()
or kvm_vcpu_kick(). Even if the calls on the wrong vcpu are just noise,
don't we still need to confirm that we release/kick the real owner
before we return from this function?

It appears safe to call kvm_vcpu_kick() while holding the lock and
hgei_interrupt() does that. So, since there's currently no
implementation of kvm_riscv_vcpu_aia_imsic_release(), I'm not sure what
lock ordering issues we need to avoid.
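
Something like this rough sketch is what I have in mind (the IMSIC
release is left out here, since it's currently a no-op):

	raw_spin_lock_irqsave(&hgctrl->lock, flags);

	for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) {
		vcpu = hgctrl->owners[i];
		if (!vcpu)
			continue;

		/* Owner cannot change between the check and the kick,
		 * since we still hold hgctrl->lock here. */
		if (csr_read(CSR_HGEIE) & BIT(i)) {
			csr_clear(CSR_HGEIE, BIT(i));
			kvm_vcpu_kick(vcpu);
		}
	}

	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);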

> +		}
> +
> +		raw_spin_lock_irqsave(&hgctrl->lock, flags);
> +	}
> +
> +	raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
>  }
>  
>  int kvm_riscv_aia_init(void)
>  {
> +	int rc;
> +
>  	if (!riscv_isa_extension_available(NULL, SxAIA))
>  		return -ENODEV;
>  
> +	/* Figure-out number of bits in HGEIE */
> +	csr_write(CSR_HGEIE, -1UL);
> +	kvm_riscv_aia_nr_hgei = fls_long(csr_read(CSR_HGEIE));
> +	csr_write(CSR_HGEIE, 0);
> +	if (kvm_riscv_aia_nr_hgei)
> +		kvm_riscv_aia_nr_hgei--;
> +
> +	/* Initialize guest external interrupt line management */
> +	rc = aia_hgei_init();
> +	if (rc)
> +		return rc;
> +
>  	/* Enable KVM AIA support */
>  	static_branch_enable(&kvm_riscv_aia_available);
>  
> @@ -380,4 +616,9 @@ int kvm_riscv_aia_init(void)
>  
>  void kvm_riscv_aia_exit(void)
>  {
> +	if (!kvm_riscv_aia_available())
> +		return;
> +
> +	/* Cleanup the HGEI state */
> +	aia_hgei_exit();
>  }
> diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
> index 6396352b4e4d..b0b46f48f31e 100644
> --- a/arch/riscv/kvm/main.c
> +++ b/arch/riscv/kvm/main.c
> @@ -116,7 +116,8 @@ static int __init riscv_kvm_init(void)
>  	kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
>  
>  	if (kvm_riscv_aia_available())
> -		kvm_info("AIA available\n");
> +		kvm_info("AIA available with %d guest external interrupts\n",
> +			 kvm_riscv_aia_nr_hgei);
>  
>  	rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
>  	if (rc) {
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index 30acf3ebdc3d..eace51dd896f 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -249,10 +249,12 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
>  
>  void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
>  {
> +	kvm_riscv_aia_wakeon_hgei(vcpu, true);
>  }
>  
>  void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
>  {
> +	kvm_riscv_aia_wakeon_hgei(vcpu, false);
>  }
>  
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> -- 
> 2.34.1
>

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 7/8] RISC-V: KVM: Virtualize per-HART AIA CSRs
  2023-04-03 16:37   ` Andrew Jones
@ 2023-04-04 13:31     ` Anup Patel
  2023-04-04 13:54     ` Anup Patel
  1 sibling, 0 replies; 30+ messages in thread
From: Anup Patel @ 2023-04-04 13:31 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
	Paul Walmsley, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 3, 2023 at 10:07 PM Andrew Jones <ajones@ventanamicro.com> wrote:
>
> On Mon, Apr 03, 2023 at 03:03:09PM +0530, Anup Patel wrote:
> > The AIA specification introduces per-HART AIA CSRs which primarily
> > support:
> > * 64 local interrupts on both RV64 and RV32
> > * priority for each of the 64 local interrupts
> > * interrupt filtering for local interrupts
> >
> > This patch virtualizes the above mentioned AIA CSRs and also extends
> > the ONE_REG interface to allow user-space to save/restore the Guest/VM
> > view of these CSRs.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> >  arch/riscv/include/asm/kvm_aia.h  |  88 +++++----
> >  arch/riscv/include/asm/kvm_host.h |   7 +-
> >  arch/riscv/include/uapi/asm/kvm.h |   7 +
> >  arch/riscv/kvm/aia.c              | 317 ++++++++++++++++++++++++++++++
> >  arch/riscv/kvm/vcpu.c             |  53 +++--
> >  5 files changed, 415 insertions(+), 57 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
> > index 258a835d4c32..1de0717112e5 100644
> > --- a/arch/riscv/include/asm/kvm_aia.h
> > +++ b/arch/riscv/include/asm/kvm_aia.h
>
> nit: Generating the diff with --patience makes this a bit easier to read,
> and/or several of the stub functions could have been directly put in
> arch/riscv/kvm/aia.c in the skeleton patch to avoid so many changes in
> this one.

If we have the stubs as inline functions in the header, then the calls
are optimized out by the compiler until an actual function definition
is available.
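
For example, for one of the helpers the two phases look like this
(just a sketch, the real thing is in the diff below):

	/* Skeleton patch: empty stub in kvm_aia.h, so the compiler
	 * drops the call at every call site. */
	static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
	{
	}

	/* This patch: the stub becomes a plain declaration backed by
	 * the real definition in arch/riscv/kvm/aia.c. */
	void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu);
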

>
> > @@ -12,6 +12,7 @@
> >
> >  #include <linux/jump_label.h>
> >  #include <linux/kvm_types.h>
> > +#include <asm/csr.h>
> >
> >  struct kvm_aia {
> >       /* In-kernel irqchip created */
> > @@ -21,7 +22,22 @@ struct kvm_aia {
> >       bool            initialized;
> >  };
> >
> > +struct kvm_vcpu_aia_csr {
> > +     unsigned long vsiselect;
> > +     unsigned long hviprio1;
> > +     unsigned long hviprio2;
> > +     unsigned long vsieh;
> > +     unsigned long hviph;
> > +     unsigned long hviprio1h;
> > +     unsigned long hviprio2h;
> > +};
> > +
> >  struct kvm_vcpu_aia {
> > +     /* CPU AIA CSR context of Guest VCPU */
> > +     struct kvm_vcpu_aia_csr guest_csr;
> > +
> > +     /* CPU AIA CSR context upon Guest VCPU reset */
> > +     struct kvm_vcpu_aia_csr guest_reset_csr;
> >  };
> >
> >  #define kvm_riscv_aia_initialized(k) ((k)->arch.aia.initialized)
> > @@ -32,48 +48,50 @@ DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
> >  #define kvm_riscv_aia_available() \
> >       static_branch_unlikely(&kvm_riscv_aia_available)
> >
> > -static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> > -{
> > -}
> > -
> > -static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> > -{
> > -}
> > -
> > -static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu,
> > -                                                  u64 mask)
> > -{
> > -     return false;
> > -}
> > -
> > -static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
> > -{
> > -}
> > -
> > -static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
> > -{
> > -}
> > -
> > -static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
> > +#define KVM_RISCV_AIA_IMSIC_TOPEI    (ISELECT_MASK + 1)
> > +static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
> > +                                            unsigned long isel,
> > +                                            unsigned long *val,
> > +                                            unsigned long new_val,
> > +                                            unsigned long wr_mask)
> >  {
> > +     return 0;
> >  }
> >
> > -static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> > -                                          unsigned long reg_num,
> > -                                          unsigned long *out_val)
> > +#ifdef CONFIG_32BIT
> > +void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu);
> > +void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu);
> > +#else
> > +static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> >  {
> > -     *out_val = 0;
> > -     return 0;
> >  }
> > -
> > -static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> > -                                          unsigned long reg_num,
> > -                                          unsigned long val)
> > +static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> >  {
> > -     return 0;
> >  }
> > -
> > -#define KVM_RISCV_VCPU_AIA_CSR_FUNCS
> > +#endif
> > +bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
> > +
> > +void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu);
> > +void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu);
> > +void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu);
> > +int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> > +                            unsigned long reg_num,
> > +                            unsigned long *out_val);
> > +int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> > +                            unsigned long reg_num,
> > +                            unsigned long val);
> > +
> > +int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
> > +                              unsigned int csr_num,
> > +                              unsigned long *val,
> > +                              unsigned long new_val,
> > +                              unsigned long wr_mask);
> > +int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
> > +                             unsigned long *val, unsigned long new_val,
> > +                             unsigned long wr_mask);
> > +#define KVM_RISCV_VCPU_AIA_CSR_FUNCS \
> > +{ .base = CSR_SIREG,      .count = 1, .func = kvm_riscv_vcpu_aia_rmw_ireg }, \
> > +{ .base = CSR_STOPEI,     .count = 1, .func = kvm_riscv_vcpu_aia_rmw_topei },
> >
> >  static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu)
> >  {
> > diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> > index 3157cf748df1..ee0acccb1d3b 100644
> > --- a/arch/riscv/include/asm/kvm_host.h
> > +++ b/arch/riscv/include/asm/kvm_host.h
> > @@ -204,8 +204,9 @@ struct kvm_vcpu_arch {
> >        * in irqs_pending. Our approach is modeled around multiple producer
> >        * and single consumer problem where the consumer is the VCPU itself.
> >        */
> > -     unsigned long irqs_pending;
> > -     unsigned long irqs_pending_mask;
> > +#define KVM_RISCV_VCPU_NR_IRQS       64
> > +     DECLARE_BITMAP(irqs_pending, KVM_RISCV_VCPU_NR_IRQS);
> > +     DECLARE_BITMAP(irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS);
>
> I'd prefer this ulong to bitmap change, and all its repercussions, be done
> in a separate patch.

Okay, I will create a separate patch.

>
> >
> >       /* VCPU Timer */
> >       struct kvm_vcpu_timer timer;
> > @@ -334,7 +335,7 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
> >  int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
> >  void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu);
> >  void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu);
> > -bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask);
> > +bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
> >  void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu);
> >  void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu);
> >
> > diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> > index cbc3e74fa670..c517e70ddcd6 100644
> > --- a/arch/riscv/include/uapi/asm/kvm.h
> > +++ b/arch/riscv/include/uapi/asm/kvm.h
> > @@ -81,6 +81,13 @@ struct kvm_riscv_csr {
> >
> >  /* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> >  struct kvm_riscv_aia_csr {
> > +     unsigned long siselect;
> > +     unsigned long siprio1;
> > +     unsigned long siprio2;
> > +     unsigned long sieh;
> > +     unsigned long siph;
> > +     unsigned long siprio1h;
> > +     unsigned long siprio2h;
> >  };
> >
> >  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
> > index 7a633331cd3e..d530912f28bc 100644
> > --- a/arch/riscv/kvm/aia.c
> > +++ b/arch/riscv/kvm/aia.c
> > @@ -26,6 +26,323 @@ static void aia_set_hvictl(bool ext_irq_pending)
> >       csr_write(CSR_HVICTL, hvictl);
> >  }
> >
> > +#ifdef CONFIG_32BIT
> > +void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +     unsigned long mask, val;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +     if (READ_ONCE(vcpu->arch.irqs_pending_mask[1])) {
> > +             mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[1], 0);
> > +             val = READ_ONCE(vcpu->arch.irqs_pending[1]) & mask;
> > +
> > +             csr->hviph &= ~mask;
> > +             csr->hviph |= val;
> > +     }
> > +}
> > +
> > +void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (kvm_riscv_aia_available())
> > +             csr->vsieh = csr_read(CSR_VSIEH);
> > +}
> > +#endif
> > +
> > +bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
> > +{
> > +     unsigned long seip;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return false;
> > +
> > +#ifdef CONFIG_32BIT
> > +     if (READ_ONCE(vcpu->arch.irqs_pending[1]) &
> > +         (vcpu->arch.aia_context.guest_csr.vsieh & (unsigned long)(mask >> 32)))
>
> upper_32_bits()

Okay, I will update.

>
> > +             return true;
> > +#endif
> > +
> > +     seip = vcpu->arch.guest_csr.vsie;
> > +     seip &= (unsigned long)mask;
> > +     seip &= BIT(IRQ_S_EXT);
>
> Please add a blank line above the if-statement.
>
> > +     if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
>
> Shouldn't we check kvm_riscv_aia_initialized() at the top of this
> function?

It is possible that a Guest/VM uses the AIA CSRs but does not
use guest external interrupts, so kvm_riscv_aia_initialized()
should be checked afterwards.

>
> > +             return false;
> > +
> > +     return false;
>
> return true
>
> But if we move kvm_riscv_aia_initialized() up, then we instead can do
>
>  return !!seip;

Yes, this can be optimized, but kvm_riscv_aia_initialized() still
needs to be checked in the middle.
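
To make the ordering concrete, here is roughly how the function reads
once PATCH8 (quoted later in this thread) fills in the tail, with drew's
upper_32_bits() and !! suggestions folded in (a condensed sketch, not
the exact patch):

bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
{
        unsigned long seip;
        int hgei;

        if (!kvm_riscv_aia_available())
                return false;

#ifdef CONFIG_32BIT
        /* plain AIA CSR state; valid even without in-kernel irqchip */
        if (READ_ONCE(vcpu->arch.irqs_pending[1]) &
            (vcpu->arch.aia_context.guest_csr.vsieh & upper_32_bits(mask)))
                return true;
#endif

        seip = vcpu->arch.guest_csr.vsie & (unsigned long)mask & BIT(IRQ_S_EXT);

        /* only the guest external interrupt lookup needs initialized AIA */
        if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
                return false;

        hgei = aia_find_hgei(vcpu);
        if (hgei > 0)
                return !!(csr_read(CSR_HGEIP) & BIT(hgei));

        return false;
}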

>
> > +}
> > +
> > +void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +#ifdef CONFIG_32BIT
> > +     csr_write(CSR_HVIPH, vcpu->arch.aia_context.guest_csr.hviph);
> > +#endif
> > +     aia_set_hvictl((csr->hvip & BIT(IRQ_VS_EXT)) ? true : false);
>
> The compiler will manage the conversion of csr->hvip & BIT(IRQ_VS_EXT)
> to a 1 or 0 since it's getting passed in as a boolean parameter.

Okay, I will update.

>
> > +}
> > +
> > +void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +     csr_write(CSR_VSISELECT, csr->vsiselect);
> > +     csr_write(CSR_HVIPRIO1, csr->hviprio1);
> > +     csr_write(CSR_HVIPRIO2, csr->hviprio2);
> > +#ifdef CONFIG_32BIT
> > +     csr_write(CSR_VSIEH, csr->vsieh);
> > +     csr_write(CSR_HVIPH, csr->hviph);
> > +     csr_write(CSR_HVIPRIO1H, csr->hviprio1h);
> > +     csr_write(CSR_HVIPRIO2H, csr->hviprio2h);
> > +#endif
> > +}
> > +
> > +void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +     csr->vsiselect = csr_read(CSR_VSISELECT);
> > +     csr->hviprio1 = csr_read(CSR_HVIPRIO1);
> > +     csr->hviprio2 = csr_read(CSR_HVIPRIO2);
> > +#ifdef CONFIG_32BIT
> > +     csr->vsieh = csr_read(CSR_VSIEH);
> > +     csr->hviph = csr_read(CSR_HVIPH);
> > +     csr->hviprio1h = csr_read(CSR_HVIPRIO1H);
> > +     csr->hviprio2h = csr_read(CSR_HVIPRIO2H);
> > +#endif
> > +}
> > +
> > +int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> > +                            unsigned long reg_num,
> > +                            unsigned long *out_val)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
> > +             return -EINVAL;
> > +
> > +     *out_val = 0;
> > +     if (kvm_riscv_aia_available())
> > +             *out_val = ((unsigned long *)csr)[reg_num];
> > +
> > +     return 0;
> > +}
> > +
> > +int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> > +                            unsigned long reg_num,
> > +                            unsigned long val)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
> > +             return -EINVAL;
> > +
> > +     if (kvm_riscv_aia_available()) {
> > +             ((unsigned long *)csr)[reg_num] = val;
> > +
> > +#ifdef CONFIG_32BIT
> > +             if (reg_num == KVM_REG_RISCV_CSR_AIA_REG(siph))
> > +                     WRITE_ONCE(vcpu->arch.irqs_pending_mask[1], 0);
> > +#endif
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
> > +                              unsigned int csr_num,
> > +                              unsigned long *val,
> > +                              unsigned long new_val,
> > +                              unsigned long wr_mask)
> > +{
> > +     /* If AIA not available then redirect trap */
> > +     if (!kvm_riscv_aia_available())
> > +             return KVM_INSN_ILLEGAL_TRAP;
> > +
> > +     /* If AIA not initialized then forward to user space */
> > +     if (!kvm_riscv_aia_initialized(vcpu->kvm))
> > +             return KVM_INSN_EXIT_TO_USER_SPACE;
> > +
> > +     return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, KVM_RISCV_AIA_IMSIC_TOPEI,
> > +                                         val, new_val, wr_mask);
> > +}
> > +
> > +/*
> > + * External IRQ priority always read-only zero. This means default
> > + * priority order  is always preferred for external IRQs unless
> > + * HVICTL.IID == 9 and HVICTL.IPRIO != 0
> > + */
> > +static int aia_irq2bitpos[] = {
> > +0,     8,   -1,   -1,   16,   24,   -1,   -1, /* 0 - 7 */
> > +32,   -1,   -1,   -1,   -1,   40,   48,   56, /* 8 - 15 */
> > +64,   72,   80,   88,   96,  104,  112,  120, /* 16 - 23 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 24 - 31 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 32 - 39 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 40 - 47 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 48 - 55 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 56 - 63 */
> > +};
> > +
> > +static u8 aia_get_iprio8(struct kvm_vcpu *vcpu, unsigned int irq)
> > +{
> > +     unsigned long hviprio;
> > +     int bitpos = aia_irq2bitpos[irq];
> > +
> > +     if (bitpos < 0)
> > +             return 0;
> > +
> > +     switch (bitpos / BITS_PER_LONG) {
> > +     case 0:
> > +             hviprio = csr_read(CSR_HVIPRIO1);
> > +             break;
> > +     case 1:
> > +#ifndef CONFIG_32BIT
> > +             hviprio = csr_read(CSR_HVIPRIO2);
> > +             break;
> > +#else
> > +             hviprio = csr_read(CSR_HVIPRIO1H);
> > +             break;
> > +     case 2:
> > +             hviprio = csr_read(CSR_HVIPRIO2);
> > +             break;
> > +     case 3:
> > +             hviprio = csr_read(CSR_HVIPRIO2H);
> > +             break;
> > +#endif
> > +     default:
> > +             return 0;
> > +     };
>          ^ unnecessary ;

Okay, I will update.

> > +
> > +     return (hviprio >> (bitpos % BITS_PER_LONG)) & TOPI_IPRIO_MASK;
> > +}
> > +
> > +static void aia_set_iprio8(struct kvm_vcpu *vcpu, unsigned int irq, u8 prio)
> > +{
> > +     unsigned long hviprio;
> > +     int bitpos = aia_irq2bitpos[irq];
> > +
> > +     if (bitpos < 0)
> > +             return;
> > +
> > +     switch (bitpos / BITS_PER_LONG) {
> > +     case 0:
> > +             hviprio = csr_read(CSR_HVIPRIO1);
> > +             break;
> > +     case 1:
> > +#ifndef CONFIG_32BIT
> > +             hviprio = csr_read(CSR_HVIPRIO2);
> > +             break;
> > +#else
> > +             hviprio = csr_read(CSR_HVIPRIO1H);
> > +             break;
> > +     case 2:
> > +             hviprio = csr_read(CSR_HVIPRIO2);
> > +             break;
> > +     case 3:
> > +             hviprio = csr_read(CSR_HVIPRIO2H);
> > +             break;
> > +#endif
> > +     default:
> > +             return;
> > +     };
>          ^ unnecessary ;

Okay, I will update.

>
> The csr read switch could be put in a helper and shared between the get
> and set functions.

I think that's unnecessary.
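
For reference, the shared helper being suggested could look something
like this (illustrative only; the name and placement are hypothetical):

static unsigned long aia_hviprio_read(int word)
{
        switch (word) {
        case 0:
                return csr_read(CSR_HVIPRIO1);
        case 1:
#ifndef CONFIG_32BIT
                return csr_read(CSR_HVIPRIO2);
#else
                return csr_read(CSR_HVIPRIO1H);
        case 2:
                return csr_read(CSR_HVIPRIO2);
        case 3:
                return csr_read(CSR_HVIPRIO2H);
#endif
        default:
                return 0;
        }
}

aia_get_iprio8() and aia_set_iprio8() would then both call it with
bitpos / BITS_PER_LONG.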

>
> > +
> > +     hviprio &= ~((unsigned long)TOPI_IPRIO_MASK <<
>
> I don't think the (unsigned long) cast is necessary, as I believe
> TOPI_IPRIO_MASK is already an unsigned long.

Okay, I will update.

>
> > +                  (bitpos % BITS_PER_LONG));
> > +     hviprio |= (unsigned long)prio << (bitpos % BITS_PER_LONG);
> > +
> > +     switch (bitpos / BITS_PER_LONG) {
> > +     case 0:
> > +             csr_write(CSR_HVIPRIO1, hviprio);
> > +             break;
> > +     case 1:
> > +#ifndef CONFIG_32BIT
> > +             csr_write(CSR_HVIPRIO2, hviprio);
> > +             break;
> > +#else
> > +             csr_write(CSR_HVIPRIO1H, hviprio);
> > +             break;
> > +     case 2:
> > +             csr_write(CSR_HVIPRIO2, hviprio);
> > +             break;
> > +     case 3:
> > +             csr_write(CSR_HVIPRIO2H, hviprio);
> > +             break;
> > +#endif
> > +     default:
> > +             return;
> > +     };
>          ^ unnecessary ;

Okay, I will update.

>
> > +}
> > +
> > +static int aia_rmw_iprio(struct kvm_vcpu *vcpu, unsigned int isel,
> > +                      unsigned long *val, unsigned long new_val,
> > +                      unsigned long wr_mask)
> > +{
> > +     int i, firq, nirqs;
>
> nit: I guessed 'f' is for 'first', but 'first_irq' would make that more
> clear from the start.

Okay, I will rename firq to first_irq.

>
> > +     unsigned long old_val;
> > +
> > +#ifndef CONFIG_32BIT
> > +     if (isel & 0x1)
> > +             return KVM_INSN_ILLEGAL_TRAP;
> > +#endif
> > +
> > +     nirqs = 4 * (BITS_PER_LONG / 32);
> > +     firq = ((isel - ISELECT_IPRIO0) / (BITS_PER_LONG / 32)) * (nirqs);
>
> This is just firq = 4 * (isel - ISELECT_IPRIO0);

I agree. The current computation can be much simpler. I was probably
overthinking it.
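
For completeness: nirqs is 4 * (BITS_PER_LONG / 32), so on RV32 the
expression is ((isel - ISELECT_IPRIO0) / 1) * 4, and on RV64, where odd
isel values were already rejected above, it is
((isel - ISELECT_IPRIO0) / 2) * 8. Both reduce to
4 * (isel - ISELECT_IPRIO0).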

>
> > +
> > +     old_val = 0;
> > +     for (i = 0; i < nirqs; i++)
> > +             old_val |= (unsigned long)aia_get_iprio8(vcpu, firq + i) <<
> > +                        (TOPI_IPRIO_BITS * i);
>
> nit: normally would indent to under the (

The continuation line is not a parameter of aia_get_iprio8(); it is the
shift applied to its result, which is why it is not indented under the '('.

>
> > +
> > +     if (val)
> > +             *val = old_val;
> > +
> > +     if (wr_mask) {
> > +             new_val = (old_val & ~wr_mask) | (new_val & wr_mask);
> > +             for (i = 0; i < nirqs; i++)
> > +                     aia_set_iprio8(vcpu, firq + i,
> > +                     (new_val >> (TOPI_IPRIO_BITS * i)) & TOPI_IPRIO_MASK);
>
> nit: normally would indent to under the (

Okay, I will update.

>
> > +     }
> > +
> > +     return KVM_INSN_CONTINUE_NEXT_SEPC;
> > +}
> > +
> > +#define IMSIC_FIRST  0x70
> > +#define IMSIC_LAST   0xff
> > +int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
> > +                             unsigned long *val, unsigned long new_val,
> > +                             unsigned long wr_mask)
> > +{
> > +     unsigned int isel;
> > +
> > +     /* If AIA not available then redirect trap */
> > +     if (!kvm_riscv_aia_available())
> > +             return KVM_INSN_ILLEGAL_TRAP;
> > +
> > +     /* First try to emulate in kernel space */
> > +     isel = csr_read(CSR_VSISELECT) & ISELECT_MASK;
> > +     if (isel >= ISELECT_IPRIO0 && isel <= ISELECT_IPRIO15)
> > +             return aia_rmw_iprio(vcpu, isel, val, new_val, wr_mask);
> > +     else if (isel >= IMSIC_FIRST && isel <= IMSIC_LAST &&
> > +              kvm_riscv_aia_initialized(vcpu->kvm))
> > +             return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, isel, val, new_val,
> > +                                                 wr_mask);
> > +
> > +     /* We can't handle it here so redirect to user space */
> > +     return KVM_INSN_EXIT_TO_USER_SPACE;
> > +}
> > +
> >  void kvm_riscv_aia_enable(void)
> >  {
> >       if (!kvm_riscv_aia_available())
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index 15507cd3a595..30acf3ebdc3d 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -141,8 +141,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
> >
> >       kvm_riscv_vcpu_aia_reset(vcpu);
> >
> > -     WRITE_ONCE(vcpu->arch.irqs_pending, 0);
> > -     WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> > +     bitmap_zero(vcpu->arch.irqs_pending, KVM_RISCV_VCPU_NR_IRQS);
> > +     bitmap_zero(vcpu->arch.irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS);
> >
> >       kvm_riscv_vcpu_pmu_reset(vcpu);
> >
> > @@ -474,6 +474,7 @@ static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu,
> >       if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
> >               kvm_riscv_vcpu_flush_interrupts(vcpu);
> >               *out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
> > +             *out_val |= csr->hvip & ~IRQ_LOCAL_MASK;
> >       } else
> >               *out_val = ((unsigned long *)csr)[reg_num];
> >
> > @@ -497,7 +498,7 @@ static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu,
> >       ((unsigned long *)csr)[reg_num] = reg_val;
> >
> >       if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
> > -             WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> > +             WRITE_ONCE(vcpu->arch.irqs_pending_mask[0], 0);
> >
> >       return 0;
> >  }
> > @@ -799,9 +800,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu)
> >       struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> >       unsigned long mask, val;
> >
> > -     if (READ_ONCE(vcpu->arch.irqs_pending_mask)) {
> > -             mask = xchg_acquire(&vcpu->arch.irqs_pending_mask, 0);
> > -             val = READ_ONCE(vcpu->arch.irqs_pending) & mask;
> > +     if (READ_ONCE(vcpu->arch.irqs_pending_mask[0])) {
> > +             mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[0], 0);
> > +             val = READ_ONCE(vcpu->arch.irqs_pending[0]) & mask;
> >
> >               csr->hvip &= ~mask;
> >               csr->hvip |= val;
> > @@ -825,12 +826,12 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
> >       if ((csr->hvip ^ hvip) & (1UL << IRQ_VS_SOFT)) {
> >               if (hvip & (1UL << IRQ_VS_SOFT)) {
> >                       if (!test_and_set_bit(IRQ_VS_SOFT,
> > -                                           &v->irqs_pending_mask))
> > -                             set_bit(IRQ_VS_SOFT, &v->irqs_pending);
> > +                                           v->irqs_pending_mask))
> > +                             set_bit(IRQ_VS_SOFT, v->irqs_pending);
> >               } else {
> >                       if (!test_and_set_bit(IRQ_VS_SOFT,
> > -                                           &v->irqs_pending_mask))
> > -                             clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
> > +                                           v->irqs_pending_mask))
> > +                             clear_bit(IRQ_VS_SOFT, v->irqs_pending);
> >               }
> >       }
> >
> > @@ -843,14 +844,20 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
> >
> >  int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
> >  {
> > -     if (irq != IRQ_VS_SOFT &&
> > +     /*
> > +      * We only allow VS-mode software, timer, and external
> > +      * interrupts when irq is one of the local interrupts
> > +      * defined by RISC-V privilege specification.
> > +      */
> > +     if (irq < IRQ_LOCAL_MAX &&
> > +         irq != IRQ_VS_SOFT &&
> >           irq != IRQ_VS_TIMER &&
> >           irq != IRQ_VS_EXT)
> >               return -EINVAL;
> >
> > -     set_bit(irq, &vcpu->arch.irqs_pending);
> > +     set_bit(irq, vcpu->arch.irqs_pending);
> >       smp_mb__before_atomic();
> > -     set_bit(irq, &vcpu->arch.irqs_pending_mask);
> > +     set_bit(irq, vcpu->arch.irqs_pending_mask);
> >
> >       kvm_vcpu_kick(vcpu);
> >
> > @@ -859,25 +866,33 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
> >
> >  int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
> >  {
> > -     if (irq != IRQ_VS_SOFT &&
> > +     /*
> > +      * We only allow VS-mode software, timer, and external
> > +      * interrupts when irq is one of the local interrupts
> > +      * defined by RISC-V privilege specification.
> > +      */
> > +     if (irq < IRQ_LOCAL_MAX &&
> > +         irq != IRQ_VS_SOFT &&
> >           irq != IRQ_VS_TIMER &&
> >           irq != IRQ_VS_EXT)
> >               return -EINVAL;
> >
> > -     clear_bit(irq, &vcpu->arch.irqs_pending);
> > +     clear_bit(irq, vcpu->arch.irqs_pending);
> >       smp_mb__before_atomic();
> > -     set_bit(irq, &vcpu->arch.irqs_pending_mask);
> > +     set_bit(irq, vcpu->arch.irqs_pending_mask);
> >
> >       return 0;
> >  }
> >
> > -bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask)
> > +bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
> >  {
> >       unsigned long ie;
> >
> >       ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
> > -             << VSIP_TO_HVIP_SHIFT) & mask;
> > -     if (READ_ONCE(vcpu->arch.irqs_pending) & ie)
> > +             << VSIP_TO_HVIP_SHIFT) & (unsigned long)mask;
> > +     ie |= vcpu->arch.guest_csr.vsie & ~IRQ_LOCAL_MASK &
> > +             (unsigned long)mask;
> > +     if (READ_ONCE(vcpu->arch.irqs_pending[0]) & ie)
> >               return true;
> >
> >       /* Check AIA high interrupts */
> > --
> > 2.34.1
> >
>
> Thanks,
> drew

Regards,
Anup

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 8/8] RISC-V: KVM: Implement guest external interrupt line management
  2023-04-04 12:45   ` Andrew Jones
@ 2023-04-04 13:52     ` Anup Patel
  0 siblings, 0 replies; 30+ messages in thread
From: Anup Patel @ 2023-04-04 13:52 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
	Paul Walmsley, kvm, kvm-riscv, linux-riscv, linux-kernel

On Tue, Apr 4, 2023 at 6:15 PM Andrew Jones <ajones@ventanamicro.com> wrote:
>
> On Mon, Apr 03, 2023 at 03:03:10PM +0530, Anup Patel wrote:
> > The RISC-V host will have one guest external interrupt line for each
> > VS-level IMSIC associated with a HART. The guest external interrupt
> > lines are per-HART resources and the hypervisor can use HGEIE, HGEIP, and
> > HIE CSRs to manage these guest external interrupt lines.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> >  arch/riscv/include/asm/kvm_aia.h |  10 ++
> >  arch/riscv/kvm/aia.c             | 241 +++++++++++++++++++++++++++++++
> >  arch/riscv/kvm/main.c            |   3 +-
> >  arch/riscv/kvm/vcpu.c            |   2 +
> >  4 files changed, 255 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
> > index 1de0717112e5..0938e0cadf80 100644
> > --- a/arch/riscv/include/asm/kvm_aia.h
> > +++ b/arch/riscv/include/asm/kvm_aia.h
> > @@ -44,10 +44,15 @@ struct kvm_vcpu_aia {
> >
> >  #define irqchip_in_kernel(k)         ((k)->arch.aia.in_kernel)
> >
> > +extern unsigned int kvm_riscv_aia_nr_hgei;
> >  DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
> >  #define kvm_riscv_aia_available() \
> >       static_branch_unlikely(&kvm_riscv_aia_available)
> >
> > +static inline void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
> > +{
> > +}
> > +
> >  #define KVM_RISCV_AIA_IMSIC_TOPEI    (ISELECT_MASK + 1)
> >  static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
> >                                              unsigned long isel,
> > @@ -119,6 +124,11 @@ static inline void kvm_riscv_aia_destroy_vm(struct kvm *kvm)
> >  {
> >  }
> >
> > +int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
> > +                          void __iomem **hgei_va, phys_addr_t *hgei_pa);
> > +void kvm_riscv_aia_free_hgei(int cpu, int hgei);
> > +void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable);
> > +
> >  void kvm_riscv_aia_enable(void);
> >  void kvm_riscv_aia_disable(void);
> >  int kvm_riscv_aia_init(void);
> > diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
> > index d530912f28bc..1264783e7c4d 100644
> > --- a/arch/riscv/kvm/aia.c
> > +++ b/arch/riscv/kvm/aia.c
> > @@ -7,11 +7,46 @@
> >   *   Anup Patel <apatel@ventanamicro.com>
> >   */
> >
> > +#include <linux/bitops.h>
> > +#include <linux/irq.h>
> > +#include <linux/irqdomain.h>
> >  #include <linux/kvm_host.h>
> > +#include <linux/percpu.h>
> > +#include <linux/spinlock.h>
> >  #include <asm/hwcap.h>
> >
> > +struct aia_hgei_control {
> > +     raw_spinlock_t lock;
> > +     unsigned long free_bitmap;
> > +     struct kvm_vcpu *owners[BITS_PER_LONG];
> > +};
> > +static DEFINE_PER_CPU(struct aia_hgei_control, aia_hgei);
> > +static int hgei_parent_irq;
> > +
> > +unsigned int kvm_riscv_aia_nr_hgei;
> >  DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
> >
> > +static int aia_find_hgei(struct kvm_vcpu *owner)
> > +{
> > +     int i, hgei;
> > +     unsigned long flags;
> > +     struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
> > +
> > +     raw_spin_lock_irqsave(&hgctrl->lock, flags);
> > +
> > +     hgei = -1;
> > +     for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) {
> > +             if (hgctrl->owners[i] == owner) {
> > +                     hgei = i;
> > +                     break;
> > +             }
> > +     }
> > +
> > +     raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> > +
> > +     return hgei;
> > +}
> > +
> >  static void aia_set_hvictl(bool ext_irq_pending)
> >  {
> >       unsigned long hvictl;
> > @@ -55,6 +90,7 @@ void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> >
> >  bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
> >  {
> > +     int hgei;
> >       unsigned long seip;
> >
> >       if (!kvm_riscv_aia_available())
> > @@ -72,6 +108,10 @@ bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
> >       if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
> >               return false;
> >
> > +     hgei = aia_find_hgei(vcpu);
> > +     if (hgei > 0)
> > +             return (csr_read(CSR_HGEIP) & BIT(hgei)) ? true : false;
>
> nit: return !!(csr_read(CSR_HGEIP) & BIT(hgei)) is a bit less verbose.

Okay, I will update.

>
> > +
> >       return false;
> >  }
> >
> > @@ -343,6 +383,144 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
> >       return KVM_INSN_EXIT_TO_USER_SPACE;
> >  }
> >
> > +int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
> > +                          void __iomem **hgei_va, phys_addr_t *hgei_pa)
> > +{
> > +     int ret = -ENOENT;
> > +     unsigned long flags;
> > +     struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return -ENODEV;
> > +     if (!hgctrl)
> > +             return -ENODEV;
>
> nit:
>
> if (!kvm_riscv_aia_available() || !hgctrl)
>    return -ENODEV;

Okay, I will update.

>
> > +
> > +     raw_spin_lock_irqsave(&hgctrl->lock, flags);
> > +
> > +     if (hgctrl->free_bitmap) {
> > +             ret = __ffs(hgctrl->free_bitmap);
> > +             hgctrl->free_bitmap &= ~BIT(ret);
> > +             hgctrl->owners[ret] = owner;
> > +     }
> > +
> > +     raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> > +
> > +     /* TODO: To be updated later by AIA in-kernel irqchip support */
> > +     if (hgei_va)
> > +             *hgei_va = NULL;
> > +     if (hgei_pa)
> > +             *hgei_pa = 0;
> > +
> > +     return ret;
> > +}
> > +
> > +void kvm_riscv_aia_free_hgei(int cpu, int hgei)
> > +{
> > +     unsigned long flags;
> > +     struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
> > +
> > +     if (!kvm_riscv_aia_available() || !hgctrl)
> > +             return;
> > +
> > +     raw_spin_lock_irqsave(&hgctrl->lock, flags);
> > +
> > +     if (hgei > 0 && hgei <= kvm_riscv_aia_nr_hgei) {
> > +             if (!(hgctrl->free_bitmap & BIT(hgei))) {
> > +                     hgctrl->free_bitmap |= BIT(hgei);
> > +                     hgctrl->owners[hgei] = NULL;
> > +             }
> > +     }
> > +
> > +     raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> > +}
> > +
> > +void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable)
> > +{
> > +     int hgei;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +     hgei = aia_find_hgei(owner);
> > +     if (hgei > 0) {
> > +             if (enable)
> > +                     csr_set(CSR_HGEIE, BIT(hgei));
> > +             else
> > +                     csr_clear(CSR_HGEIE, BIT(hgei));
> > +     }
> > +}
> > +
> > +static irqreturn_t hgei_interrupt(int irq, void *dev_id)
> > +{
> > +     int i;
> > +     unsigned long hgei_mask, flags;
> > +     struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
> > +
> > +     hgei_mask = csr_read(CSR_HGEIP) & csr_read(CSR_HGEIE);
> > +     csr_clear(CSR_HGEIE, hgei_mask);
> > +
> > +     raw_spin_lock_irqsave(&hgctrl->lock, flags);
> > +
> > +     for_each_set_bit(i, &hgei_mask, BITS_PER_LONG) {
> > +             if (hgctrl->owners[i])
> > +                     kvm_vcpu_kick(hgctrl->owners[i]);
> > +     }
> > +
> > +     raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> > +
> > +     return IRQ_HANDLED;
> > +}
> > +
> > +static int aia_hgei_init(void)
> > +{
> > +     int cpu, rc;
> > +     struct irq_domain *domain;
> > +     struct aia_hgei_control *hgctrl;
> > +
> > +     /* Initialize per-CPU guest external interrupt line management */
> > +     for_each_possible_cpu(cpu) {
> > +             hgctrl = per_cpu_ptr(&aia_hgei, cpu);
> > +             raw_spin_lock_init(&hgctrl->lock);
> > +             if (kvm_riscv_aia_nr_hgei) {
> > +                     hgctrl->free_bitmap =
> > +                             BIT(kvm_riscv_aia_nr_hgei + 1) - 1;
> > +                     hgctrl->free_bitmap &= ~BIT(0);
> > +             } else
> > +                     hgctrl->free_bitmap = 0;
> > +     }
> > +
> > +     /* Find INTC irq domain */
> > +     domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(),
> > +                                       DOMAIN_BUS_ANY);
> > +     if (!domain) {
> > +             kvm_err("unable to find INTC domain\n");
> > +             return -ENOENT;
> > +     }
> > +
> > +     /* Map per-CPU SGEI interrupt from INTC domain */
> > +     hgei_parent_irq = irq_create_mapping(domain, IRQ_S_GEXT);
> > +     if (!hgei_parent_irq) {
> > +             kvm_err("unable to map SGEI IRQ\n");
> > +             return -ENOMEM;
> > +     }
> > +
> > +     /* Request per-CPU SGEI interrupt */
> > +     rc = request_percpu_irq(hgei_parent_irq, hgei_interrupt,
> > +                             "riscv-kvm", &aia_hgei);
> > +     if (rc) {
> > +             kvm_err("failed to request SGEI IRQ\n");
> > +             return rc;
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +static void aia_hgei_exit(void)
> > +{
> > +     /* Free per-CPU SGEI interrupt */
> > +     free_percpu_irq(hgei_parent_irq, &aia_hgei);
> > +}
> > +
> >  void kvm_riscv_aia_enable(void)
> >  {
> >       if (!kvm_riscv_aia_available())
> > @@ -357,21 +535,79 @@ void kvm_riscv_aia_enable(void)
> >       csr_write(CSR_HVIPRIO1H, 0x0);
> >       csr_write(CSR_HVIPRIO2H, 0x0);
> >  #endif
> > +
> > +     /* Enable per-CPU SGEI interrupt */
> > +     enable_percpu_irq(hgei_parent_irq,
> > +                       irq_get_trigger_type(hgei_parent_irq));
> > +     csr_set(CSR_HIE, BIT(IRQ_S_GEXT));
> >  }
> >
> >  void kvm_riscv_aia_disable(void)
> >  {
> > +     int i;
> > +     unsigned long flags;
> > +     struct kvm_vcpu *vcpu;
> > +     struct aia_hgei_control *hgctrl = this_cpu_ptr(&aia_hgei);
> > +
> >       if (!kvm_riscv_aia_available())
> >               return;
> >
> > +     /* Disable per-CPU SGEI interrupt */
> > +     csr_clear(CSR_HIE, BIT(IRQ_S_GEXT));
> > +     disable_percpu_irq(hgei_parent_irq);
> > +
> >       aia_set_hvictl(false);
> > +
> > +     raw_spin_lock_irqsave(&hgctrl->lock, flags);
> > +
> > +     for (i = 0; i <= kvm_riscv_aia_nr_hgei; i++) {
>
> I guess this should start at i = 1, but in this case it doesn't
> matter since hgctrl->owners[0] should always be NULL.

Yes, it doesn't matter in this case.

>
> > +             vcpu = hgctrl->owners[i];
> > +             if (!vcpu)
> > +                     continue;
> > +
> > +             /*
> > +              * We release hgctrl->lock before notifying IMSIC
> > +              * so that we don't have lock ordering issues.
> > +              */
> > +             raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> > +
> > +             /* Notify IMSIC */
> > +             kvm_riscv_vcpu_aia_imsic_release(vcpu);
> > +
> > +             /*
> > +              * Wakeup VCPU if it was blocked so that it can
> > +              * run on other HARTs
> > +              */
> > +             if (csr_read(CSR_HGEIE) & BIT(i)) {
> > +                     csr_clear(CSR_HGEIE, BIT(i));
> > +                     kvm_vcpu_kick(vcpu);
>
> Doing all this outside the lock makes me wonder what happens when 'vcpu'
> is no longer the owner at the time of kvm_riscv_vcpu_aia_imsic_release()
> or kvm_vcpu_kick(). Even if the calls on the wrong vcpu are just noise,
> then don't we still need to confirm that we release/kick the real owner
> before we return from this function?
>
> It appears safe to call kvm_vcpu_kick() while holding the lock and
> hgei_interrupt() does that. So, since there's currently no
> implementation of kvm_riscv_vcpu_aia_imsic_release(), I'm not sure what
> lock ordering issues we need to avoid.

The next batch of AIA patches implements kvm_riscv_vcpu_aia_imsic_release(),
which will release the HGEI interrupt line.

This is not to protect the kvm_vcpu_kick() call.
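
Purely as a hypothetical sketch of the lock-ordering concern (the function
is only a stub in this series, so this is an assumption about its future
shape, not its actual implementation):

/* Hypothetical shape of the future release path, for illustration only. */
void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
{
        int hgei = 1;   /* assume this line was allocated to the vcpu */

        /*
         * kvm_riscv_aia_free_hgei() takes hgctrl->lock internally, so
         * calling this while kvm_riscv_aia_disable() still held that
         * lock would self-deadlock on the per-CPU raw spinlock.
         */
        kvm_riscv_aia_free_hgei(vcpu->cpu, hgei);
}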

Regards,
Anup

>
> > +             }
> > +
> > +             raw_spin_lock_irqsave(&hgctrl->lock, flags);
> > +     }
> > +
> > +     raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
> >  }
> >
> >  int kvm_riscv_aia_init(void)
> >  {
> > +     int rc;
> > +
> >       if (!riscv_isa_extension_available(NULL, SxAIA))
> >               return -ENODEV;
> >
> > +     /* Figure-out number of bits in HGEIE */
> > +     csr_write(CSR_HGEIE, -1UL);
> > +     kvm_riscv_aia_nr_hgei = fls_long(csr_read(CSR_HGEIE));
> > +     csr_write(CSR_HGEIE, 0);
> > +     if (kvm_riscv_aia_nr_hgei)
> > +             kvm_riscv_aia_nr_hgei--;
> > +
> > +     /* Initialize guest external interrupt line management */
> > +     rc = aia_hgei_init();
> > +     if (rc)
> > +             return rc;
> > +
> >       /* Enable KVM AIA support */
> >       static_branch_enable(&kvm_riscv_aia_available);
> >
> > @@ -380,4 +616,9 @@ int kvm_riscv_aia_init(void)
> >
> >  void kvm_riscv_aia_exit(void)
> >  {
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +     /* Cleanup the HGEI state */
> > +     aia_hgei_exit();
> >  }
> > diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
> > index 6396352b4e4d..b0b46f48f31e 100644
> > --- a/arch/riscv/kvm/main.c
> > +++ b/arch/riscv/kvm/main.c
> > @@ -116,7 +116,8 @@ static int __init riscv_kvm_init(void)
> >       kvm_info("VMID %ld bits available\n", kvm_riscv_gstage_vmid_bits());
> >
> >       if (kvm_riscv_aia_available())
> > -             kvm_info("AIA available\n");
> > +             kvm_info("AIA available with %d guest external interrupts\n",
> > +                      kvm_riscv_aia_nr_hgei);
> >
> >       rc = kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
> >       if (rc) {
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index 30acf3ebdc3d..eace51dd896f 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -249,10 +249,12 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
> >
> >  void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
> >  {
> > +     kvm_riscv_aia_wakeon_hgei(vcpu, true);
> >  }
> >
> >  void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
> >  {
> > +     kvm_riscv_aia_wakeon_hgei(vcpu, false);
> >  }
> >
> >  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> > --
> > 2.34.1
> >
>
> Thanks,
> drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 7/8] RISC-V: KVM: Virtualize per-HART AIA CSRs
  2023-04-03 16:37   ` Andrew Jones
  2023-04-04 13:31     ` Anup Patel
@ 2023-04-04 13:54     ` Anup Patel
  1 sibling, 0 replies; 30+ messages in thread
From: Anup Patel @ 2023-04-04 13:54 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
	Paul Walmsley, kvm, kvm-riscv, linux-riscv, linux-kernel

On Mon, Apr 3, 2023 at 10:07 PM Andrew Jones <ajones@ventanamicro.com> wrote:
>
> On Mon, Apr 03, 2023 at 03:03:09PM +0530, Anup Patel wrote:
> > The AIA specification introduces per-HART AIA CSRs which primarily
> > support:
> > * 64 local interrupts on both RV64 and RV32
> > * priority for each of the 64 local interrupts
> > * interrupt filtering for local interrupts
> >
> > This patch virtualizes the above mentioned AIA CSRs and also extends
> > the ONE_REG interface to allow user space to save/restore the Guest/VM
> > view of these CSRs.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> >  arch/riscv/include/asm/kvm_aia.h  |  88 +++++----
> >  arch/riscv/include/asm/kvm_host.h |   7 +-
> >  arch/riscv/include/uapi/asm/kvm.h |   7 +
> >  arch/riscv/kvm/aia.c              | 317 ++++++++++++++++++++++++++++++
> >  arch/riscv/kvm/vcpu.c             |  53 +++--
> >  5 files changed, 415 insertions(+), 57 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
> > index 258a835d4c32..1de0717112e5 100644
> > --- a/arch/riscv/include/asm/kvm_aia.h
> > +++ b/arch/riscv/include/asm/kvm_aia.h
>
> nit: Generating the diff with --patience makes this a bit easier to read,
> and/or several of the stub functions could have been directly put in
> arch/riscv/kvm/aia.c in the skeleton patch to avoid so many changes in
> this one.
>
> > @@ -12,6 +12,7 @@
> >
> >  #include <linux/jump_label.h>
> >  #include <linux/kvm_types.h>
> > +#include <asm/csr.h>
> >
> >  struct kvm_aia {
> >       /* In-kernel irqchip created */
> > @@ -21,7 +22,22 @@ struct kvm_aia {
> >       bool            initialized;
> >  };
> >
> > +struct kvm_vcpu_aia_csr {
> > +     unsigned long vsiselect;
> > +     unsigned long hviprio1;
> > +     unsigned long hviprio2;
> > +     unsigned long vsieh;
> > +     unsigned long hviph;
> > +     unsigned long hviprio1h;
> > +     unsigned long hviprio2h;
> > +};
> > +
> >  struct kvm_vcpu_aia {
> > +     /* CPU AIA CSR context of Guest VCPU */
> > +     struct kvm_vcpu_aia_csr guest_csr;
> > +
> > +     /* CPU AIA CSR context upon Guest VCPU reset */
> > +     struct kvm_vcpu_aia_csr guest_reset_csr;
> >  };
> >
> >  #define kvm_riscv_aia_initialized(k) ((k)->arch.aia.initialized)
> > @@ -32,48 +48,50 @@ DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
> >  #define kvm_riscv_aia_available() \
> >       static_branch_unlikely(&kvm_riscv_aia_available)
> >
> > -static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> > -{
> > -}
> > -
> > -static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> > -{
> > -}
> > -
> > -static inline bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu,
> > -                                                  u64 mask)
> > -{
> > -     return false;
> > -}
> > -
> > -static inline void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
> > -{
> > -}
> > -
> > -static inline void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
> > -{
> > -}
> > -
> > -static inline void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
> > +#define KVM_RISCV_AIA_IMSIC_TOPEI    (ISELECT_MASK + 1)
> > +static inline int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu,
> > +                                            unsigned long isel,
> > +                                            unsigned long *val,
> > +                                            unsigned long new_val,
> > +                                            unsigned long wr_mask)
> >  {
> > +     return 0;
> >  }
> >
> > -static inline int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> > -                                          unsigned long reg_num,
> > -                                          unsigned long *out_val)
> > +#ifdef CONFIG_32BIT
> > +void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu);
> > +void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu);
> > +#else
> > +static inline void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> >  {
> > -     *out_val = 0;
> > -     return 0;
> >  }
> > -
> > -static inline int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> > -                                          unsigned long reg_num,
> > -                                          unsigned long val)
> > +static inline void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> >  {
> > -     return 0;
> >  }
> > -
> > -#define KVM_RISCV_VCPU_AIA_CSR_FUNCS
> > +#endif
> > +bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
> > +
> > +void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu);
> > +void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu);
> > +void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu);
> > +int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> > +                            unsigned long reg_num,
> > +                            unsigned long *out_val);
> > +int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> > +                            unsigned long reg_num,
> > +                            unsigned long val);
> > +
> > +int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
> > +                              unsigned int csr_num,
> > +                              unsigned long *val,
> > +                              unsigned long new_val,
> > +                              unsigned long wr_mask);
> > +int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
> > +                             unsigned long *val, unsigned long new_val,
> > +                             unsigned long wr_mask);
> > +#define KVM_RISCV_VCPU_AIA_CSR_FUNCS \
> > +{ .base = CSR_SIREG,      .count = 1, .func = kvm_riscv_vcpu_aia_rmw_ireg }, \
> > +{ .base = CSR_STOPEI,     .count = 1, .func = kvm_riscv_vcpu_aia_rmw_topei },
> >
> >  static inline int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu)
> >  {
> > diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> > index 3157cf748df1..ee0acccb1d3b 100644
> > --- a/arch/riscv/include/asm/kvm_host.h
> > +++ b/arch/riscv/include/asm/kvm_host.h
> > @@ -204,8 +204,9 @@ struct kvm_vcpu_arch {
> >        * in irqs_pending. Our approach is modeled around multiple producer
> >        * and single consumer problem where the consumer is the VCPU itself.
> >        */
> > -     unsigned long irqs_pending;
> > -     unsigned long irqs_pending_mask;
> > +#define KVM_RISCV_VCPU_NR_IRQS       64
> > +     DECLARE_BITMAP(irqs_pending, KVM_RISCV_VCPU_NR_IRQS);
> > +     DECLARE_BITMAP(irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS);
>
> I'd prefer this ulong to bitmap change, and all its repercussions, be done
> in a separate patch.
>
> >
> >       /* VCPU Timer */
> >       struct kvm_vcpu_timer timer;
> > @@ -334,7 +335,7 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
> >  int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
> >  void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu);
> >  void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu);
> > -bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask);
> > +bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
> >  void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu);
> >  void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu);
> >
> > diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
> > index cbc3e74fa670..c517e70ddcd6 100644
> > --- a/arch/riscv/include/uapi/asm/kvm.h
> > +++ b/arch/riscv/include/uapi/asm/kvm.h
> > @@ -81,6 +81,13 @@ struct kvm_riscv_csr {
> >
> >  /* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> >  struct kvm_riscv_aia_csr {
> > +     unsigned long siselect;
> > +     unsigned long siprio1;
> > +     unsigned long siprio2;
> > +     unsigned long sieh;
> > +     unsigned long siph;
> > +     unsigned long siprio1h;
> > +     unsigned long siprio2h;
> >  };
> >
> >  /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
> > diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
> > index 7a633331cd3e..d530912f28bc 100644
> > --- a/arch/riscv/kvm/aia.c
> > +++ b/arch/riscv/kvm/aia.c
> > @@ -26,6 +26,323 @@ static void aia_set_hvictl(bool ext_irq_pending)
> >       csr_write(CSR_HVICTL, hvictl);
> >  }
> >
> > +#ifdef CONFIG_32BIT
> > +void kvm_riscv_vcpu_aia_flush_interrupts(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +     unsigned long mask, val;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +     if (READ_ONCE(vcpu->arch.irqs_pending_mask[1])) {
> > +             mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[1], 0);
> > +             val = READ_ONCE(vcpu->arch.irqs_pending[1]) & mask;
> > +
> > +             csr->hviph &= ~mask;
> > +             csr->hviph |= val;
> > +     }
> > +}
> > +
> > +void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (kvm_riscv_aia_available())
> > +             csr->vsieh = csr_read(CSR_VSIEH);
> > +}
> > +#endif
> > +
> > +bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
> > +{
> > +     unsigned long seip;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return false;
> > +
> > +#ifdef CONFIG_32BIT
> > +     if (READ_ONCE(vcpu->arch.irqs_pending[1]) &
> > +         (vcpu->arch.aia_context.guest_csr.vsieh & (unsigned long)(mask >> 32)))
>
> upper_32_bits()
>
> > +             return true;
> > +#endif
> > +
> > +     seip = vcpu->arch.guest_csr.vsie;
> > +     seip &= (unsigned long)mask;
> > +     seip &= BIT(IRQ_S_EXT);
>
> Please add a blank line above the if-statement.
>
> > +     if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
>
> Shouldn't we check kvm_riscv_aia_initialized() at the top of this
> function?
>
> > +             return false;
> > +
> > +     return false;
>
> return true
>
> But if we move kvm_riscv_aia_initialized() up, then we instead can do
>
>  return !!seip;

This logic is correct. It only looks weird because it is incomplete
here; PATCH8 completes it.

It took a while to refresh my memory because I wrote this code
almost two years ago.

Regards,
Anup

>
> > +}
> > +
> > +void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +#ifdef CONFIG_32BIT
> > +     csr_write(CSR_HVIPH, vcpu->arch.aia_context.guest_csr.hviph);
> > +#endif
> > +     aia_set_hvictl((csr->hvip & BIT(IRQ_VS_EXT)) ? true : false);
>
> The compiler will manage the conversion of csr->hvip & BIT(IRQ_VS_EXT)
> to a 1 or 0 since it's getting passed in as a boolean parameter.
>
> > +}
> > +
> > +void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +     csr_write(CSR_VSISELECT, csr->vsiselect);
> > +     csr_write(CSR_HVIPRIO1, csr->hviprio1);
> > +     csr_write(CSR_HVIPRIO2, csr->hviprio2);
> > +#ifdef CONFIG_32BIT
> > +     csr_write(CSR_VSIEH, csr->vsieh);
> > +     csr_write(CSR_HVIPH, csr->hviph);
> > +     csr_write(CSR_HVIPRIO1H, csr->hviprio1h);
> > +     csr_write(CSR_HVIPRIO2H, csr->hviprio2h);
> > +#endif
> > +}
> > +
> > +void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (!kvm_riscv_aia_available())
> > +             return;
> > +
> > +     csr->vsiselect = csr_read(CSR_VSISELECT);
> > +     csr->hviprio1 = csr_read(CSR_HVIPRIO1);
> > +     csr->hviprio2 = csr_read(CSR_HVIPRIO2);
> > +#ifdef CONFIG_32BIT
> > +     csr->vsieh = csr_read(CSR_VSIEH);
> > +     csr->hviph = csr_read(CSR_HVIPH);
> > +     csr->hviprio1h = csr_read(CSR_HVIPRIO1H);
> > +     csr->hviprio2h = csr_read(CSR_HVIPRIO2H);
> > +#endif
> > +}
> > +
> > +int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> > +                            unsigned long reg_num,
> > +                            unsigned long *out_val)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
> > +             return -EINVAL;
> > +
> > +     *out_val = 0;
> > +     if (kvm_riscv_aia_available())
> > +             *out_val = ((unsigned long *)csr)[reg_num];
> > +
> > +     return 0;
> > +}
> > +
> > +int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> > +                            unsigned long reg_num,
> > +                            unsigned long val)
> > +{
> > +     struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
> > +
> > +     if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
> > +             return -EINVAL;
> > +
> > +     if (kvm_riscv_aia_available()) {
> > +             ((unsigned long *)csr)[reg_num] = val;
> > +
> > +#ifdef CONFIG_32BIT
> > +             if (reg_num == KVM_REG_RISCV_CSR_AIA_REG(siph))
> > +                     WRITE_ONCE(vcpu->arch.irqs_pending_mask[1], 0);
> > +#endif
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +int kvm_riscv_vcpu_aia_rmw_topei(struct kvm_vcpu *vcpu,
> > +                              unsigned int csr_num,
> > +                              unsigned long *val,
> > +                              unsigned long new_val,
> > +                              unsigned long wr_mask)
> > +{
> > +     /* If AIA not available then redirect trap */
> > +     if (!kvm_riscv_aia_available())
> > +             return KVM_INSN_ILLEGAL_TRAP;
> > +
> > +     /* If AIA not initialized then forward to user space */
> > +     if (!kvm_riscv_aia_initialized(vcpu->kvm))
> > +             return KVM_INSN_EXIT_TO_USER_SPACE;
> > +
> > +     return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, KVM_RISCV_AIA_IMSIC_TOPEI,
> > +                                         val, new_val, wr_mask);
> > +}
> > +
> > +/*
> > + * External IRQ priority always read-only zero. This means default
> > + * priority order  is always preferred for external IRQs unless
> > + * HVICTL.IID == 9 and HVICTL.IPRIO != 0
> > + */
> > +static int aia_irq2bitpos[] = {
> > +0,     8,   -1,   -1,   16,   24,   -1,   -1, /* 0 - 7 */
> > +32,   -1,   -1,   -1,   -1,   40,   48,   56, /* 8 - 15 */
> > +64,   72,   80,   88,   96,  104,  112,  120, /* 16 - 23 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 24 - 31 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 32 - 39 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 40 - 47 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 48 - 55 */
> > +-1,   -1,   -1,   -1,   -1,   -1,   -1,   -1, /* 56 - 63 */
> > +};
> > +
> > +static u8 aia_get_iprio8(struct kvm_vcpu *vcpu, unsigned int irq)
> > +{
> > +     unsigned long hviprio;
> > +     int bitpos = aia_irq2bitpos[irq];
> > +
> > +     if (bitpos < 0)
> > +             return 0;
> > +
> > +     switch (bitpos / BITS_PER_LONG) {
> > +     case 0:
> > +             hviprio = csr_read(CSR_HVIPRIO1);
> > +             break;
> > +     case 1:
> > +#ifndef CONFIG_32BIT
> > +             hviprio = csr_read(CSR_HVIPRIO2);
> > +             break;
> > +#else
> > +             hviprio = csr_read(CSR_HVIPRIO1H);
> > +             break;
> > +     case 2:
> > +             hviprio = csr_read(CSR_HVIPRIO2);
> > +             break;
> > +     case 3:
> > +             hviprio = csr_read(CSR_HVIPRIO2H);
> > +             break;
> > +#endif
> > +     default:
> > +             return 0;
> > +     };
>          ^ unnecessary ;
> > +
> > +     return (hviprio >> (bitpos % BITS_PER_LONG)) & TOPI_IPRIO_MASK;
> > +}
> > +
> > +static void aia_set_iprio8(struct kvm_vcpu *vcpu, unsigned int irq, u8 prio)
> > +{
> > +     unsigned long hviprio;
> > +     int bitpos = aia_irq2bitpos[irq];
> > +
> > +     if (bitpos < 0)
> > +             return;
> > +
> > +     switch (bitpos / BITS_PER_LONG) {
> > +     case 0:
> > +             hviprio = csr_read(CSR_HVIPRIO1);
> > +             break;
> > +     case 1:
> > +#ifndef CONFIG_32BIT
> > +             hviprio = csr_read(CSR_HVIPRIO2);
> > +             break;
> > +#else
> > +             hviprio = csr_read(CSR_HVIPRIO1H);
> > +             break;
> > +     case 2:
> > +             hviprio = csr_read(CSR_HVIPRIO2);
> > +             break;
> > +     case 3:
> > +             hviprio = csr_read(CSR_HVIPRIO2H);
> > +             break;
> > +#endif
> > +     default:
> > +             return;
> > +     };
>          ^ unnecessary ;
>
> The csr read switch could be put in a helper and shared between the get
> and set functions.
>
> > +
> > +     hviprio &= ~((unsigned long)TOPI_IPRIO_MASK <<
>
> I don't think the (unsigned long) cast is necessary, as I believe
> TOPI_IPRIO_MASK is already an unsigned long.
>
> > +                  (bitpos % BITS_PER_LONG));
> > +     hviprio |= (unsigned long)prio << (bitpos % BITS_PER_LONG);
> > +
> > +     switch (bitpos / BITS_PER_LONG) {
> > +     case 0:
> > +             csr_write(CSR_HVIPRIO1, hviprio);
> > +             break;
> > +     case 1:
> > +#ifndef CONFIG_32BIT
> > +             csr_write(CSR_HVIPRIO2, hviprio);
> > +             break;
> > +#else
> > +             csr_write(CSR_HVIPRIO1H, hviprio);
> > +             break;
> > +     case 2:
> > +             csr_write(CSR_HVIPRIO2, hviprio);
> > +             break;
> > +     case 3:
> > +             csr_write(CSR_HVIPRIO2H, hviprio);
> > +             break;
> > +#endif
> > +     default:
> > +             return;
> > +     };
>          ^ unnecessary ;
>
> > +}
> > +
> > +static int aia_rmw_iprio(struct kvm_vcpu *vcpu, unsigned int isel,
> > +                      unsigned long *val, unsigned long new_val,
> > +                      unsigned long wr_mask)
> > +{
> > +     int i, firq, nirqs;
>
> nit: I guessed the 'f' is for 'first', but 'first_irq' would make that
> clear from the start.
>
> > +     unsigned long old_val;
> > +
> > +#ifndef CONFIG_32BIT
> > +     if (isel & 0x1)
> > +             return KVM_INSN_ILLEGAL_TRAP;
> > +#endif
> > +
> > +     nirqs = 4 * (BITS_PER_LONG / 32);
> > +     firq = ((isel - ISELECT_IPRIO0) / (BITS_PER_LONG / 32)) * (nirqs);
>
> This is just firq = 4 * (isel - ISELECT_IPRIO0);
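> (On 64-bit, nirqs is 8 and the isel & 0x1 check above guarantees that
> isel - ISELECT_IPRIO0 is even, so ((isel - ISELECT_IPRIO0) / 2) * 8 ==
> 4 * (isel - ISELECT_IPRIO0); on 32-bit, nirqs is 4 and the divisor is
> 1, giving the same expression.)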
>
> > +
> > +     old_val = 0;
> > +     for (i = 0; i < nirqs; i++)
> > +             old_val |= (unsigned long)aia_get_iprio8(vcpu, firq + i) <<
> > +                        (TOPI_IPRIO_BITS * i);
>
> nit: normally would indent to under the (
>
> > +
> > +     if (val)
> > +             *val = old_val;
> > +
> > +     if (wr_mask) {
> > +             new_val = (old_val & ~wr_mask) | (new_val & wr_mask);
> > +             for (i = 0; i < nirqs; i++)
> > +                     aia_set_iprio8(vcpu, firq + i,
> > +                     (new_val >> (TOPI_IPRIO_BITS * i)) & TOPI_IPRIO_MASK);
>
> nit: normally would indent to under the (
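> e.g. (whitespace only):
>
>                 aia_set_iprio8(vcpu, firq + i,
>                                (new_val >> (TOPI_IPRIO_BITS * i)) &
>                                TOPI_IPRIO_MASK);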
>
> > +     }
> > +
> > +     return KVM_INSN_CONTINUE_NEXT_SEPC;
> > +}
> > +
> > +#define IMSIC_FIRST  0x70
> > +#define IMSIC_LAST   0xff
> > +int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
> > +                             unsigned long *val, unsigned long new_val,
> > +                             unsigned long wr_mask)
> > +{
> > +     unsigned int isel;
> > +
> > +     /* If AIA not available then redirect trap */
> > +     if (!kvm_riscv_aia_available())
> > +             return KVM_INSN_ILLEGAL_TRAP;
> > +
> > +     /* First try to emulate in kernel space */
> > +     isel = csr_read(CSR_VSISELECT) & ISELECT_MASK;
> > +     if (isel >= ISELECT_IPRIO0 && isel <= ISELECT_IPRIO15)
> > +             return aia_rmw_iprio(vcpu, isel, val, new_val, wr_mask);
> > +     else if (isel >= IMSIC_FIRST && isel <= IMSIC_LAST &&
> > +              kvm_riscv_aia_initialized(vcpu->kvm))
> > +             return kvm_riscv_vcpu_aia_imsic_rmw(vcpu, isel, val, new_val,
> > +                                                 wr_mask);
> > +
> > +     /* We can't handle it here so redirect to user space */
> > +     return KVM_INSN_EXIT_TO_USER_SPACE;
> > +}
> > +
> >  void kvm_riscv_aia_enable(void)
> >  {
> >       if (!kvm_riscv_aia_available())
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index 15507cd3a595..30acf3ebdc3d 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -141,8 +141,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
> >
> >       kvm_riscv_vcpu_aia_reset(vcpu);
> >
> > -     WRITE_ONCE(vcpu->arch.irqs_pending, 0);
> > -     WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> > +     bitmap_zero(vcpu->arch.irqs_pending, KVM_RISCV_VCPU_NR_IRQS);
> > +     bitmap_zero(vcpu->arch.irqs_pending_mask, KVM_RISCV_VCPU_NR_IRQS);
> >
> >       kvm_riscv_vcpu_pmu_reset(vcpu);
> >
> > @@ -474,6 +474,7 @@ static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu,
> >       if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
> >               kvm_riscv_vcpu_flush_interrupts(vcpu);
> >               *out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
> > +             *out_val |= csr->hvip & ~IRQ_LOCAL_MASK;
> >       } else
> >               *out_val = ((unsigned long *)csr)[reg_num];
> >
> > @@ -497,7 +498,7 @@ static inline int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu,
> >       ((unsigned long *)csr)[reg_num] = reg_val;
> >
> >       if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
> > -             WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> > +             WRITE_ONCE(vcpu->arch.irqs_pending_mask[0], 0);
> >
> >       return 0;
> >  }
> > @@ -799,9 +800,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu)
> >       struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
> >       unsigned long mask, val;
> >
> > -     if (READ_ONCE(vcpu->arch.irqs_pending_mask)) {
> > -             mask = xchg_acquire(&vcpu->arch.irqs_pending_mask, 0);
> > -             val = READ_ONCE(vcpu->arch.irqs_pending) & mask;
> > +     if (READ_ONCE(vcpu->arch.irqs_pending_mask[0])) {
> > +             mask = xchg_acquire(&vcpu->arch.irqs_pending_mask[0], 0);
> > +             val = READ_ONCE(vcpu->arch.irqs_pending[0]) & mask;
> >
> >               csr->hvip &= ~mask;
> >               csr->hvip |= val;
> > @@ -825,12 +826,12 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
> >       if ((csr->hvip ^ hvip) & (1UL << IRQ_VS_SOFT)) {
> >               if (hvip & (1UL << IRQ_VS_SOFT)) {
> >                       if (!test_and_set_bit(IRQ_VS_SOFT,
> > -                                           &v->irqs_pending_mask))
> > -                             set_bit(IRQ_VS_SOFT, &v->irqs_pending);
> > +                                           v->irqs_pending_mask))
> > +                             set_bit(IRQ_VS_SOFT, v->irqs_pending);
> >               } else {
> >                       if (!test_and_set_bit(IRQ_VS_SOFT,
> > -                                           &v->irqs_pending_mask))
> > -                             clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
> > +                                           v->irqs_pending_mask))
> > +                             clear_bit(IRQ_VS_SOFT, v->irqs_pending);
> >               }
> >       }
> >
> > @@ -843,14 +844,20 @@ void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu)
> >
> >  int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
> >  {
> > -     if (irq != IRQ_VS_SOFT &&
> > +     /*
> > +      * We only allow VS-mode software, timer, and external
> > +      * interrupts when irq is one of the local interrupts
> > +      * defined by the RISC-V privileged specification.
> > +      */
> > +     if (irq < IRQ_LOCAL_MAX &&
> > +         irq != IRQ_VS_SOFT &&
> >           irq != IRQ_VS_TIMER &&
> >           irq != IRQ_VS_EXT)
> >               return -EINVAL;
> >
> > -     set_bit(irq, &vcpu->arch.irqs_pending);
> > +     set_bit(irq, vcpu->arch.irqs_pending);
> >       smp_mb__before_atomic();
> > -     set_bit(irq, &vcpu->arch.irqs_pending_mask);
> > +     set_bit(irq, vcpu->arch.irqs_pending_mask);
> >
> >       kvm_vcpu_kick(vcpu);
> >
> > @@ -859,25 +866,33 @@ int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
> >
> >  int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
> >  {
> > -     if (irq != IRQ_VS_SOFT &&
> > +     /*
> > +      * We only allow VS-mode software, timer, and external
> > +      * interrupts when irq is one of the local interrupts
> > +      * defined by the RISC-V privileged specification.
> > +      */
> > +     if (irq < IRQ_LOCAL_MAX &&
> > +         irq != IRQ_VS_SOFT &&
> >           irq != IRQ_VS_TIMER &&
> >           irq != IRQ_VS_EXT)
> >               return -EINVAL;
> >
> > -     clear_bit(irq, &vcpu->arch.irqs_pending);
> > +     clear_bit(irq, vcpu->arch.irqs_pending);
> >       smp_mb__before_atomic();
> > -     set_bit(irq, &vcpu->arch.irqs_pending_mask);
> > +     set_bit(irq, vcpu->arch.irqs_pending_mask);
> >
> >       return 0;
> >  }
> >
> > -bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, unsigned long mask)
> > +bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
> >  {
> >       unsigned long ie;
> >
> >       ie = ((vcpu->arch.guest_csr.vsie & VSIP_VALID_MASK)
> > -             << VSIP_TO_HVIP_SHIFT) & mask;
> > -     if (READ_ONCE(vcpu->arch.irqs_pending) & ie)
> > +             << VSIP_TO_HVIP_SHIFT) & (unsigned long)mask;
> > +     ie |= vcpu->arch.guest_csr.vsie & ~IRQ_LOCAL_MASK &
> > +             (unsigned long)mask;
> > +     if (READ_ONCE(vcpu->arch.irqs_pending[0]) & ie)
> >               return true;
> >
> >       /* Check AIA high interrupts */
> > --
> > 2.34.1
> >
>
> Thanks,
> drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs
  2023-04-04 11:58           ` Conor Dooley
@ 2023-04-05  9:28             ` Conor Dooley
  0 siblings, 0 replies; 30+ messages in thread
From: Conor Dooley @ 2023-04-05  9:28 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Anup Patel, Anup Patel, Paolo Bonzini, Atish Patra,
	Palmer Dabbelt, Paul Walmsley, kvm, kvm-riscv, linux-riscv,
	linux-kernel, bjorn

On Tue, Apr 04, 2023 at 12:58:41PM +0100, Conor Dooley wrote:
> On Tue, Apr 04, 2023 at 01:52:43PM +0200, Andrew Jones wrote:
> > On Mon, Apr 03, 2023 at 02:23:01PM +0200, Andrew Jones wrote:
> 
> > > It's probably best if neither depends on the other, since they're
> > > independent, but otherwise the order doesn't matter. It'd be nice to call
> > > the order out in the cover letter to give patchwork a chance at automatic
> > > build testing, though. To call it out, I believe adding
> > > 
> > > Based-on: 20230401112730.2105240-1-apatel@ventanamicro.com
> > > 
> > > to the cover letter should work.
> > 
> > I also just noticed that this is based on "RISC-V: KVM: Add ONE_REG
> > interface to enable/disable SBI extensions"[1], and it needs to be
> > in order to pick up the KVM_REG_RISCV_SUBTYPE_MASK and
> > KVM_REG_RISCV_SUBTYPE_SHIFT defines. It'd be good to call that
> > patch out with Based-on.
> > 
> > [1]: 20230331174542.2067560-2-apatel@ventanamicro.com
> 
> I've been waiting for a review on that for a while... It's been 3
> weeks, so just gonna merge it and see what breaks!

I did in fact break some stuff, but the output was no worse than if the
dependencies had not been specified...
I've fixed it (I think!) and told it to ignore the old state, so it'll
re-run against the stuff it missed.

Cheers,
Conor.

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2023-04-05  9:29 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-03  9:33 [PATCH v3 0/8] RISC-V KVM virtualize AIA CSRs Anup Patel
2023-04-03  9:33 ` [PATCH v3 1/8] RISC-V: Add AIA related CSR defines Anup Patel
2023-04-03  9:33 ` [PATCH v3 2/8] RISC-V: Detect AIA CSRs from ISA string Anup Patel
2023-04-03  9:39   ` Conor Dooley
2023-04-03 12:05     ` Anup Patel
2023-04-03  9:33 ` [PATCH v3 3/8] RISC-V: KVM: Drop the _MASK suffix from hgatp.VMID mask defines Anup Patel
2023-04-03  9:33 ` [PATCH v3 4/8] RISC-V: KVM: Initial skeletal support for AIA Anup Patel
2023-04-03 12:00   ` Andrew Jones
2023-04-03 23:49   ` Atish Patra
2023-04-04  3:22     ` Anup Patel
2023-04-03  9:33 ` [PATCH v3 5/8] RISC-V: KVM: Implement subtype for CSR ONE_REG interface Anup Patel
2023-04-03 12:18   ` Andrew Jones
2023-04-04  0:54   ` Atish Patra
2023-04-03  9:33 ` [PATCH v3 6/8] RISC-V: KVM: Add ONE_REG interface for AIA CSRs Anup Patel
2023-04-03 11:31   ` Andrew Jones
2023-04-03 12:04     ` Anup Patel
2023-04-03 12:23       ` Andrew Jones
2023-04-04 11:52         ` Andrew Jones
2023-04-04 11:58           ` Conor Dooley
2023-04-05  9:28             ` Conor Dooley
2023-04-04 12:03           ` Andrew Jones
2023-04-03 12:27   ` Andrew Jones
2023-04-04  0:55   ` Atish Patra
2023-04-03  9:33 ` [PATCH v3 7/8] RISC-V: KVM: Virtualize per-HART " Anup Patel
2023-04-03 16:37   ` Andrew Jones
2023-04-04 13:31     ` Anup Patel
2023-04-04 13:54     ` Anup Patel
2023-04-03  9:33 ` [PATCH v3 8/8] RISC-V: KVM: Implement guest external interrupt line management Anup Patel
2023-04-04 12:45   ` Andrew Jones
2023-04-04 13:52     ` Anup Patel
