* [PATCH v3 0/5] Add SBI v0.2 support for KVM
From: Atish Patra @ 2021-10-08  3:20 UTC (permalink / raw)
  To: linux-kernel
  Cc: Atish Patra, Paolo Bonzini, Anup Patel, Kefeng Wang, kvm-riscv,
	kvm, linux-riscv, Palmer Dabbelt, Paul Walmsley, Vincent Chen

The Supervisor Binary Interface (SBI) specification[1] now defines a
base extension that provides extensibility for adding future extensions
while maintaining backward compatibility with previous versions.
The new version is defined as v0.2 and the older version is marked as v0.1.

This series adds the following features to RISC-V KVM in Linux:
1. Support for SBI v0.2 in KVM
2. The SBI Hart State Management (HSM) extension in KVM
3. Ordered booting of guest vcpus by the guest Linux kernel

This series is based on the base KVM series, which is already part of kvm-next[2].

The guest kernel also needs to support SBI v0.2 and the HSM extension to
boot multiple vcpus; the Linux kernel supports both starting with v5.7.
Without them, the guest can only boot a single vcpu.
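
For reference, a minimal sketch of the SBI calling convention the guest uses
(extension ID in a7, function ID in a6, arguments in a0-a5, error/value back
in a0/a1), plus the HSM HART_START call that lets the guest bring up its
secondary vcpus. This helper is illustrative only and is not part of the
series; the constants follow the SBI specification.

struct sbiret { long error; long value; };

static struct sbiret sbi_ecall(unsigned long eid, unsigned long fid,
			       unsigned long a0, unsigned long a1,
			       unsigned long a2)
{
	register unsigned long r_a0 asm("a0") = a0;
	register unsigned long r_a1 asm("a1") = a1;
	register unsigned long r_a2 asm("a2") = a2;
	register unsigned long r_a6 asm("a6") = fid;
	register unsigned long r_a7 asm("a7") = eid;
	struct sbiret ret;

	asm volatile("ecall"
		     : "+r" (r_a0), "+r" (r_a1)
		     : "r" (r_a2), "r" (r_a6), "r" (r_a7)
		     : "memory");
	ret.error = r_a0;
	ret.value = r_a1;
	return ret;
}

/* Bring up a secondary hart via the HSM extension (EID 0x48534D, FID 0). */
static long sbi_hsm_hart_start(unsigned long hartid, unsigned long start_addr,
			       unsigned long opaque)
{
	return sbi_ecall(0x48534D, 0, hartid, start_addr, opaque).error;
}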

Changes from v2->v3:
1. Rebased on the latest merged KVM series.
2. Dropped the reset extension patch because the reset extension is not yet
merged in the kernel. However, my tree[3] still contains it in case anybody
wants to test it.

Changes from v1->v2:
1. Sent patch 1 separately as it can be merged independently.
2. Added the reset extension functionality.

Tested on QEMU and on a Rocket core on FPGA.

[1] https://github.com/riscv/riscv-sbi-doc/blob/master/riscv-sbi.adoc
[2] https://git.kernel.org/pub/scm/virt/kvm/kvm.git/log/?h=next
[3] https://github.com/atishp04/linux/tree/kvm_next_sbi_v02_reset
[4] https://github.com/atishp04/linux/tree/kvm_next_sbi_v02

Atish Patra (5):
RISC-V: Mark the existing SBI v0.1 implementation as legacy
RISC-V: Reorganize SBI code by moving legacy SBI to its own file
RISC-V: Add SBI v0.2 base extension
RISC-V: Add v0.1 replacement SBI extensions defined in v02
RISC-V: Add SBI HSM extension in KVM

arch/riscv/include/asm/kvm_vcpu_sbi.h |  33 ++++
arch/riscv/include/asm/sbi.h          |   9 ++
arch/riscv/kvm/Makefile               |   4 +
arch/riscv/kvm/vcpu.c                 |  19 +++
arch/riscv/kvm/vcpu_sbi.c             | 208 ++++++++++++--------------
arch/riscv/kvm/vcpu_sbi_base.c        |  73 +++++++++
arch/riscv/kvm/vcpu_sbi_hsm.c         | 109 ++++++++++++++
arch/riscv/kvm/vcpu_sbi_legacy.c      | 129 ++++++++++++++++
arch/riscv/kvm/vcpu_sbi_replace.c     | 136 +++++++++++++++++
9 files changed, 608 insertions(+), 112 deletions(-)
create mode 100644 arch/riscv/include/asm/kvm_vcpu_sbi.h
create mode 100644 arch/riscv/kvm/vcpu_sbi_base.c
create mode 100644 arch/riscv/kvm/vcpu_sbi_hsm.c
create mode 100644 arch/riscv/kvm/vcpu_sbi_legacy.c
create mode 100644 arch/riscv/kvm/vcpu_sbi_replace.c

--
2.31.1



* [PATCH v3 1/5] RISC-V: Mark the existing SBI v0.1 implementation as legacy
From: Atish Patra @ 2021-10-08  3:20 UTC (permalink / raw)
  To: linux-kernel
  Cc: Atish Patra, Paolo Bonzini, Anup Patel, Kefeng Wang, kvm-riscv,
	kvm, linux-riscv, Palmer Dabbelt, Paul Walmsley, Vincent Chen

The existing SBI implementation follows the v0.1 (legacy) specification.
The latest specification, known as v0.2, allows more scalability and
performance improvements.

Rename the existing implementation to legacy and provide a way to add
future extensions.
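
As a hedged illustration of the resulting interface (the extension ID and
handler below are made up, not part of this patch), a future extension would
plug in roughly like this:

static int kvm_sbi_ext_foo_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
				   unsigned long *out_val,
				   struct kvm_cpu_trap *utrap, bool *exit)
{
	/* Return Linux error codes; the caller maps them to SBI error codes. */
	*out_val = 0;
	return 0;
}

/* Hypothetical extension ID; real IDs come from the SBI specification. */
const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_foo = {
	.extid_start = 0x0A000000,
	.extid_end = 0x0A000000,
	.handler = kvm_sbi_ext_foo_handler,
};

The new entry would then be added to the sbi_ext[] array so that
kvm_vcpu_sbi_find_ext() can dispatch to it.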

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h |  29 +++++
 arch/riscv/kvm/vcpu_sbi.c             | 149 ++++++++++++++++++++------
 2 files changed, 148 insertions(+), 30 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_vcpu_sbi.h

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
new file mode 100644
index 000000000000..1a4cb0db2d0b
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/**
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *     Atish Patra <atish.patra@wdc.com>
+ */
+
+#ifndef __RISCV_KVM_VCPU_SBI_H__
+#define __RISCV_KVM_VCPU_SBI_H__
+
+#define KVM_SBI_VERSION_MAJOR 0
+#define KVM_SBI_VERSION_MINOR 2
+
+struct kvm_vcpu_sbi_extension {
+	unsigned long extid_start;
+	unsigned long extid_end;
+	/**
+	 * SBI extension handler. It can be defined for a given extension or a
+	 * group of extensions, but it should always return Linux error codes
+	 * rather than SBI-specific error codes.
+	 */
+	int (*handler)(struct kvm_vcpu *vcpu, struct kvm_run *run,
+		       unsigned long *out_val, struct kvm_cpu_trap *utrap,
+		       bool *exit);
+};
+
+const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid);
+#endif /* __RISCV_KVM_VCPU_SBI_H__ */
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index ebdcdbade9c6..8c168d305763 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -12,9 +12,25 @@
 #include <asm/csr.h>
 #include <asm/sbi.h>
 #include <asm/kvm_vcpu_timer.h>
+#include <asm/kvm_vcpu_sbi.h>
 
-#define SBI_VERSION_MAJOR			0
-#define SBI_VERSION_MINOR			1
+static int kvm_linux_err_map_sbi(int err)
+{
+	switch (err) {
+	case 0:
+		return SBI_SUCCESS;
+	case -EPERM:
+		return SBI_ERR_DENIED;
+	case -EINVAL:
+		return SBI_ERR_INVALID_PARAM;
+	case -EFAULT:
+		return SBI_ERR_INVALID_ADDRESS;
+	case -EOPNOTSUPP:
+		return SBI_ERR_NOT_SUPPORTED;
+	default:
+		return SBI_ERR_FAILURE;
+	};
+}
 
 static void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu,
 				       struct kvm_run *run)
@@ -72,16 +88,17 @@ static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu,
 	run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
 }
 
-int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
+static int kvm_sbi_ext_legacy_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				      unsigned long *out_val,
+				      struct kvm_cpu_trap *utrap,
+				      bool *exit)
 {
 	ulong hmask;
-	int i, ret = 1;
+	int i, ret = 0;
 	u64 next_cycle;
 	struct kvm_vcpu *rvcpu;
-	bool next_sepc = true;
 	struct cpumask cm, hm;
 	struct kvm *kvm = vcpu->kvm;
-	struct kvm_cpu_trap utrap = { 0 };
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 
 	if (!cp)
@@ -95,8 +112,7 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * handled in kernel so we forward these to user-space
 		 */
 		kvm_riscv_vcpu_sbi_forward(vcpu, run);
-		next_sepc = false;
-		ret = 0;
+		*exit = true;
 		break;
 	case SBI_EXT_0_1_SET_TIMER:
 #if __riscv_xlen == 32
@@ -104,47 +120,42 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
 #else
 		next_cycle = (u64)cp->a0;
 #endif
-		kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
+		ret = kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
 		break;
 	case SBI_EXT_0_1_CLEAR_IPI:
-		kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT);
+		ret = kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT);
 		break;
 	case SBI_EXT_0_1_SEND_IPI:
 		if (cp->a0)
 			hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
-							   &utrap);
+							   utrap);
 		else
 			hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
-		if (utrap.scause) {
-			utrap.sepc = cp->sepc;
-			kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
-			next_sepc = false;
+		if (utrap->scause)
 			break;
-		}
+
 		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
 			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
-			kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT);
+			ret = kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT);
+			if (ret < 0)
+				break;
 		}
 		break;
 	case SBI_EXT_0_1_SHUTDOWN:
 		kvm_sbi_system_shutdown(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN);
-		next_sepc = false;
-		ret = 0;
+		*exit = true;
 		break;
 	case SBI_EXT_0_1_REMOTE_FENCE_I:
 	case SBI_EXT_0_1_REMOTE_SFENCE_VMA:
 	case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID:
 		if (cp->a0)
 			hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
-							   &utrap);
+							   utrap);
 		else
 			hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
-		if (utrap.scause) {
-			utrap.sepc = cp->sepc;
-			kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
-			next_sepc = false;
+		if (utrap->scause)
 			break;
-		}
+
 		cpumask_clear(&cm);
 		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
 			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
@@ -154,22 +165,100 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		}
 		riscv_cpuid_to_hartid_mask(&cm, &hm);
 		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
-			sbi_remote_fence_i(cpumask_bits(&hm));
+			ret = sbi_remote_fence_i(cpumask_bits(&hm));
 		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA)
-			sbi_remote_hfence_vvma(cpumask_bits(&hm),
+			ret = sbi_remote_hfence_vvma(cpumask_bits(&hm),
 						cp->a1, cp->a2);
 		else
-			sbi_remote_hfence_vvma_asid(cpumask_bits(&hm),
+			ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm),
 						cp->a1, cp->a2, cp->a3);
 		break;
 	default:
-		/* Return error for unsupported SBI calls */
-		cp->a0 = SBI_ERR_NOT_SUPPORTED;
+		ret = -EINVAL;
 		break;
 	};
 
+	return ret;
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_legacy = {
+	.extid_start = SBI_EXT_0_1_SET_TIMER,
+	.extid_end = SBI_EXT_0_1_SHUTDOWN,
+	.handler = kvm_sbi_ext_legacy_handler,
+};
+
+static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
+	&vcpu_sbi_ext_legacy,
+};
+
+const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid)
+{
+	int i = 0;
+
+	for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) {
+		if (sbi_ext[i]->extid_start <= extid &&
+		    sbi_ext[i]->extid_end >= extid)
+			return sbi_ext[i];
+	}
+
+	return NULL;
+}
+
+int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	int ret = 1;
+	bool next_sepc = true;
+	bool userspace_exit = false;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	const struct kvm_vcpu_sbi_extension *sbi_ext;
+	struct kvm_cpu_trap utrap = { 0 };
+	unsigned long out_val = 0;
+	bool ext_is_v01 = false;
+
+	if (!cp)
+		return -EINVAL;
+
+	sbi_ext = kvm_vcpu_sbi_find_ext(cp->a7);
+	if (sbi_ext && sbi_ext->handler) {
+		if (cp->a7 >= SBI_EXT_0_1_SET_TIMER &&
+		    cp->a7 <= SBI_EXT_0_1_SHUTDOWN)
+			ext_is_v01 = true;
+		ret = sbi_ext->handler(vcpu, run, &out_val, &utrap, &userspace_exit);
+	} else {
+		/* Return error for unsupported SBI calls */
+		cp->a0 = SBI_ERR_NOT_SUPPORTED;
+		goto ecall_done;
+	}
+
+	/* Handle special error cases i.e trap, exit or userspace forward */
+	if (utrap.scause) {
+		/* No need to increment sepc or exit ioctl loop */
+		ret = 1;
+		utrap.sepc = cp->sepc;
+		kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
+		next_sepc = false;
+		goto ecall_done;
+	}
+
+	/* Exit the ioctl loop or propagate the error code to the guest */
+	if (userspace_exit) {
+		next_sepc = false;
+		ret = 0;
+	} else {
+		/*
+		 * The SBI extension handler always returns a Linux error code.
+		 * Convert it to the SBI-specific error code that can be
+		 * propagated to the SBI caller.
+		 */
+		ret = kvm_linux_err_map_sbi(ret);
+		cp->a0 = ret;
+		ret = 1;
+	}
+ecall_done:
 	if (next_sepc)
 		cp->sepc += 4;
+	if (!ext_is_v01)
+		cp->a1 = out_val;
 
 	return ret;
 }
-- 
2.31.1



* [PATCH v3 2/5] RISC-V: Reorganize SBI code by moving legacy SBI to its own file
From: Atish Patra @ 2021-10-08  3:20 UTC (permalink / raw)
  To: linux-kernel
  Cc: Atish Patra, Paolo Bonzini, Anup Patel, Kefeng Wang, kvm-riscv,
	kvm, linux-riscv, Palmer Dabbelt, Paul Walmsley, Vincent Chen

With SBI v0.2, more SBI extensions may be added in the future, so it makes
sense to group related extensions into separate files. The guest kernel will
choose the appropriate SBI version dynamically.

Move the existing implementation to a separate file so that it can be
removed in the future without much conflict.
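
As a hedged sketch of what choosing the SBI version dynamically looks like on
the guest side (assuming an sbi_ecall() helper like the one sketched in the
cover letter; the constants follow the SBI specification, not this patch):

/* Base extension (EID 0x10), FID 0 = GET_SPEC_VERSION. */
static bool sbi_spec_is_0_2_or_higher(void)
{
	struct sbiret ret = sbi_ecall(0x10, 0, 0, 0, 0);
	unsigned long major, minor;

	if (ret.error)
		return false;	/* pre-v0.2 firmware has no base extension */

	major = (ret.value >> 24) & 0x7f;
	minor = ret.value & 0xffffff;
	return major > 0 || minor >= 2;
}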

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h |   2 +
 arch/riscv/kvm/Makefile               |   1 +
 arch/riscv/kvm/vcpu_sbi.c             | 153 +++-----------------------
 arch/riscv/kvm/vcpu_sbi_legacy.c      | 129 ++++++++++++++++++++++
 4 files changed, 150 insertions(+), 135 deletions(-)
 create mode 100644 arch/riscv/kvm/vcpu_sbi_legacy.c

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
index 1a4cb0db2d0b..704151969ceb 100644
--- a/arch/riscv/include/asm/kvm_vcpu_sbi.h
+++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
@@ -25,5 +25,7 @@ struct kvm_vcpu_sbi_extension {
 		       bool *exit);
 };
 
+void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run);
 const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid);
+
 #endif /* __RISCV_KVM_VCPU_SBI_H__ */
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 3226696b8340..53cbecc44c4c 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -22,4 +22,5 @@ kvm-y += vcpu.o
 kvm-y += vcpu_exit.o
 kvm-y += vcpu_switch.o
 kvm-y += vcpu_sbi.o
+kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_legacy.o
 kvm-y += vcpu_timer.o
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index 8c168d305763..e51de3e4526a 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -9,9 +9,7 @@
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/kvm_host.h>
-#include <asm/csr.h>
 #include <asm/sbi.h>
-#include <asm/kvm_vcpu_timer.h>
 #include <asm/kvm_vcpu_sbi.h>
 
 static int kvm_linux_err_map_sbi(int err)
@@ -32,8 +30,21 @@ static int kvm_linux_err_map_sbi(int err)
 	};
 }
 
-static void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu,
-				       struct kvm_run *run)
+#ifdef CONFIG_RISCV_SBI_V01
+extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_legacy;
+#else
+static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_legacy = {
+	.extid_start = -1UL,
+	.extid_end = -1UL,
+	.handler = NULL,
+};
+#endif
+
+static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
+	&vcpu_sbi_ext_legacy,
+};
+
+void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 
@@ -71,126 +82,6 @@ int kvm_riscv_vcpu_sbi_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 0;
 }
 
-#ifdef CONFIG_RISCV_SBI_V01
-
-static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu,
-				    struct kvm_run *run, u32 type)
-{
-	int i;
-	struct kvm_vcpu *tmp;
-
-	kvm_for_each_vcpu(i, tmp, vcpu->kvm)
-		tmp->arch.power_off = true;
-	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
-
-	memset(&run->system_event, 0, sizeof(run->system_event));
-	run->system_event.type = type;
-	run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
-}
-
-static int kvm_sbi_ext_legacy_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
-				      unsigned long *out_val,
-				      struct kvm_cpu_trap *utrap,
-				      bool *exit)
-{
-	ulong hmask;
-	int i, ret = 0;
-	u64 next_cycle;
-	struct kvm_vcpu *rvcpu;
-	struct cpumask cm, hm;
-	struct kvm *kvm = vcpu->kvm;
-	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
-
-	if (!cp)
-		return -EINVAL;
-
-	switch (cp->a7) {
-	case SBI_EXT_0_1_CONSOLE_GETCHAR:
-	case SBI_EXT_0_1_CONSOLE_PUTCHAR:
-		/*
-		 * The CONSOLE_GETCHAR/CONSOLE_PUTCHAR SBI calls cannot be
-		 * handled in kernel so we forward these to user-space
-		 */
-		kvm_riscv_vcpu_sbi_forward(vcpu, run);
-		*exit = true;
-		break;
-	case SBI_EXT_0_1_SET_TIMER:
-#if __riscv_xlen == 32
-		next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0;
-#else
-		next_cycle = (u64)cp->a0;
-#endif
-		ret = kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
-		break;
-	case SBI_EXT_0_1_CLEAR_IPI:
-		ret = kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT);
-		break;
-	case SBI_EXT_0_1_SEND_IPI:
-		if (cp->a0)
-			hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
-							   utrap);
-		else
-			hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
-		if (utrap->scause)
-			break;
-
-		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
-			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
-			ret = kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT);
-			if (ret < 0)
-				break;
-		}
-		break;
-	case SBI_EXT_0_1_SHUTDOWN:
-		kvm_sbi_system_shutdown(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN);
-		*exit = true;
-		break;
-	case SBI_EXT_0_1_REMOTE_FENCE_I:
-	case SBI_EXT_0_1_REMOTE_SFENCE_VMA:
-	case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID:
-		if (cp->a0)
-			hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
-							   utrap);
-		else
-			hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
-		if (utrap->scause)
-			break;
-
-		cpumask_clear(&cm);
-		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
-			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
-			if (rvcpu->cpu < 0)
-				continue;
-			cpumask_set_cpu(rvcpu->cpu, &cm);
-		}
-		riscv_cpuid_to_hartid_mask(&cm, &hm);
-		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
-			ret = sbi_remote_fence_i(cpumask_bits(&hm));
-		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA)
-			ret = sbi_remote_hfence_vvma(cpumask_bits(&hm),
-						cp->a1, cp->a2);
-		else
-			ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm),
-						cp->a1, cp->a2, cp->a3);
-		break;
-	default:
-		ret = -EINVAL;
-		break;
-	};
-
-	return ret;
-}
-
-const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_legacy = {
-	.extid_start = SBI_EXT_0_1_SET_TIMER,
-	.extid_end = SBI_EXT_0_1_SHUTDOWN,
-	.handler = kvm_sbi_ext_legacy_handler,
-};
-
-static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
-	&vcpu_sbi_ext_legacy,
-};
-
 const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid)
 {
 	int i = 0;
@@ -220,9 +111,11 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 	sbi_ext = kvm_vcpu_sbi_find_ext(cp->a7);
 	if (sbi_ext && sbi_ext->handler) {
-		if (cp->a7 >= SBI_EXT_0_1_SET_TIMER &&
+#ifdef CONFIG_RISCV_SBI_V01
+		    if (cp->a7 >= SBI_EXT_0_1_SET_TIMER &&
 		    cp->a7 <= SBI_EXT_0_1_SHUTDOWN)
 			ext_is_v01 = true;
+#endif
 		ret = sbi_ext->handler(vcpu, run, &out_val, &utrap, &userspace_exit);
 	} else {
 		/* Return error for unsupported SBI calls */
@@ -262,13 +155,3 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 	return ret;
 }
-
-#else
-
-int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
-	kvm_riscv_vcpu_sbi_forward(vcpu, run);
-	return 0;
-}
-
-#endif
diff --git a/arch/riscv/kvm/vcpu_sbi_legacy.c b/arch/riscv/kvm/vcpu_sbi_legacy.c
new file mode 100644
index 000000000000..fb386d227232
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_sbi_legacy.c
@@ -0,0 +1,129 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *     Atish Patra <atish.patra@wdc.com>
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <asm/csr.h>
+#include <asm/sbi.h>
+#include <asm/kvm_vcpu_timer.h>
+#include <asm/kvm_vcpu_sbi.h>
+
+static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu,
+				    struct kvm_run *run, u32 type)
+{
+	int i;
+	struct kvm_vcpu *tmp;
+
+	kvm_for_each_vcpu(i, tmp, vcpu->kvm)
+		tmp->arch.power_off = true;
+	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
+
+	memset(&run->system_event, 0, sizeof(run->system_event));
+	run->system_event.type = type;
+	run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
+}
+
+static int kvm_sbi_ext_legacy_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				      unsigned long *out_val,
+				      struct kvm_cpu_trap *utrap,
+				      bool *exit)
+{
+	ulong hmask;
+	int i, ret = 0;
+	u64 next_cycle;
+	struct kvm_vcpu *rvcpu;
+	struct cpumask cm, hm;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+
+	if (!cp)
+		return -EINVAL;
+
+	switch (cp->a7) {
+	case SBI_EXT_0_1_CONSOLE_GETCHAR:
+	case SBI_EXT_0_1_CONSOLE_PUTCHAR:
+		/*
+		 * The CONSOLE_GETCHAR/CONSOLE_PUTCHAR SBI calls cannot be
+		 * handled in kernel so we forward these to user-space
+		 */
+		kvm_riscv_vcpu_sbi_forward(vcpu, run);
+		*exit = true;
+		break;
+	case SBI_EXT_0_1_SET_TIMER:
+#if __riscv_xlen == 32
+		next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0;
+#else
+		next_cycle = (u64)cp->a0;
+#endif
+		ret = kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
+		break;
+	case SBI_EXT_0_1_CLEAR_IPI:
+		ret = kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT);
+		break;
+	case SBI_EXT_0_1_SEND_IPI:
+		if (cp->a0)
+			hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
+							   utrap);
+		else
+			hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
+		if (utrap->scause)
+			break;
+
+		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
+			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
+			ret = kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT);
+			if (ret < 0)
+				break;
+		}
+		break;
+	case SBI_EXT_0_1_SHUTDOWN:
+		kvm_sbi_system_shutdown(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN);
+		*exit = true;
+		break;
+	case SBI_EXT_0_1_REMOTE_FENCE_I:
+	case SBI_EXT_0_1_REMOTE_SFENCE_VMA:
+	case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID:
+		if (cp->a0)
+			hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
+							   utrap);
+		else
+			hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
+		if (utrap->scause)
+			break;
+
+		cpumask_clear(&cm);
+		for_each_set_bit(i, &hmask, BITS_PER_LONG) {
+			rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
+			if (rvcpu->cpu < 0)
+				continue;
+			cpumask_set_cpu(rvcpu->cpu, &cm);
+		}
+		riscv_cpuid_to_hartid_mask(&cm, &hm);
+		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
+			ret = sbi_remote_fence_i(cpumask_bits(&hm));
+		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA)
+			ret = sbi_remote_hfence_vvma(cpumask_bits(&hm),
+						cp->a1, cp->a2);
+		else
+			ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm),
+						cp->a1, cp->a2, cp->a3);
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	};
+
+	return ret;
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_legacy = {
+	.extid_start = SBI_EXT_0_1_SET_TIMER,
+	.extid_end = SBI_EXT_0_1_SHUTDOWN,
+	.handler = kvm_sbi_ext_legacy_handler,
+};
-- 
2.31.1



* [PATCH v3 3/5] RISC-V: Add SBI v0.2 base extension
From: Atish Patra @ 2021-10-08  3:20 UTC (permalink / raw)
  To: linux-kernel
  Cc: Atish Patra, Paolo Bonzini, Anup Patel, Kefeng Wang, kvm-riscv,
	kvm, linux-riscv, Palmer Dabbelt, Paul Walmsley, Vincent Chen

The SBI v0.2 base extension is defined to allow backward compatibility and
probing of future extensions. It is also the only mandatory SBI extension
that every SBI implementation must provide.
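
A hedged guest-side sketch of what the probe facility enables (assuming the
sbi_ecall() helper sketched in the cover letter; FID 3 is PROBE_EXT per the
SBI specification):

/* Returns true if the SBI implementation (here, KVM) provides @extid. */
static bool sbi_probe_extension(unsigned long extid)
{
	struct sbiret ret = sbi_ecall(0x10, 3, extid, 0, 0);

	return !ret.error && ret.value != 0;
}

The guest would, for example, probe the HSM extension (0x48534D) before
attempting to start secondary harts.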

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/include/asm/kvm_vcpu_sbi.h |  2 +
 arch/riscv/include/asm/sbi.h          |  8 +++
 arch/riscv/kvm/Makefile               |  1 +
 arch/riscv/kvm/vcpu_sbi.c             |  3 +-
 arch/riscv/kvm/vcpu_sbi_base.c        | 73 +++++++++++++++++++++++++++
 5 files changed, 86 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/kvm/vcpu_sbi_base.c

diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
index 704151969ceb..76e4e17a3e00 100644
--- a/arch/riscv/include/asm/kvm_vcpu_sbi.h
+++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
@@ -9,6 +9,8 @@
 #ifndef __RISCV_KVM_VCPU_SBI_H__
 #define __RISCV_KVM_VCPU_SBI_H__
 
+#define KVM_SBI_IMPID 3
+
 #define KVM_SBI_VERSION_MAJOR 0
 #define KVM_SBI_VERSION_MINOR 2
 
diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 289621da4a2a..20e049857e98 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -28,6 +28,14 @@ enum sbi_ext_id {
 	SBI_EXT_RFENCE = 0x52464E43,
 	SBI_EXT_HSM = 0x48534D,
 	SBI_EXT_SRST = 0x53525354,
+
+	/* Experimental extensions must lie within this range */
+	SBI_EXT_EXPERIMENTAL_START = 0x08000000,
+	SBI_EXT_EXPERIMENTAL_END = 0x08FFFFFF,
+
+	/* Vendor extensions must lie within this range */
+	SBI_EXT_VENDOR_START = 0x09000000,
+	SBI_EXT_VENDOR_END = 0x09FFFFFF,
 };
 
 enum sbi_ext_base_fid {
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 53cbecc44c4c..aaf02efafc0f 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -23,4 +23,5 @@ kvm-y += vcpu_exit.o
 kvm-y += vcpu_switch.o
 kvm-y += vcpu_sbi.o
 kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_legacy.o
+kvm-y += vcpu_sbi_base.o
 kvm-y += vcpu_timer.o
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index e51de3e4526a..5533ffc25ed0 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -39,9 +39,10 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_legacy = {
 	.handler = NULL,
 };
 #endif
-
+extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base;
 static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
 	&vcpu_sbi_ext_legacy,
+	&vcpu_sbi_ext_base,
 };
 
 void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run)
diff --git a/arch/riscv/kvm/vcpu_sbi_base.c b/arch/riscv/kvm/vcpu_sbi_base.c
new file mode 100644
index 000000000000..1aeda3e10e7c
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_sbi_base.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *     Atish Patra <atish.patra@wdc.com>
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <asm/csr.h>
+#include <asm/sbi.h>
+#include <asm/kvm_vcpu_timer.h>
+#include <asm/kvm_vcpu_sbi.h>
+
+static int kvm_sbi_ext_base_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				    unsigned long *out_val,
+				    struct kvm_cpu_trap *trap, bool *exit)
+{
+	int ret = 0;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	struct sbiret ecall_ret;
+
+	if (!cp)
+		return -EINVAL;
+
+	switch (cp->a6) {
+	case SBI_EXT_BASE_GET_SPEC_VERSION:
+		*out_val = (KVM_SBI_VERSION_MAJOR <<
+			    SBI_SPEC_VERSION_MAJOR_SHIFT) |
+			    KVM_SBI_VERSION_MINOR;
+		break;
+	case SBI_EXT_BASE_GET_IMP_ID:
+		*out_val = KVM_SBI_IMPID;
+		break;
+	case SBI_EXT_BASE_GET_IMP_VERSION:
+		*out_val = 0;
+		break;
+	case SBI_EXT_BASE_PROBE_EXT:
+		*out_val = kvm_vcpu_sbi_find_ext(cp->a0) ? 1 : 0;
+		if ((!*out_val) &&
+		    ((cp->a0 >= SBI_EXT_EXPERIMENTAL_START &&
+		     cp->a0 <= SBI_EXT_EXPERIMENTAL_END) ||
+		    ((cp->a0 >= SBI_EXT_VENDOR_START &&
+		     cp->a0 <= SBI_EXT_VENDOR_END)))) {
+			/* For experimental/vendor extensions, forward to userspace */
+			kvm_riscv_vcpu_sbi_forward(vcpu, run);
+			*exit = true;
+		}
+		break;
+	case SBI_EXT_BASE_GET_MVENDORID:
+	case SBI_EXT_BASE_GET_MARCHID:
+	case SBI_EXT_BASE_GET_MIMPID:
+		ecall_ret = sbi_ecall(SBI_EXT_BASE, cp->a6, 0, 0, 0, 0, 0, 0);
+		if (!ecall_ret.error)
+			*out_val = ecall_ret.value;
+		/*TODO: We are unnecessarily converting the error twice */
+		ret = sbi_err_map_linux_errno(ecall_ret.error);
+		break;
+	default:
+		ret = -EOPNOTSUPP;
+		break;
+	}
+
+	return ret;
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base = {
+	.extid_start = SBI_EXT_BASE,
+	.extid_end = SBI_EXT_BASE,
+	.handler = kvm_sbi_ext_base_handler,
+};
-- 
2.31.1



* [PATCH v3 4/5] RISC-V: Add v0.1 replacement SBI extensions defined in v02
From: Atish Patra @ 2021-10-08  3:20 UTC (permalink / raw)
  To: linux-kernel
  Cc: Atish Patra, Paolo Bonzini, Anup Patel, Kefeng Wang, kvm-riscv,
	kvm, linux-riscv, Palmer Dabbelt, Paul Walmsley, Vincent Chen

SBI v0.2 contains improved versions of the required v0.1 extensions, such
as remote fence, timer, and IPI.

This patch implements those replacement extensions.
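
One difference from v0.1 these handlers deal with: the v0.2 IPI and RFENCE
calls pass the hart mask by value as a (hart_mask, hart_mask_base) pair
rather than as a guest pointer to a mask. A hedged guest-side sketch (EID and
FID per the SBI specification; sbi_ecall() as sketched in the cover letter):

/*
 * IPI extension (EID 0x735049), FID 0 = SEND_IPI. Bit i of @hart_mask
 * targets hart @hart_mask_base + i; a base of -1UL means "all harts".
 */
static long sbi_v02_send_ipi(unsigned long hart_mask,
			     unsigned long hart_mask_base)
{
	return sbi_ecall(0x735049, 0, hart_mask, hart_mask_base, 0).error;
}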

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/kvm/Makefile           |   1 +
 arch/riscv/kvm/vcpu_sbi.c         |   7 ++
 arch/riscv/kvm/vcpu_sbi_replace.c | 136 ++++++++++++++++++++++++++++++
 3 files changed, 144 insertions(+)
 create mode 100644 arch/riscv/kvm/vcpu_sbi_replace.c

diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index aaf02efafc0f..272428459a99 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -24,4 +24,5 @@ kvm-y += vcpu_switch.o
 kvm-y += vcpu_sbi.o
 kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_legacy.o
 kvm-y += vcpu_sbi_base.o
+kvm-y += vcpu_sbi_replace.o
 kvm-y += vcpu_timer.o
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index 5533ffc25ed0..dadee5e61a46 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -40,9 +40,16 @@ static const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_legacy = {
 };
 #endif
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base;
+extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time;
+extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi;
+extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence;
+
 static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
 	&vcpu_sbi_ext_legacy,
 	&vcpu_sbi_ext_base,
+	&vcpu_sbi_ext_time,
+	&vcpu_sbi_ext_ipi,
+	&vcpu_sbi_ext_rfence,
 };
 
 void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run)
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
new file mode 100644
index 000000000000..a80fa7691b14
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *     Atish Patra <atish.patra@wdc.com>
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <asm/csr.h>
+#include <asm/sbi.h>
+#include <asm/kvm_vcpu_timer.h>
+#include <asm/kvm_vcpu_sbi.h>
+
+static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				    unsigned long *out_val,
+				    struct kvm_cpu_trap *utrap, bool *exit)
+{
+	int ret = 0;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	u64 next_cycle;
+
+	if (!cp || (cp->a6 != SBI_EXT_TIME_SET_TIMER))
+		return -EINVAL;
+
+#if __riscv_xlen == 32
+	next_cycle = ((u64)cp->a1 << 32) | (u64)cp->a0;
+#else
+	next_cycle = (u64)cp->a0;
+#endif
+	kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
+
+	return ret;
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time = {
+	.extid_start = SBI_EXT_TIME,
+	.extid_end = SBI_EXT_TIME,
+	.handler = kvm_sbi_ext_time_handler,
+};
+
+static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				   unsigned long *out_val,
+				   struct kvm_cpu_trap *utrap, bool *exit)
+{
+	int i, ret = 0;
+	struct kvm_vcpu *tmp;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	unsigned long hmask = cp->a0;
+	unsigned long hbase = cp->a1;
+
+	if (!cp || (cp->a6 != SBI_EXT_IPI_SEND_IPI))
+		return -EINVAL;
+
+	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+		if (hbase != -1UL) {
+			if (tmp->vcpu_id < hbase)
+				continue;
+			if (!(hmask & (1UL << (tmp->vcpu_id - hbase))))
+				continue;
+		}
+		ret = kvm_riscv_vcpu_set_interrupt(tmp, IRQ_VS_SOFT);
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi = {
+	.extid_start = SBI_EXT_IPI,
+	.extid_end = SBI_EXT_IPI,
+	.handler = kvm_sbi_ext_ipi_handler,
+};
+
+static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				      unsigned long *out_val,
+				      struct kvm_cpu_trap *utrap, bool *exit)
+{
+	int i, ret = 0;
+	struct cpumask cm, hm;
+	struct kvm_vcpu *tmp;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	unsigned long hmask = cp->a0;
+	unsigned long hbase = cp->a1;
+	unsigned long funcid = cp->a6;
+
+	if (!cp)
+		return -EINVAL;
+
+	cpumask_clear(&cm);
+	cpumask_clear(&hm);
+	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+		if (hbase != -1UL) {
+			if (tmp->vcpu_id < hbase)
+				continue;
+			if (!(hmask & (1UL << (tmp->vcpu_id - hbase))))
+				continue;
+		}
+		if (tmp->cpu < 0)
+			continue;
+		cpumask_set_cpu(tmp->cpu, &cm);
+	}
+
+	riscv_cpuid_to_hartid_mask(&cm, &hm);
+
+	switch (funcid) {
+	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
+		ret = sbi_remote_fence_i(cpumask_bits(&hm));
+		break;
+	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
+		ret = sbi_remote_hfence_vvma(cpumask_bits(&hm), cp->a2, cp->a3);
+		break;
+	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
+		ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm), cp->a2,
+						  cp->a3, cp->a4);
+		break;
+	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
+	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID:
+	case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA:
+	case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID:
+	/* TODO: implement for nested hypervisor case */
+	default:
+		ret = -EOPNOTSUPP;
+	}
+
+	return ret;
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence = {
+	.extid_start = SBI_EXT_RFENCE,
+	.extid_end = SBI_EXT_RFENCE,
+	.handler = kvm_sbi_ext_rfence_handler,
+};
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v3 5/5] RISC-V: Add SBI HSM extension in KVM
  2021-10-08  3:20 ` Atish Patra
@ 2021-10-08  3:20   ` Atish Patra
  -1 siblings, 0 replies; 28+ messages in thread
From: Atish Patra @ 2021-10-08  3:20 UTC (permalink / raw)
  To: linux-kernel
  Cc: Atish Patra, Paolo Bonzini, Anup Patel, Kefeng Wang, kvm-riscv,
	kvm, linux-riscv, Palmer Dabbelt, Paul Walmsley, Vincent Chen

The SBI HSM extension allows the OS to start/stop harts at any time. It
also allows ordered booting of harts instead of random booting.

Implement the SBI HSM extension and designate vcpu 0 as the boot vcpu.
All other non-zero vcpus must be brought up by the guest OS using the
HSM extension. If the guest OS doesn't implement the HSM extension, only
a single vcpu will be available to it.
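
For reference, the guest side of this is the existing Linux HSM boot
path; bringing up a secondary hart boils down to a call like the sketch
below (illustrative only, not part of this patch), which is what lands
in the new kvm_sbi_ext_hsm_handler():

	static int guest_hsm_hart_start(unsigned long hartid,
					unsigned long start_addr,
					unsigned long priv)
	{
		struct sbiret ret;

		ret = sbi_ecall(SBI_EXT_HSM, SBI_EXT_HSM_HART_START,
				hartid, start_addr, priv, 0, 0, 0);

		return ret.error ? sbi_err_map_linux_errno(ret.error) : 0;
	}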

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/include/asm/sbi.h  |   1 +
 arch/riscv/kvm/Makefile       |   1 +
 arch/riscv/kvm/vcpu.c         |  19 ++++++
 arch/riscv/kvm/vcpu_sbi.c     |   4 ++
 arch/riscv/kvm/vcpu_sbi_hsm.c | 109 ++++++++++++++++++++++++++++++++++
 5 files changed, 134 insertions(+)
 create mode 100644 arch/riscv/kvm/vcpu_sbi_hsm.c

diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
index 20e049857e98..710c1c1242a6 100644
--- a/arch/riscv/include/asm/sbi.h
+++ b/arch/riscv/include/asm/sbi.h
@@ -106,6 +106,7 @@ enum sbi_srst_reset_reason {
 #define SBI_ERR_INVALID_PARAM	-3
 #define SBI_ERR_DENIED		-4
 #define SBI_ERR_INVALID_ADDRESS	-5
+#define SBI_ERR_ALREADY_AVAILABLE -6
 
 extern unsigned long sbi_spec_version;
 struct sbiret {
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 272428459a99..fed2a69d1e18 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -25,4 +25,5 @@ kvm-y += vcpu_sbi.o
 kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_legacy.o
 kvm-y += vcpu_sbi_base.o
 kvm-y += vcpu_sbi_replace.o
+kvm-y += vcpu_sbi_hsm.o
 kvm-y += vcpu_timer.o
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index c44cabce7dd8..278b4d643e1b 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -133,6 +133,13 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 	struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
 	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
 	struct kvm_cpu_context *reset_cntx = &vcpu->arch.guest_reset_context;
+	bool loaded;
+
+	/* Disable preemption to avoid race with preempt notifiers */
+	preempt_disable();
+	loaded = (vcpu->cpu != -1);
+	if (loaded)
+		kvm_arch_vcpu_put(vcpu);
 
 	memcpy(csr, reset_csr, sizeof(*csr));
 
@@ -144,6 +151,11 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
 
 	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
 	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
+
+	/* Reset the guest CSRs for hotplug usecase */
+	if (loaded)
+		kvm_arch_vcpu_load(vcpu, smp_processor_id());
+	preempt_enable();
 }
 
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
@@ -180,6 +192,13 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 {
+	/**
+	 * vcpu with id 0 is the designated boot cpu.
+	 * Keep all vcpus with non-zero cpu id in power-off state so that they
+	 * can be brought online using the SBI HSM extension.
+	 */
+	if (vcpu->vcpu_idx != 0)
+		kvm_riscv_vcpu_power_off(vcpu);
 }
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index dadee5e61a46..db54ef21168b 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -25,6 +25,8 @@ static int kvm_linux_err_map_sbi(int err)
 		return SBI_ERR_INVALID_ADDRESS;
 	case -EOPNOTSUPP:
 		return SBI_ERR_NOT_SUPPORTED;
+	case -EALREADY:
+		return SBI_ERR_ALREADY_AVAILABLE;
 	default:
 		return SBI_ERR_FAILURE;
 	};
@@ -43,6 +45,7 @@ extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_base;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_time;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_ipi;
 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence;
+extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm;
 
 static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
 	&vcpu_sbi_ext_legacy,
@@ -50,6 +53,7 @@ static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
 	&vcpu_sbi_ext_time,
 	&vcpu_sbi_ext_ipi,
 	&vcpu_sbi_ext_rfence,
+	&vcpu_sbi_ext_hsm,
 };
 
 void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu, struct kvm_run *run)
diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
new file mode 100644
index 000000000000..6c5e144ce130
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *     Atish Patra <atish.patra@wdc.com>
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <asm/csr.h>
+#include <asm/sbi.h>
+#include <asm/kvm_vcpu_sbi.h>
+
+static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *reset_cntx;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	struct kvm_vcpu *target_vcpu;
+	unsigned long target_vcpuid = cp->a0;
+
+	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
+	if (!target_vcpu)
+		return -EINVAL;
+	if (!target_vcpu->arch.power_off)
+		return -EALREADY;
+
+	reset_cntx = &target_vcpu->arch.guest_reset_context;
+	/* start address */
+	reset_cntx->sepc = cp->a1;
+	/* target vcpu id to start */
+	reset_cntx->a0 = target_vcpuid;
+	/* private data passed from kernel */
+	reset_cntx->a1 = cp->a2;
+	kvm_make_request(KVM_REQ_VCPU_RESET, target_vcpu);
+
+	/* Make sure that the reset request is enqueued before power on */
+	smp_wmb();
+	kvm_riscv_vcpu_power_on(target_vcpu);
+
+	return 0;
+}
+
+static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu)
+{
+	if ((!vcpu) || (vcpu->arch.power_off))
+		return -EINVAL;
+
+	kvm_riscv_vcpu_power_off(vcpu);
+
+	return 0;
+}
+
+static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	unsigned long target_vcpuid = cp->a0;
+	struct kvm_vcpu *target_vcpu;
+
+	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
+	if (!target_vcpu)
+		return -EINVAL;
+	if (!target_vcpu->arch.power_off)
+		return SBI_HSM_HART_STATUS_STARTED;
+	else
+		return SBI_HSM_HART_STATUS_STOPPED;
+}
+
+static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+				   unsigned long *out_val,
+				   struct kvm_cpu_trap *utrap,
+				   bool *exit)
+{
+	int ret = 0;
+	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+	struct kvm *kvm = vcpu->kvm;
+	unsigned long funcid = cp->a6;
+
+	if (!cp)
+		return -EINVAL;
+	switch (funcid) {
+	case SBI_EXT_HSM_HART_START:
+		mutex_lock(&kvm->lock);
+		ret = kvm_sbi_hsm_vcpu_start(vcpu);
+		mutex_unlock(&kvm->lock);
+		break;
+	case SBI_EXT_HSM_HART_STOP:
+		ret = kvm_sbi_hsm_vcpu_stop(vcpu);
+		break;
+	case SBI_EXT_HSM_HART_STATUS:
+		ret = kvm_sbi_hsm_vcpu_get_status(vcpu);
+		if (ret >= 0) {
+			*out_val = ret;
+			ret = 0;
+		}
+		break;
+	default:
+		ret = -EOPNOTSUPP;
+	}
+
+	return ret;
+}
+
+const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm = {
+	.extid_start = SBI_EXT_HSM,
+	.extid_end = SBI_EXT_HSM,
+	.handler = kvm_sbi_ext_hsm_handler,
+};
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 1/5] RISC-V: Mark the existing SBI v0.1 implementation as legacy
  2021-10-08  3:20   ` Atish Patra
@ 2021-10-08  7:24     ` Paolo Bonzini
  -1 siblings, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2021-10-08  7:24 UTC (permalink / raw)
  To: Atish Patra, linux-kernel
  Cc: Anup Patel, Kefeng Wang, kvm-riscv, kvm, linux-riscv,
	Palmer Dabbelt, Paul Walmsley, Vincent Chen

On 08/10/21 05:20, Atish Patra wrote:
> The existing SBI specification implementation follows v0.1 or legacy
> specification. The latest specification known as v0.2 allows more
> scalability and performance improvements.
> 
> Rename the existing implementation as legacy and provide a way to allow
> future extensions.
> 
> Signed-off-by: Atish Patra<atish.patra@wdc.com>
> ---
>   arch/riscv/include/asm/kvm_vcpu_sbi.h |  29 +++++
>   arch/riscv/kvm/vcpu_sbi.c             | 149 ++++++++++++++++++++------
>   2 files changed, 148 insertions(+), 30 deletions(-)
>   create mode 100644 arch/riscv/include/asm/kvm_vcpu_sbi.h

It's bikeshedding I know, but still: every time somebody calls something 
"legacy", a kitten dies.  Please use kvm_sbi_ext_0_1_handler and 
vcpu_sbi_ext_0_1.

Paolo


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 5/5] RISC-V: Add SBI HSM extension in KVM
  2021-10-08  3:20   ` Atish Patra
@ 2021-10-08 15:02     ` Sean Christopherson
  -1 siblings, 0 replies; 28+ messages in thread
From: Sean Christopherson @ 2021-10-08 15:02 UTC (permalink / raw)
  To: Atish Patra
  Cc: linux-kernel, Paolo Bonzini, Anup Patel, Kefeng Wang, kvm-riscv,
	kvm, linux-riscv, Palmer Dabbelt, Paul Walmsley, Vincent Chen

On Thu, Oct 07, 2021, Atish Patra wrote:
> SBI HSM extension allows OS to start/stop harts any time. It also allows
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> index c44cabce7dd8..278b4d643e1b 100644
> --- a/arch/riscv/kvm/vcpu.c
> +++ b/arch/riscv/kvm/vcpu.c
> @@ -133,6 +133,13 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
>  	struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
>  	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
>  	struct kvm_cpu_context *reset_cntx = &vcpu->arch.guest_reset_context;
> +	bool loaded;
> +
> +	/* Disable preemption to avoid race with preempt notifiers */

Stating what the code literally does is not a helpful comment, as it doesn't
help the reader understand _why_ preemption needs to be disabled.

> +	preempt_disable();
> +	loaded = (vcpu->cpu != -1);
> +	if (loaded)
> +		kvm_arch_vcpu_put(vcpu);

Oof.  Looks like this pattern was taken from arm64.  Is there really no better
approach to handling this?  I don't see anything in kvm_riscv_reset_vcpu() that
will obviously break if the vCPU is loaded.  If the goal is purely to effect a
CSR reset via kvm_arch_vcpu_load(), then why not just factor out a helper to do
exactly that?
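
E.g. (untested, and the helper name is made up) something like a
kvm_riscv_vcpu_load_csrs() split out of kvm_arch_vcpu_load():

	static void kvm_riscv_vcpu_load_csrs(struct kvm_vcpu *vcpu)
	{
		struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;

		csr_write(CSR_VSSTATUS, csr->vsstatus);
		csr_write(CSR_VSIE, csr->vsie);
		csr_write(CSR_VSTVEC, csr->vstvec);
		csr_write(CSR_VSSCRATCH, csr->vsscratch);
		csr_write(CSR_VSEPC, csr->vsepc);
		csr_write(CSR_VSCAUSE, csr->vscause);
		csr_write(CSR_VSTVAL, csr->vstval);
		csr_write(CSR_HVIP, csr->hvip);
		csr_write(CSR_VSATP, csr->vsatp);
	}

which kvm_riscv_reset_vcpu() could call when the vCPU is loaded, instead
of the full put/load dance.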

>  
>  	memcpy(csr, reset_csr, sizeof(*csr));
>  
> @@ -144,6 +151,11 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
>  
>  	WRITE_ONCE(vcpu->arch.irqs_pending, 0);
>  	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> +
> +	/* Reset the guest CSRs for hotplug usecase */
> +	if (loaded)
> +		kvm_arch_vcpu_load(vcpu, smp_processor_id());

If the preempt shenanigans really have to stay, at least use get_cpu()/put_cpu().
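
I.e. (fragment of kvm_riscv_reset_vcpu(), untested):

	int cpu = get_cpu();	/* disables preemption, returns this CPU */

	loaded = (vcpu->cpu != -1);
	if (loaded)
		kvm_arch_vcpu_put(vcpu);

	/* ... existing reset logic ... */

	if (loaded)
		kvm_arch_vcpu_load(vcpu, cpu);
	put_cpu();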

> +	preempt_enable();
>  }
>  
>  int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
> @@ -180,6 +192,13 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>  
>  void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
>  {
> +	/**
> +	 * vcpu with id 0 is the designated boot cpu.
> +	 * Keep all vcpus with non-zero cpu id in power-off state so that they
> +	 * can brought to online using SBI HSM extension.
> +	 */
> +	if (vcpu->vcpu_idx != 0)
> +		kvm_riscv_vcpu_power_off(vcpu);

Why do this in postcreate?

>  }
>  
>  void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
> diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
> index dadee5e61a46..db54ef21168b 100644

...

> +static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_cpu_context *reset_cntx;
> +	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
> +	struct kvm_vcpu *target_vcpu;
> +	unsigned long target_vcpuid = cp->a0;
> +
> +	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
> +	if (!target_vcpu)
> +		return -EINVAL;
> +	if (!target_vcpu->arch.power_off)
> +		return -EALREADY;
> +
> +	reset_cntx = &target_vcpu->arch.guest_reset_context;
> +	/* start address */
> +	reset_cntx->sepc = cp->a1;
> +	/* target vcpu id to start */
> +	reset_cntx->a0 = target_vcpuid;
> +	/* private data passed from kernel */
> +	reset_cntx->a1 = cp->a2;
> +	kvm_make_request(KVM_REQ_VCPU_RESET, target_vcpu);
> +
> +	/* Make sure that the reset request is enqueued before power on */
> +	smp_wmb();

What does this pair with?  I suspect nothing, because AFAICT the code was taken
from arm64.

arm64 has the smp_wmb() in kvm_psci_vcpu_on() to ensure that the vCPU sees the
request if the vCPU sees the change in vcpu->arch.power_off, and so has a
smp_rmb() in kvm_reset_vcpu().
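
Roughly (simplified, not the literal arm64 code):

	/* producer, e.g. the PSCI/HSM CPU_ON path */
	kvm_make_request(KVM_REQ_VCPU_RESET, target_vcpu);
	smp_wmb();			/* pairs with the smp_rmb() below */
	target_vcpu->arch.power_off = false;

	/* consumer, the target vCPU observing it was powered on */
	if (!READ_ONCE(vcpu->arch.power_off)) {
		smp_rmb();		/* pairs with the smp_wmb() above */
		/* KVM_REQ_VCPU_RESET is now guaranteed to be visible */
	}

A write barrier with no paired read barrier on the consuming side doesn't
order anything for the vCPU doing the reset.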

Side topic, how much of arm64 and RISC-V is this similar?  Would it make sense
to find some way for them to share code?

> +	kvm_riscv_vcpu_power_on(target_vcpu);
> +
> +	return 0;
> +}
> +
> +static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu)
> +{
> +	if ((!vcpu) || (vcpu->arch.power_off))

Too many parentheses, and the NULL vCPU check is unnecessary.

> +		return -EINVAL;
> +
> +	kvm_riscv_vcpu_power_off(vcpu);
> +
> +	return 0;
> +}
> +

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 0/5] Add SBI v0.2 support for KVM
  2021-10-08  3:20 ` Atish Patra
@ 2021-10-10  9:34   ` Guo Ren
  -1 siblings, 0 replies; 28+ messages in thread
From: Guo Ren @ 2021-10-10  9:34 UTC (permalink / raw)
  To: Atish Patra
  Cc: Linux Kernel Mailing List, Paolo Bonzini, Anup Patel,
	Kefeng Wang, kvm-riscv, kvm, linux-riscv, Palmer Dabbelt,
	Paul Walmsley, Vincent Chen

On Fri, Oct 8, 2021 at 11:20 AM Atish Patra <atish.patra@wdc.com> wrote:
>
> The Supervisor Binary Interface(SBI) specification[1] now defines a
> base extension that provides extendability to add future extensions
> while maintaining backward compatibility with previous versions.
> The new version is defined as 0.2 and older version is marked as 0.1.
>
> This series adds following features to RISC-V Linux KVM.
> 1. Adds support for SBI v0.2 in KVM
> 2. SBI Hart state management extension (HSM) in KVM
> 3. Ordered booting of guest vcpus in guest Linux
>
> This series is based on base KVM series which is already part of the kvm-next[2].
>
> Guest kernel needs to also support SBI v0.2 and HSM extension in Kernel
> to boot multiple vcpus. Linux kernel supports both starting v5.7.
> In absense of that, guest can only boot 1 vcpu.
>
> Changes from v2->v3:
> 1. Rebased on the latest merged kvm series.
> 2. Dropped the reset extension patch because reset extension is not merged in kernel.
> However, my tree[3] still contains it in case anybody wants to test it.
>
> Changes from v1->v2:
> 1. Sent the patch 1 separately as it can merged independently.
> 2. Added Reset extension functionality.
>
> Tested on Qemu and Rocket core FPGA.
>
> [1] https://github.com/riscv/riscv-sbi-doc/blob/master/riscv-sbi.adoc
> [2] https://git.kernel.org/pub/scm/virt/kvm/kvm.git/log/?h=next
> [3] https://github.com/atishp04/linux/tree/kvm_next_sbi_v02_reset
Tested-by: Guo Ren <guoren@kernel.org>

/ # echo o > /proc/sysrq-trigger
[   97.164312] sysrq: Power Off
[   97.177376] reboot: Power down
[   97.189630] machine_power_off, 34.
[   97.209240] sbi_srst_power_off, 528

  # KVM session ended normally.

> [4] https://github.com/atishp04/linux/tree/kvm_next_sbi_v02
Yes, it will hang without your reset patch.

/ # echo o > /proc/sysrq-trigger
[   20.753143] sysrq: Power Off
[   20.766105] reboot: Power down
[   20.777462] machine_power_off, 34.
[   20.789932] default_power_off, 11.
[   41.769217] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[   41.803683]  (detected by 0, t=5252 jiffies, g=-191, q=2)
[   41.823503] rcu: All QSes seen, last rcu_sched kthread activity
5252 (4294902728-4294897476), jiffies_till_next_fqs=1, root ->qsmask
0x0
[   41.868656] rcu: rcu_sched kthread starved for 5252 jiffies! g-191
f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
[   41.905574] rcu:     Unless rcu_sched kthread gets sufficient CPU
time, OOM is now expected behavior.
[   41.939434] rcu: RCU grace-period kthread stack dump:
[   41.957946] task:rcu_sched       state:R  running task     stack:
 0 pid:   11 ppid:     2 flags:0x00000000
[   41.994477] Call Trace:
[   42.004217] [<ffffffff807384a8>] __schedule+0x22a/0x500
[   42.024580] [<ffffffff807387d6>] schedule+0x58/0xcc
[   42.042803] [<ffffffff8073cf82>] schedule_timeout+0x68/0xda
[   42.063509] [<ffffffff8006e930>] rcu_gp_fqs_loop+0x22e/0x2de
[   42.084670] [<ffffffff8006f9c4>] rcu_gp_kthread+0xf4/0x118
[   42.105075] [<ffffffff800345ae>] kthread+0x100/0x112
[   42.123564] [<ffffffff80003068>] ret_from_exception+0x0/0xc
[   42.146121] rcu: Stack dump where RCU GP kthread last ran:
[   42.173224] Task dump for CPU 0:
[   42.185784] task:kworker/0:1     state:R  running task     stack:
 0 pid:   13 ppid:     2 flags:0x00000008
[   42.222593] Workqueue: events do_poweroff
[   42.237804] Call Trace:
[   42.247202] [<ffffffff8000487e>] dump_backtrace+0x1c/0x24
[   42.267635] [<ffffffff80039aa6>] sched_show_task+0x156/0x176
[   42.289890] [<ffffffff8072d914>] dump_cpu_task+0x42/0x4c
[   42.309939] [<ffffffff8072e6e2>] rcu_check_gp_kthread_starvation+0xfa/0x112
[   42.337239] [<ffffffff80070e40>] rcu_sched_clock_irq+0x638/0x6ca
[   42.359294] [<ffffffff80076e44>] update_process_times+0xa2/0xca
[   42.381707] [<ffffffff80084cb2>] tick_sched_timer+0x78/0x130
[   42.402984] [<ffffffff800774e2>] __hrtimer_run_queues+0x122/0x186
[   42.427002] [<ffffffff80078198>] hrtimer_interrupt+0xcc/0x1d8
[   42.447506] [<ffffffff805cbe70>] riscv_timer_interrupt+0x32/0x3c
[   42.470831] [<ffffffff8006508e>] handle_percpu_devid_irq+0x80/0x118
[   42.495964] [<ffffffff800601d0>] handle_domain_irq+0x58/0x88
[   42.518874] [<ffffffff803007ce>] riscv_intc_irq+0x36/0x5e
[   42.540270] [<ffffffff80003068>] ret_from_exception+0x0/0xc
[   42.561475] [<ffffffff80364f3e>] univ8250_console_write+0x0/0x2a
[   68.137301] watchdog: BUG: soft lockup - CPU#0 stuck for 45s!
[kworker/0:1:13]
[   68.171330] Modules linked in:
[   68.183426] CPU: 0 PID: 13 Comm: kworker/0:1 Not tainted
5.15.0-rc2-00156-g5ff89d5fae43-dirty #10
[   68.216937] Hardware name: linux,dummy-virt (DT)
[   68.234380] Workqueue: events do_poweroff
[   68.249798] epc : default_power_off+0x22/0x24
[   68.266249]  ra : default_power_off+0x1e/0x24
[   68.283131] epc : ffffffff8072ce4a ra : ffffffff8072ce46 sp :
ffffffd00406bda0
[   68.310349]  gp : ffffffff812e7ad0 tp : ffffffe0016b8000 t0 :
ffffffff812f659f
[   68.337152]  t1 : ffffffff812f6590 t2 : 0000000000000000 s0 :
ffffffd00406bdb0
[   68.364485]  s1 : ffffffff81214950 a0 : 0000000000000016 a1 :
ffffffff81284af0
[   68.391110]  a2 : 0000000000000010 a3 : fffffffffffffffe a4 :
86a924d9b215f000
[   68.418203]  a5 : 86a924d9b215f000 a6 : 0000000000000030 a7 :
ffffffff80364f3e
[   68.445367]  s2 : ffffffe001640a80 s3 : 0000000000000000 s4 :
ffffffe00fa6d0c0
[   68.472728]  s5 : ffffffe00fa70e00 s6 : ffffffe00fa70e05 s7 :
ffffffff81214958
[   68.499914]  s8 : ffffffff8002df26 s9 : ffffffff812e92e8 s10:
ffffffff80034744
[   68.526820]  s11: 0000000000000000 t3 : 0000000000000064 t4 :
ffffffffffffffff
[   68.554701]  t5 : ffffffff812128f8 t6 : ffffffd00406bb08
[   68.574879] status: 0000000000000120 badaddr: 0000000000000000
cause: 8000000000000005
[   68.604540] [<ffffffff8072ce4a>] default_power_off+0x22/0x24
[   68.625675] [<ffffffff8072cecc>] machine_power_off+0x26/0x2e
[   68.646515] [<ffffffff80036c7e>] kernel_power_off+0x66/0x6e
[   68.667518] [<ffffffff8005ba02>] do_poweroff+0xc/0x14
[   68.686739] [<ffffffff8002dd24>] process_one_work+0x13e/0x28a
[   68.707735] [<ffffffff8002deec>] worker_thread+0x7c/0x320
[   68.729517] [<ffffffff800345ae>] kthread+0x100/0x112
[   68.748754] [<ffffffff80003068>] ret_from_exception+0x0/0xc

>
> Atish Patra (5):
> RISC-V: Mark the existing SBI v0.1 implementation as legacy
> RISC-V: Reorganize SBI code by moving legacy SBI to its own file
> RISC-V: Add SBI v0.2 base extension
> RISC-V: Add v0.1 replacement SBI extensions defined in v02
> RISC-V: Add SBI HSM extension in KVM
>
> arch/riscv/include/asm/kvm_vcpu_sbi.h |  33 ++++
> arch/riscv/include/asm/sbi.h          |   9 ++
> arch/riscv/kvm/Makefile               |   4 +
> arch/riscv/kvm/vcpu.c                 |  19 +++
> arch/riscv/kvm/vcpu_sbi.c             | 208 ++++++++++++--------------
> arch/riscv/kvm/vcpu_sbi_base.c        |  73 +++++++++
> arch/riscv/kvm/vcpu_sbi_hsm.c         | 109 ++++++++++++++
> arch/riscv/kvm/vcpu_sbi_legacy.c      | 129 ++++++++++++++++
> arch/riscv/kvm/vcpu_sbi_replace.c     | 136 +++++++++++++++++
> 9 files changed, 608 insertions(+), 112 deletions(-)
> create mode 100644 arch/riscv/include/asm/kvm_vcpu_sbi.h
> create mode 100644 arch/riscv/kvm/vcpu_sbi_base.c
> create mode 100644 arch/riscv/kvm/vcpu_sbi_hsm.c
> create mode 100644 arch/riscv/kvm/vcpu_sbi_legacy.c
> create mode 100644 arch/riscv/kvm/vcpu_sbi_replace.c
>
> --
> 2.31.1
>
>
> --
> kvm-riscv mailing list
> kvm-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kvm-riscv



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 5/5] RISC-V: Add SBI HSM extension in KVM
  2021-10-08 15:02     ` Sean Christopherson
@ 2021-10-11  8:02       ` Atish Patra
  -1 siblings, 0 replies; 28+ messages in thread
From: Atish Patra @ 2021-10-11  8:02 UTC (permalink / raw)
  To: seanjc
  Cc: linux-riscv, Anup Patel, vincent.chen, kvm-riscv, paul.walmsley,
	palmer, wangkefeng.wang, kvm, pbonzini, linux-kernel

On Fri, 2021-10-08 at 15:02 +0000, Sean Christopherson wrote:
> On Thu, Oct 07, 2021, Atish Patra wrote:
> > SBI HSM extension allows OS to start/stop harts any time. It also
> > allows
> > diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> > index c44cabce7dd8..278b4d643e1b 100644
> > --- a/arch/riscv/kvm/vcpu.c
> > +++ b/arch/riscv/kvm/vcpu.c
> > @@ -133,6 +133,13 @@ static void kvm_riscv_reset_vcpu(struct
> > kvm_vcpu *vcpu)
> >         struct kvm_vcpu_csr *reset_csr = &vcpu-
> > >arch.guest_reset_csr;
> >         struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
> >         struct kvm_cpu_context *reset_cntx = &vcpu-
> > >arch.guest_reset_context;
> > +       bool loaded;
> > +
> > +       /* Disable preemption to avoid race with preempt notifiers
> > */
> 
> Stating what the code literally does is not a helpful comment, as it
> doesn't
> help the reader understand _why_ preemption needs to be disabled.


Preemption is disabled here because the reset races with
kvm_sched_out()/kvm_sched_in() (called from the preempt notifiers), which
also call vcpu_load/put.

I will add some more explanation in the comment to make it more
explicit.
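
Something along these lines (proposed wording):

	/*
	 * The reset can race with the preempt notifiers
	 * (kvm_sched_in()/kvm_sched_out()), which load/put the vCPU
	 * behind our back. Disable preemption so the vCPU can be safely
	 * put, reset and re-loaded here.
	 */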

> 
> > +       preempt_disable();
> > +       loaded = (vcpu->cpu != -1);
> > +       if (loaded)
> > +               kvm_arch_vcpu_put(vcpu);
> 
> Oof.  Looks like this pattern was taken from arm64. 

Yes. This part is similar to arm64 because the same race condition can
happen in riscv due to save/restore of CSRs during reset.


>  Is there really no better
> approach to handling this?  I don't see anything in
> kvm_riscv_reset_vcpu() that
> will obviously break if the vCPU is loaded.  If the goal is purely to
> effect a
> CSR reset via kvm_arch_vcpu_load(), then why not just factor out a
> helper to do
> exactly that?
> 
> >  
> >         memcpy(csr, reset_csr, sizeof(*csr));
> >  
> > @@ -144,6 +151,11 @@ static void kvm_riscv_reset_vcpu(struct
> > kvm_vcpu *vcpu)
> >  
> >         WRITE_ONCE(vcpu->arch.irqs_pending, 0);
> >         WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> > +
> > +       /* Reset the guest CSRs for hotplug usecase */
> > +       if (loaded)
> > +               kvm_arch_vcpu_load(vcpu, smp_processor_id());
> 
> If the preempt shenanigans really have to stay, at least use
> get_cpu()/put_cpu().
> 

Is there any specific advantage to that? get_cpu()/put_cpu() are just
macros that call preempt_disable()/preempt_enable().

The only advantage of get_cpu() is that it returns the current cpu.
vcpu_load() uses get_cpu() because it needs the current cpu id.

However, we don't need that in this case. I am not against changing it
to get_cpu()/put_cpu(); I just wanted to understand the reasoning behind
your suggestion.
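
For reference, the definitions are just:

	#define get_cpu()	({ preempt_disable(); smp_processor_id(); })
	#define put_cpu()	preempt_enable()

so functionally there is no difference for this path.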


> > +       preempt_enable();
> >  }
> >  
> >  int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
> > @@ -180,6 +192,13 @@ int kvm_arch_vcpu_create(struct kvm_vcpu
> > *vcpu)
> >  
> >  void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
> >  {
> > +       /**
> > +        * vcpu with id 0 is the designated boot cpu.
> > +        * Keep all vcpus with non-zero cpu id in power-off state
> > so that they
> > +        * can brought to online using SBI HSM extension.
> > +        */
> > +       if (vcpu->vcpu_idx != 0)
> > +               kvm_riscv_vcpu_power_off(vcpu);
> 
> Why do this in postcreate?
> 

Because we need to be absolutely sure that the vcpu is created. It is
cleaner this way rather than doing it at the end of
kvm_arch_vcpu_create(); vcpu creation can still fail after
kvm_arch_vcpu_create() returns.


> >  }
> >  
> >  void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
> > diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
> > index dadee5e61a46..db54ef21168b 100644
> 
> ...
> 
> > +static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
> > +{
> > +       struct kvm_cpu_context *reset_cntx;
> > +       struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
> > +       struct kvm_vcpu *target_vcpu;
> > +       unsigned long target_vcpuid = cp->a0;
> > +
> > +       target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
> > +       if (!target_vcpu)
> > +               return -EINVAL;
> > +       if (!target_vcpu->arch.power_off)
> > +               return -EALREADY;
> > +
> > +       reset_cntx = &target_vcpu->arch.guest_reset_context;
> > +       /* start address */
> > +       reset_cntx->sepc = cp->a1;
> > +       /* target vcpu id to start */
> > +       reset_cntx->a0 = target_vcpuid;
> > +       /* private data passed from kernel */
> > +       reset_cntx->a1 = cp->a2;
> > +       kvm_make_request(KVM_REQ_VCPU_RESET, target_vcpu);
> > +
> > +       /* Make sure that the reset request is enqueued before
> > power on */
> > +       smp_wmb();
> 
> What does this pair with?  I suspect nothing, because AFAICT the code
> was taken
> from arm64.
> 

Thanks for noticing this. It was a slip-up. I will fix it in the next
revision.
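
For my own reference, a minimal sketch of the pairing the fix will
need, mirroring the arm64 pattern described below (illustrative only,
not the final code):

	/* HSM start path: publish the reset request before the power state flips */
	kvm_make_request(KVM_REQ_VCPU_RESET, target_vcpu);
	smp_wmb();	/* pairs with smp_rmb() on the reset side */
	kvm_riscv_vcpu_power_on(target_vcpu);

	/* reset path: if power_off was observed as cleared, the request
	 * must be visible too */
	smp_rmb();
	/* ... handle KVM_REQ_VCPU_RESET ... */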


> arm64 has the smp_wmb() in kvm_psci_vcpu_on() to ensure that the vCPU
> sees the
> request if the vCPU sees the change in vcpu->arch.power_off, and so
> has a
> smp_rmb() in kvm_reset_vcpu().
> 
> Side topic, how much of arm64 and RISC-V is this similar?  Would it
> make sense
> to find some way for them to share code?

While some of the operational methodologies implemented in this patch
are similar to arm64 (i.e. SBI HSM vs PSCI for cpu hotplug/booting),
the implementation is different in terms of CSR save/restore & pending
irq handling. We may end up with more code, as new arch hooks may
have to be added.

This option can certainly be explored in the future, but it is not on
our TODO list right now.


> 
> > +       kvm_riscv_vcpu_power_on(target_vcpu);
> > +
> > +       return 0;
> > +}
> > +
> > +static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu)
> > +{
> > +       if ((!vcpu) || (vcpu->arch.power_off))
> 
> Too many parentheses, and the NULL vCPU check is unnecessary.

ok. Will fix it.


> 
> > +               return -EINVAL;
> > +
> > +       kvm_riscv_vcpu_power_off(vcpu);
> > +
> > +       return 0;
> > +}
> > +

-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 5/5] RISC-V: Add SBI HSM extension in KVM
  2021-10-11  8:02       ` Atish Patra
@ 2021-10-11 14:32         ` Sean Christopherson
  -1 siblings, 0 replies; 28+ messages in thread
From: Sean Christopherson @ 2021-10-11 14:32 UTC (permalink / raw)
  To: Atish Patra
  Cc: linux-riscv, Anup Patel, vincent.chen, kvm-riscv, paul.walmsley,
	palmer, wangkefeng.wang, kvm, pbonzini, linux-kernel

On Mon, Oct 11, 2021, Atish Patra wrote:
> On Fri, 2021-10-08 at 15:02 +0000, Sean Christopherson wrote:
> > On Thu, Oct 07, 2021, Atish Patra wrote:
> > > +       preempt_disable();
> > > +       loaded = (vcpu->cpu != -1);
> > > +       if (loaded)
> > > +               kvm_arch_vcpu_put(vcpu);
> > 
> > Oof.  Looks like this pattern was taken from arm64. 
> 
> Yes. This part is similar to arm64 because the same race condition can
> happen in riscv due to save/restore of CSRs during reset.
> 
> 
> >  Is there really no better approach to handling this?  I don't see anything
> >  in kvm_riscv_reset_vcpu() that will obviously break if the vCPU is
> >  loaded.  If the goal is purely to effect a CSR reset via
> >  kvm_arch_vcpu_load(), then why not just factor out a helper to do exactly
> >  that?

What about the question here?

> > 
> > >  
> > >         memcpy(csr, reset_csr, sizeof(*csr));
> > >  
> > > @@ -144,6 +151,11 @@ static void kvm_riscv_reset_vcpu(struct
> > > kvm_vcpu *vcpu)
> > >  
> > >         WRITE_ONCE(vcpu->arch.irqs_pending, 0);
> > >         WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> > > +
> > > +       /* Reset the guest CSRs for hotplug usecase */
> > > +       if (loaded)
> > > +               kvm_arch_vcpu_load(vcpu, smp_processor_id());
> > 
> > If the preempt shenanigans really have to stay, at least use
> > get_cpu()/put_cpu().
> > 
> 
> Is there any specific advantage to that ? get_cpu/put_cpu are just
> macros which calls preempt_disable/preempt_enable.
> 
> The only advantage of get_cpu is that it returns the current cpu. 
> vcpu_load function uses get_cpu because it requires the current cpu id.
> 
> However, we don't need that in this case. I am not against changing it
> to get_cpu/put_cpu. Just wanted to understand the reasoning behind your
> suggestion.

It would make the code a bit more self-documenting, because AFAICT it
doesn't truly care about being preempted; it cares about keeping the
vCPU on the correct pCPU.

> > > +       preempt_enable();
> > >  }
> > >  
> > >  int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
> > > @@ -180,6 +192,13 @@ int kvm_arch_vcpu_create(struct kvm_vcpu
> > > *vcpu)
> > >  
> > >  void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
> > >  {
> > > +       /**
> > > +        * vcpu with id 0 is the designated boot cpu.
> > > +        * Keep all vcpus with non-zero cpu id in power-off state
> > > so that they
> > > +        * can brought to online using SBI HSM extension.
> > > +        */
> > > +       if (vcpu->vcpu_idx != 0)
> > > +               kvm_riscv_vcpu_power_off(vcpu);
> > 
> > Why do this in postcreate?
> > 
> 
> Because we need to absolutely sure that the vcpu is created. It is
> cleaner in this way rather than doing this here at the end of
> kvm_arch_vcpu_create. create_vcpu can also fail after
> kvm_arch_vcpu_create returns.

But kvm_riscv_vcpu_power_off() doesn't do anything outside of the vCPU.  It
sets vcpu->arch.power_off, makes a request, and kicks the vCPU.  None of that
has side effects on anything else in KVM.  If the vCPU isn't created
successfully, it gets deleted and nothing ever sees that state change.
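
For reference, the body in question is roughly (paraphrased from this
series; the exact request name may differ):

	void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu)
	{
		vcpu->arch.power_off = true;	/* per-vCPU state only */
		kvm_make_request(KVM_REQ_SLEEP, vcpu);
		kvm_vcpu_kick(vcpu);
	}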

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 5/5] RISC-V: Add SBI HSM extension in KVM
  2021-10-11 14:32         ` Sean Christopherson
@ 2021-10-11 22:50           ` Atish Patra
  -1 siblings, 0 replies; 28+ messages in thread
From: Atish Patra @ 2021-10-11 22:50 UTC (permalink / raw)
  To: seanjc
  Cc: kvm-riscv, linux-riscv, vincent.chen, Anup Patel, paul.walmsley,
	palmer, wangkefeng.wang, kvm, pbonzini, linux-kernel

On Mon, 2021-10-11 at 14:32 +0000, Sean Christopherson wrote:
> On Mon, Oct 11, 2021, Atish Patra wrote:
> > On Fri, 2021-10-08 at 15:02 +0000, Sean Christopherson wrote:
> > > On Thu, Oct 07, 2021, Atish Patra wrote:
> > > > +       preempt_disable();
> > > > +       loaded = (vcpu->cpu != -1);
> > > > +       if (loaded)
> > > > +               kvm_arch_vcpu_put(vcpu);
> > > 
> > > Oof.  Looks like this pattern was taken from arm64. 
> > 
> > Yes. This part is similar to arm64 because the same race condition
> > can
> > happen in riscv due to save/restore of CSRs during reset.
> > 
> > 
> > >  Is there really no better approach to handling this?  I don't
> > > see anything
> > >  in kvm_riscv_reset_vcpu() that will obviously break if the vCPU
> > > is
> > >  loaded.  If the goal is purely to effect a CSR reset via
> > >  kvm_arch_vcpu_load(), then why not just factor out a helper to
> > > do exactly
> > >  that?
> 
> What about the question here?

Are you suggesting factoring the CSR reset part out into a different
function?

> 
> > > 
> > > >  
> > > >         memcpy(csr, reset_csr, sizeof(*csr));
> > > >  
> > > > @@ -144,6 +151,11 @@ static void kvm_riscv_reset_vcpu(struct
> > > > kvm_vcpu *vcpu)
> > > >  
> > > >         WRITE_ONCE(vcpu->arch.irqs_pending, 0);
> > > >         WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
> > > > +
> > > > +       /* Reset the guest CSRs for hotplug usecase */
> > > > +       if (loaded)
> > > > +               kvm_arch_vcpu_load(vcpu, smp_processor_id());
> > > 
> > > If the preempt shenanigans really have to stay, at least use
> > > get_cpu()/put_cpu().
> > > 
> > 
> > Is there any specific advantage to that ? get_cpu/put_cpu are just
> > macros which calls preempt_disable/preempt_enable.
> > 
> > The only advantage of get_cpu is that it returns the current cpu. 
> > vcpu_load function uses get_cpu because it requires the current cpu
> > id.
> > 
> > However, we don't need that in this case. I am not against changing
> > it
> > to get_cpu/put_cpu. Just wanted to understand the reasoning behind
> > your
> > suggestion.
> 
> It would make the code a bit self-documenting, because AFAICT it
> doesn't truly
> care about being preempted, it cares about keeping the vCPU on the
> correct pCPU.


Sure. I will change it to the get_cpu()/put_cpu() interface.
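
Roughly along these lines (same logic as the current hunk, just making
the pCPU pinning explicit; sketch only):

	int cpu;
	bool loaded;

	cpu = get_cpu();
	loaded = (vcpu->cpu != -1);
	if (loaded)
		kvm_arch_vcpu_put(vcpu);

	/* ... reset guest context and CSRs ... */

	if (loaded)
		kvm_arch_vcpu_load(vcpu, cpu);
	put_cpu();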

> 
> > > > +       preempt_enable();
> > > >  }
> > > >  
> > > >  int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
> > > > @@ -180,6 +192,13 @@ int kvm_arch_vcpu_create(struct kvm_vcpu
> > > > *vcpu)
> > > >  
> > > >  void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
> > > >  {
> > > > +       /**
> > > > +        * vcpu with id 0 is the designated boot cpu.
> > > > +        * Keep all vcpus with non-zero cpu id in power-off
> > > > state
> > > > so that they
> > > > +        * can brought to online using SBI HSM extension.
> > > > +        */
> > > > +       if (vcpu->vcpu_idx != 0)
> > > > +               kvm_riscv_vcpu_power_off(vcpu);
> > > 
> > > Why do this in postcreate?
> > > 
> > 
> > Because we need to absolutely sure that the vcpu is created. It is
> > cleaner in this way rather than doing this here at the end of
> > kvm_arch_vcpu_create. create_vcpu can also fail after
> > kvm_arch_vcpu_create returns.
> 
> But kvm_riscv_vcpu_power_off() doesn't doesn't anything outside of
> the vCPU.  It
> clears vcpu->arch.power_off, makes a request, and kicks the vCPU. 
> None of that
> has side effects to anything else in KVM.  If the vCPU isn't created
> successfully,
> it gets deleted and nothing ever sees that state change.

I am assuming that you are suggesting adding this logic at the end of
kvm_arch_vcpu_create() instead of kvm_arch_vcpu_postcreate().

vcpu_idx is assigned after kvm_arch_vcpu_create() returns, in
kvm_vm_ioctl_create_vcpu(). kvm_arch_vcpu_postcreate() is the arch hook
that runs after vcpu_idx is assigned.
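
The ordering in kvm_vm_ioctl_create_vcpu() is roughly as follows
(abridged from virt/kvm/kvm_main.c; details differ between kernel
versions):

	r = kvm_arch_vcpu_create(vcpu);		/* vcpu_idx not assigned yet */
	if (r)
		goto unwind;			/* label name approximate */
	/* ... */
	vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
	r = create_vcpu_fd(vcpu);		/* can still fail; vcpu gets destroyed */
	/* ... vcpu published, online_vcpus incremented ... */
	kvm_arch_vcpu_postcreate(vcpu);		/* first arch hook that sees vcpu_idx */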

-- 
Regards,
Atish

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 5/5] RISC-V: Add SBI HSM extension in KVM
  2021-10-11 22:50           ` Atish Patra
@ 2021-10-12 16:46             ` Sean Christopherson
  -1 siblings, 0 replies; 28+ messages in thread
From: Sean Christopherson @ 2021-10-12 16:46 UTC (permalink / raw)
  To: Atish Patra
  Cc: kvm-riscv, linux-riscv, vincent.chen, Anup Patel, paul.walmsley,
	palmer, wangkefeng.wang, kvm, pbonzini, linux-kernel

On Mon, Oct 11, 2021, Atish Patra wrote:
> On Mon, 2021-10-11 at 14:32 +0000, Sean Christopherson wrote:
> > On Mon, Oct 11, 2021, Atish Patra wrote:
> > > On Fri, 2021-10-08 at 15:02 +0000, Sean Christopherson wrote:
> > > > On Thu, Oct 07, 2021, Atish Patra wrote:
> > > > > +       preempt_disable();
> > > > > +       loaded = (vcpu->cpu != -1);
> > > > > +       if (loaded)
> > > > > +               kvm_arch_vcpu_put(vcpu);
> > > > 
> > > > Oof.  Looks like this pattern was taken from arm64. 
> > > 
> > > Yes. This part is similar to arm64 because the same race condition
> > > can
> > > happen in riscv due to save/restore of CSRs during reset.
> > > 
> > > 
> > > > Is there really no better approach to handling this?  I don't see
> > > > anything  in kvm_riscv_reset_vcpu() that will obviously break if the
> > > > vCPU is  loaded.  If the goal is purely to effect a CSR reset via
> > > > kvm_arch_vcpu_load(), then why not just factor out a helper to do
> > > > exactly that?
> > 
> > What about the question here?
> 
> Are you suggesting to factor the csr reset part to a different function?

More or less.  I'm mostly asking why putting the vCPU is necessary.
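
E.g. something along these lines (function and field names invented
purely for illustration, guessing at what kvm_arch_vcpu_load() does for
the guest CSRs):

	/* Push the just-reset CSR values into hardware when the vCPU happens
	 * to be loaded, instead of a full put()/load() round trip. */
	static void kvm_riscv_vcpu_flush_reset_csrs(struct kvm_vcpu *vcpu)
	{
		struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;

		csr_write(CSR_VSSTATUS, csr->vsstatus);
		csr_write(CSR_VSEPC, csr->vsepc);
		/* ... remaining guest CSRs ... */
	}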

> > > > >  void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
> > > > >  {
> > > > > +       /**
> > > > > +        * vcpu with id 0 is the designated boot cpu.
> > > > > +        * Keep all vcpus with non-zero cpu id in power-off
> > > > > state
> > > > > so that they
> > > > > +        * can brought to online using SBI HSM extension.
> > > > > +        */
> > > > > +       if (vcpu->vcpu_idx != 0)
> > > > > +               kvm_riscv_vcpu_power_off(vcpu);
> > > > 
> > > > Why do this in postcreate?
> > > > 
> > > 
> > > Because we need to absolutely sure that the vcpu is created. It is
> > > cleaner in this way rather than doing this here at the end of
> > > kvm_arch_vcpu_create. create_vcpu can also fail after
> > > kvm_arch_vcpu_create returns.
> > 
> > But kvm_riscv_vcpu_power_off() doesn't doesn't anything outside of the
> > vCPU.  It clears vcpu->arch.power_off, makes a request, and kicks the
> > vCPU.  None of that has side effects to anything else in KVM.  If the vCPU
> > isn't created successfully, it gets deleted and nothing ever sees that
> > state change.
> 
> I am assuming that you are suggesting to add this logic at the end of
> the kvm_arch_vcpu_create() instead of kvm_arch_vcpu_postcreate().
> 
> vcpu_idx is assigned after kvm_arch_vcpu_create() returns in the
> kvm_vm_ioctl_create_vcpu. kvm_arch_vcpu_postcreate() is the arch hookup
> after vcpu_idx is assigned.

Ah, it's the consumption of vcpu->vcpu_idx that's problematic.  Thanks!

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v3 1/5] RISC-V: Mark the existing SBI v0.1 implementation as legacy
  2021-10-08  3:20   ` Atish Patra
@ 2021-10-25  7:53     ` Anup Patel
  -1 siblings, 0 replies; 28+ messages in thread
From: Anup Patel @ 2021-10-25  7:53 UTC (permalink / raw)
  To: Atish Patra
  Cc: linux-kernel@vger.kernel.org List, Paolo Bonzini, Anup Patel,
	Kefeng Wang, kvm-riscv, KVM General, linux-riscv, Palmer Dabbelt,
	Paul Walmsley, Vincent Chen

On Fri, Oct 8, 2021 at 8:50 AM Atish Patra <atish.patra@wdc.com> wrote:
>
> The existing SBI specification impelementation follows v0.1 or legacy
> specification. The latest specification known as v0.2 allows more
> scalability and performance improvements.
>
> Rename the existing implementation as legacy and provide a way to allow
> future extensions.
>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> ---
>  arch/riscv/include/asm/kvm_vcpu_sbi.h |  29 +++++
>  arch/riscv/kvm/vcpu_sbi.c             | 149 ++++++++++++++++++++------
>  2 files changed, 148 insertions(+), 30 deletions(-)
>  create mode 100644 arch/riscv/include/asm/kvm_vcpu_sbi.h
>
> diff --git a/arch/riscv/include/asm/kvm_vcpu_sbi.h b/arch/riscv/include/asm/kvm_vcpu_sbi.h
> new file mode 100644
> index 000000000000..1a4cb0db2d0b
> --- /dev/null
> +++ b/arch/riscv/include/asm/kvm_vcpu_sbi.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/**
> + * Copyright (c) 2021 Western Digital Corporation or its affiliates.
> + *
> + * Authors:
> + *     Atish Patra <atish.patra@wdc.com>
> + */
> +
> +#ifndef __RISCV_KVM_VCPU_SBI_H__
> +#define __RISCV_KVM_VCPU_SBI_H__
> +
> +#define KVM_SBI_VERSION_MAJOR 0
> +#define KVM_SBI_VERSION_MINOR 2
> +
> +struct kvm_vcpu_sbi_extension {
> +       unsigned long extid_start;
> +       unsigned long extid_end;
> +       /**
> +        * SBI extension handler. It can be defined for a given extension or group of
> +        * extension. But it should always return linux error codes rather than SBI
> +        * specific error codes.
> +        */
> +       int (*handler)(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +                      unsigned long *out_val, struct kvm_cpu_trap *utrap,
> +                      bool *exit);
> +};
> +
> +const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid);
> +#endif /* __RISCV_KVM_VCPU_SBI_H__ */
> diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
> index ebdcdbade9c6..8c168d305763 100644
> --- a/arch/riscv/kvm/vcpu_sbi.c
> +++ b/arch/riscv/kvm/vcpu_sbi.c
> @@ -12,9 +12,25 @@
>  #include <asm/csr.h>
>  #include <asm/sbi.h>
>  #include <asm/kvm_vcpu_timer.h>
> +#include <asm/kvm_vcpu_sbi.h>
>
> -#define SBI_VERSION_MAJOR                      0
> -#define SBI_VERSION_MINOR                      1
> +static int kvm_linux_err_map_sbi(int err)
> +{
> +       switch (err) {
> +       case 0:
> +               return SBI_SUCCESS;
> +       case -EPERM:
> +               return SBI_ERR_DENIED;
> +       case -EINVAL:
> +               return SBI_ERR_INVALID_PARAM;
> +       case -EFAULT:
> +               return SBI_ERR_INVALID_ADDRESS;
> +       case -EOPNOTSUPP:
> +               return SBI_ERR_NOT_SUPPORTED;
> +       default:
> +               return SBI_ERR_FAILURE;
> +       };
> +}
>
>  static void kvm_riscv_vcpu_sbi_forward(struct kvm_vcpu *vcpu,
>                                        struct kvm_run *run)
> @@ -72,16 +88,17 @@ static void kvm_sbi_system_shutdown(struct kvm_vcpu *vcpu,
>         run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
>  }
>
> -int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +static int kvm_sbi_ext_legacy_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +                                     unsigned long *out_val,
> +                                     struct kvm_cpu_trap *utrap,
> +                                     bool *exit)
>  {
>         ulong hmask;
> -       int i, ret = 1;
> +       int i, ret = 0;
>         u64 next_cycle;
>         struct kvm_vcpu *rvcpu;
> -       bool next_sepc = true;
>         struct cpumask cm, hm;
>         struct kvm *kvm = vcpu->kvm;
> -       struct kvm_cpu_trap utrap = { 0 };
>         struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
>
>         if (!cp)
> @@ -95,8 +112,7 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
>                  * handled in kernel so we forward these to user-space
>                  */
>                 kvm_riscv_vcpu_sbi_forward(vcpu, run);
> -               next_sepc = false;
> -               ret = 0;
> +               *exit = true;
>                 break;
>         case SBI_EXT_0_1_SET_TIMER:
>  #if __riscv_xlen == 32
> @@ -104,47 +120,42 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  #else
>                 next_cycle = (u64)cp->a0;
>  #endif
> -               kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
> +               ret = kvm_riscv_vcpu_timer_next_event(vcpu, next_cycle);
>                 break;
>         case SBI_EXT_0_1_CLEAR_IPI:
> -               kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT);
> +               ret = kvm_riscv_vcpu_unset_interrupt(vcpu, IRQ_VS_SOFT);
>                 break;
>         case SBI_EXT_0_1_SEND_IPI:
>                 if (cp->a0)
>                         hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
> -                                                          &utrap);
> +                                                          utrap);
>                 else
>                         hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
> -               if (utrap.scause) {
> -                       utrap.sepc = cp->sepc;
> -                       kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
> -                       next_sepc = false;
> +               if (utrap->scause)
>                         break;
> -               }
> +
>                 for_each_set_bit(i, &hmask, BITS_PER_LONG) {
>                         rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
> -                       kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT);
> +                       ret = kvm_riscv_vcpu_set_interrupt(rvcpu, IRQ_VS_SOFT);
> +                       if (ret < 0)
> +                               break;
>                 }
>                 break;
>         case SBI_EXT_0_1_SHUTDOWN:
>                 kvm_sbi_system_shutdown(vcpu, run, KVM_SYSTEM_EVENT_SHUTDOWN);
> -               next_sepc = false;
> -               ret = 0;
> +               *exit = true;
>                 break;
>         case SBI_EXT_0_1_REMOTE_FENCE_I:
>         case SBI_EXT_0_1_REMOTE_SFENCE_VMA:
>         case SBI_EXT_0_1_REMOTE_SFENCE_VMA_ASID:
>                 if (cp->a0)
>                         hmask = kvm_riscv_vcpu_unpriv_read(vcpu, false, cp->a0,
> -                                                          &utrap);
> +                                                          utrap);
>                 else
>                         hmask = (1UL << atomic_read(&kvm->online_vcpus)) - 1;
> -               if (utrap.scause) {
> -                       utrap.sepc = cp->sepc;
> -                       kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
> -                       next_sepc = false;
> +               if (utrap->scause)
>                         break;
> -               }
> +
>                 cpumask_clear(&cm);
>                 for_each_set_bit(i, &hmask, BITS_PER_LONG) {
>                         rvcpu = kvm_get_vcpu_by_id(vcpu->kvm, i);
> @@ -154,22 +165,100 @@ int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
>                 }
>                 riscv_cpuid_to_hartid_mask(&cm, &hm);
>                 if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
> -                       sbi_remote_fence_i(cpumask_bits(&hm));
> +                       ret = sbi_remote_fence_i(cpumask_bits(&hm));
>                 else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA)
> -                       sbi_remote_hfence_vvma(cpumask_bits(&hm),
> +                       ret = sbi_remote_hfence_vvma(cpumask_bits(&hm),
>                                                 cp->a1, cp->a2);
>                 else
> -                       sbi_remote_hfence_vvma_asid(cpumask_bits(&hm),
> +                       ret = sbi_remote_hfence_vvma_asid(cpumask_bits(&hm),
>                                                 cp->a1, cp->a2, cp->a3);
>                 break;
>         default:
> -               /* Return error for unsupported SBI calls */
> -               cp->a0 = SBI_ERR_NOT_SUPPORTED;
> +               ret = -EINVAL;
>                 break;
>         };
>
> +       return ret;
> +}
> +
> +const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_legacy = {
> +       .extid_start = SBI_EXT_0_1_SET_TIMER,
> +       .extid_end = SBI_EXT_0_1_SHUTDOWN,
> +       .handler = kvm_sbi_ext_legacy_handler,
> +};
> +
> +static const struct kvm_vcpu_sbi_extension *sbi_ext[] = {
> +       &vcpu_sbi_ext_legacy,
> +};
> +
> +const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext(unsigned long extid)
> +{
> +       int i = 0;
> +
> +       for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) {
> +               if (sbi_ext[i]->extid_start <= extid &&
> +                   sbi_ext[i]->extid_end >= extid)
> +                       return sbi_ext[i];
> +       }
> +
> +       return NULL;
> +}
> +
> +int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +       int ret = 1;
> +       bool next_sepc = true;
> +       bool userspace_exit = false;
> +       struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
> +       const struct kvm_vcpu_sbi_extension *sbi_ext;
> +       struct kvm_cpu_trap utrap = { 0 };
> +       unsigned long out_val = 0;
> +       bool ext_is_v01 = false;
> +
> +       if (!cp)
> +               return -EINVAL;

This is a redundant check because cp is always non-NULL.
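
guest_context is embedded in struct kvm_vcpu_arch, so the address-of
expression can never evaluate to NULL and the early return is dead
code:

	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;	/* always non-NULL */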

> +
> +       sbi_ext = kvm_vcpu_sbi_find_ext(cp->a7);
> +       if (sbi_ext && sbi_ext->handler) {
> +               if (cp->a7 >= SBI_EXT_0_1_SET_TIMER &&
> +                   cp->a7 <= SBI_EXT_0_1_SHUTDOWN)
> +                       ext_is_v01 = true;
> +               ret = sbi_ext->handler(vcpu, run, &out_val, &utrap, &userspace_exit);
> +       } else {
> +               /* Return error for unsupported SBI calls */
> +               cp->a0 = SBI_ERR_NOT_SUPPORTED;
> +               goto ecall_done;
> +       }
> +
> +       /* Handle special error cases i.e trap, exit or userspace forward */
> +       if (utrap.scause) {
> +               /* No need to increment sepc or exit ioctl loop */
> +               ret = 1;
> +               utrap.sepc = cp->sepc;
> +               kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
> +               next_sepc = false;
> +               goto ecall_done;
> +       }
> +
> +       /* Exit ioctl loop or Propagate the error code the guest */
> +       if (userspace_exit) {
> +               next_sepc = false;
> +               ret = 0;
> +       } else {
> +               /**
> +                * SBI extension handler always returns an Linux error code. Convert
> +                * it to the SBI specific error code that can be propagated the SBI
> +                * caller.
> +                */
> +               ret = kvm_linux_err_map_sbi(ret);
> +               cp->a0 = ret;
> +               ret = 1;
> +       }
> +ecall_done:
>         if (next_sepc)
>                 cp->sepc += 4;
> +       if (!ext_is_v01)
> +               cp->a1 = out_val;
>
>         return ret;
>  }
> --
> 2.31.1
>
>
> --
> kvm-riscv mailing list
> kvm-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kvm-riscv

Regards,
Anup

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2021-10-25  7:53 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-08  3:20 [PATCH v3 0/5] Add SBI v0.2 support for KVM Atish Patra
2021-10-08  3:20 ` Atish Patra
2021-10-08  3:20 ` [PATCH v3 1/5] RISC-V: Mark the existing SBI v0.1 implementation as legacy Atish Patra
2021-10-08  3:20   ` Atish Patra
2021-10-08  7:24   ` Paolo Bonzini
2021-10-08  7:24     ` Paolo Bonzini
2021-10-25  7:53   ` Anup Patel
2021-10-25  7:53     ` Anup Patel
2021-10-08  3:20 ` [PATCH v3 2/5] RISC-V: Reorganize SBI code by moving legacy SBI to its own file Atish Patra
2021-10-08  3:20   ` Atish Patra
2021-10-08  3:20 ` [PATCH v3 3/5] RISC-V: Add SBI v0.2 base extension Atish Patra
2021-10-08  3:20   ` Atish Patra
2021-10-08  3:20 ` [PATCH v3 4/5] RISC-V: Add v0.1 replacement SBI extensions defined in v02 Atish Patra
2021-10-08  3:20   ` Atish Patra
2021-10-08  3:20 ` [PATCH v3 5/5] RISC-V: Add SBI HSM extension in KVM Atish Patra
2021-10-08  3:20   ` Atish Patra
2021-10-08 15:02   ` Sean Christopherson
2021-10-08 15:02     ` Sean Christopherson
2021-10-11  8:02     ` Atish Patra
2021-10-11  8:02       ` Atish Patra
2021-10-11 14:32       ` Sean Christopherson
2021-10-11 14:32         ` Sean Christopherson
2021-10-11 22:50         ` Atish Patra
2021-10-11 22:50           ` Atish Patra
2021-10-12 16:46           ` Sean Christopherson
2021-10-12 16:46             ` Sean Christopherson
2021-10-10  9:34 ` [PATCH v3 0/5] Add SBI v0.2 support for KVM Guo Ren
2021-10-10  9:34   ` Guo Ren
