From: Anup Patel <Anup.Patel@wdc.com>
To: Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Paolo Bonzini <pbonzini@redhat.com>, Radim K <rkrcmar@redhat.com>
Cc: Alexander Graf <graf@amazon.com>,
	Atish Patra <Atish.Patra@wdc.com>,
	Alistair Francis <Alistair.Francis@wdc.com>,
	Damien Le Moal <Damien.LeMoal@wdc.com>,
	Christoph Hellwig <hch@lst.de>, Anup Patel <anup@brainfault.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"kvm-riscv@lists.infradead.org" <kvm-riscv@lists.infradead.org>,
	"linux-riscv@lists.infradead.org"
	<linux-riscv@lists.infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Anup Patel <Anup.Patel@wdc.com>
Subject: [PATCH v10 10/19] RISC-V: KVM: Handle WFI exits for VCPU
Date: Mon, 23 Dec 2019 11:36:38 +0000	[thread overview]
Message-ID: <20191223113443.68969-11-anup.patel@wdc.com> (raw)
In-Reply-To: <20191223113443.68969-1-anup.patel@wdc.com>

We get an illegal instruction trap whenever a Guest/VM executes the
WFI instruction.

This patch handles the WFI trap by blocking the trapped VCPU using
the kvm_vcpu_block() API. The blocked VCPU is automatically resumed
whenever a VCPU interrupt is injected from user-space or from the
in-kernel IRQCHIP emulation.
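
For illustration, a user-space VMM could wake a VCPU blocked in WFI by
asserting its external interrupt line. A minimal sketch, assuming the
KVM_INTERRUPT ioctl and the KVM_INTERRUPT_SET encoding wired up earlier
in this series (error handling omitted; vcpu_fd is the fd returned by
KVM_CREATE_VCPU):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Assert the VCPU external interrupt line; the VCPU becomes
	 * runnable, kvm_vcpu_block() returns and KVM_RUN resumes past
	 * the trapped WFI. */
	static int kick_wfi_blocked_vcpu(int vcpu_fd)
	{
		struct kvm_interrupt irq = { .irq = KVM_INTERRUPT_SET };

		return ioctl(vcpu_fd, KVM_INTERRUPT, &irq);
	}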

Signed-off-by: Anup Patel <anup.patel@wdc.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/riscv/kvm/vcpu_exit.c | 72 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index cbf973c5f2fb..8d0ae1a23b70 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -12,6 +12,13 @@
 #include <linux/kvm_host.h>
 #include <asm/csr.h>
 
+#define INSN_OPCODE_MASK	0x007c
+#define INSN_OPCODE_SHIFT	2
+#define INSN_OPCODE_SYSTEM	28
+
+#define INSN_MASK_WFI		0xffffff00
+#define INSN_MATCH_WFI		0x10500000
+
 #define INSN_MATCH_LB		0x3
 #define INSN_MASK_LB		0x707f
 #define INSN_MATCH_LH		0x1003
@@ -116,6 +123,67 @@
 				 (s32)(((insn) >> 7) & 0x1f))
 #define MASK_FUNCT3		0x7000
 
+static int truly_illegal_insn(struct kvm_vcpu *vcpu,
+			      struct kvm_run *run,
+			      ulong insn)
+{
+	/* Redirect trap to Guest VCPU */
+	kvm_riscv_vcpu_trap_redirect(vcpu, EXC_INST_ILLEGAL, insn);
+
+	return 1;
+}
+
+static int system_opcode_insn(struct kvm_vcpu *vcpu,
+			      struct kvm_run *run,
+			      ulong insn)
+{
+	if ((insn & INSN_MASK_WFI) == INSN_MATCH_WFI) {
+		vcpu->stat.wfi_exit_stat++;
+		if (!kvm_arch_vcpu_runnable(vcpu)) {
+			srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx);
+			kvm_vcpu_block(vcpu);
+			vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
+		}
+		vcpu->arch.guest_context.sepc += INSN_LEN(insn);
+		return 1;
+	}
+
+	return truly_illegal_insn(vcpu, run, insn);
+}
+
+static int illegal_inst_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
+			      unsigned long insn)
+{
+	unsigned long ut_scause = 0;
+	struct kvm_cpu_context *ct;
+
+	if (unlikely(INSN_IS_16BIT(insn))) {
+		if (insn == 0) {
+			ct = &vcpu->arch.guest_context;
+			insn = kvm_riscv_vcpu_unpriv_read(vcpu, true,
+							  ct->sepc,
+							  &ut_scause);
+			if (ut_scause) {
+				if (ut_scause == EXC_LOAD_PAGE_FAULT)
+					ut_scause = EXC_INST_PAGE_FAULT;
+				kvm_riscv_vcpu_trap_redirect(vcpu, ut_scause,
+							     ct->sepc);
+				return 1;
+			}
+		}
+		if (INSN_IS_16BIT(insn))
+			return truly_illegal_insn(vcpu, run, insn);
+	}
+
+	switch ((insn & INSN_OPCODE_MASK) >> INSN_OPCODE_SHIFT) {
+	case INSN_OPCODE_SYSTEM:
+		return system_opcode_insn(vcpu, run, insn);
+	default:
+		return truly_illegal_insn(vcpu, run, insn);
+	}
+}
+
 static int emulate_load(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			unsigned long fault_addr, unsigned long htinst)
 {
@@ -537,6 +605,10 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	ret = -EFAULT;
 	run->exit_reason = KVM_EXIT_UNKNOWN;
 	switch (scause) {
+	case EXC_INST_ILLEGAL:
+		if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
+			ret = illegal_inst_fault(vcpu, run, stval);
+		break;
 	case EXC_INST_GUEST_PAGE_FAULT:
 	case EXC_LOAD_GUEST_PAGE_FAULT:
 	case EXC_STORE_GUEST_PAGE_FAULT:
-- 
2.17.1
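
As a quick illustration of the decode constants added above (not part
of the patch): the canonical 32-bit WFI encoding is 0x10500073, so the
opcode extracted via INSN_OPCODE_MASK/INSN_OPCODE_SHIFT is 28 (SYSTEM)
and the masked encoding matches INSN_MATCH_WFI, which routes the trap
through system_opcode_insn(). A stand-alone sketch of the same checks:

	#include <assert.h>
	#include <stdint.h>

	#define INSN_OPCODE_MASK	0x007c
	#define INSN_OPCODE_SHIFT	2
	#define INSN_OPCODE_SYSTEM	28
	#define INSN_MASK_WFI		0xffffff00
	#define INSN_MATCH_WFI		0x10500000

	int main(void)
	{
		uint32_t wfi = 0x10500073;	/* canonical WFI encoding */

		/* Bits [6:2] select the major opcode: SYSTEM (0x1c == 28). */
		assert(((wfi & INSN_OPCODE_MASK) >> INSN_OPCODE_SHIFT) ==
		       INSN_OPCODE_SYSTEM);

		/* The upper 24 bits match the WFI pattern checked by
		 * system_opcode_insn(). */
		assert((wfi & INSN_MASK_WFI) == INSN_MATCH_WFI);

		return 0;
	}

The two low bits of the encoding are 0b11, i.e. a 32-bit instruction,
so INSN_LEN(insn) evaluates to 4 and sepc is advanced past the WFI
after the VCPU is unblocked.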

