From: Anup Patel <Anup.Patel@wdc.com>
To: Palmer Dabbelt <palmer@sifive.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Paolo Bonzini <pbonzini@redhat.com>, Radim K <rkrcmar@redhat.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Atish Patra <Atish.Patra@wdc.com>,
	Alistair Francis <Alistair.Francis@wdc.com>,
	Damien Le Moal <Damien.LeMoal@wdc.com>,
	Christoph Hellwig <hch@infradead.org>,
	Anup Patel <anup@brainfault.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"linux-riscv@lists.infradead.org" <linux-riscv@lists.infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Anup Patel <Anup.Patel@wdc.com>
Subject: [RFC PATCH 07/16] RISC-V: KVM: Implement VCPU world-switch
Date: Mon, 29 Jul 2019 11:57:05 +0000
Message-ID: <20190729115544.17895-8-anup.patel@wdc.com>
In-Reply-To: <20190729115544.17895-1-anup.patel@wdc.com>

This patch implements the VCPU world-switch for KVM RISC-V.

The KVM RISC-V world-switch (i.e. __kvm_riscv_switch_to()) mostly
switches general purpose registers, SSTATUS, STVEC, SSCRATCH and
HSTATUS CSRs. Other CSRs are switched via vcpu_load() and vcpu_put()
interface in kvm_arch_vcpu_load() and kvm_arch_vcpu_put() functions
respectively.

Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
 arch/riscv/include/asm/kvm_host.h |   9 +-
 arch/riscv/kernel/asm-offsets.c   |  76 ++++++++++++
 arch/riscv/kvm/Makefile           |   2 +-
 arch/riscv/kvm/vcpu.c             |  33 ++++-
 arch/riscv/kvm/vcpu_switch.S      | 193 ++++++++++++++++++++++++++++++
 5 files changed, 309 insertions(+), 4 deletions(-)
 create mode 100644 arch/riscv/kvm/vcpu_switch.S

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index aa89f1922da1..006785bd6474 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -113,6 +113,13 @@ struct kvm_vcpu_arch {
 	/* ISA feature bits (similar to MISA) */
 	unsigned long isa;
 
+	/* SSCRATCH and STVEC of Host */
+	unsigned long host_sscratch;
+	unsigned long host_stvec;
+
+	/* CPU context of Host */
+	struct kvm_cpu_context host_context;
+
 	/* CPU context of Guest VCPU */
 	struct kvm_cpu_context guest_context;
 
@@ -151,7 +158,7 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			unsigned long scause, unsigned long stval);
 
-static inline void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch) {}
+void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch);
 
 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index 9f5628c38ac9..711656710190 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -7,7 +7,9 @@
 #define GENERATING_ASM_OFFSETS
 
 #include <linux/kbuild.h>
+#include <linux/mm.h>
 #include <linux/sched.h>
+#include <asm/kvm_host.h>
 #include <asm/thread_info.h>
 #include <asm/ptrace.h>
 
@@ -109,6 +111,80 @@ void asm_offsets(void)
 	OFFSET(PT_SBADADDR, pt_regs, sbadaddr);
 	OFFSET(PT_SCAUSE, pt_regs, scause);
 
+	OFFSET(KVM_ARCH_GUEST_ZERO, kvm_vcpu_arch, guest_context.zero);
+	OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra);
+	OFFSET(KVM_ARCH_GUEST_SP, kvm_vcpu_arch, guest_context.sp);
+	OFFSET(KVM_ARCH_GUEST_GP, kvm_vcpu_arch, guest_context.gp);
+	OFFSET(KVM_ARCH_GUEST_TP, kvm_vcpu_arch, guest_context.tp);
+	OFFSET(KVM_ARCH_GUEST_T0, kvm_vcpu_arch, guest_context.t0);
+	OFFSET(KVM_ARCH_GUEST_T1, kvm_vcpu_arch, guest_context.t1);
+	OFFSET(KVM_ARCH_GUEST_T2, kvm_vcpu_arch, guest_context.t2);
+	OFFSET(KVM_ARCH_GUEST_S0, kvm_vcpu_arch, guest_context.s0);
+	OFFSET(KVM_ARCH_GUEST_S1, kvm_vcpu_arch, guest_context.s1);
+	OFFSET(KVM_ARCH_GUEST_A0, kvm_vcpu_arch, guest_context.a0);
+	OFFSET(KVM_ARCH_GUEST_A1, kvm_vcpu_arch, guest_context.a1);
+	OFFSET(KVM_ARCH_GUEST_A2, kvm_vcpu_arch, guest_context.a2);
+	OFFSET(KVM_ARCH_GUEST_A3, kvm_vcpu_arch, guest_context.a3);
+	OFFSET(KVM_ARCH_GUEST_A4, kvm_vcpu_arch, guest_context.a4);
+	OFFSET(KVM_ARCH_GUEST_A5, kvm_vcpu_arch, guest_context.a5);
+	OFFSET(KVM_ARCH_GUEST_A6, kvm_vcpu_arch, guest_context.a6);
+	OFFSET(KVM_ARCH_GUEST_A7, kvm_vcpu_arch, guest_context.a7);
+	OFFSET(KVM_ARCH_GUEST_S2, kvm_vcpu_arch, guest_context.s2);
+	OFFSET(KVM_ARCH_GUEST_S3, kvm_vcpu_arch, guest_context.s3);
+	OFFSET(KVM_ARCH_GUEST_S4, kvm_vcpu_arch, guest_context.s4);
+	OFFSET(KVM_ARCH_GUEST_S5, kvm_vcpu_arch, guest_context.s5);
+	OFFSET(KVM_ARCH_GUEST_S6, kvm_vcpu_arch, guest_context.s6);
+	OFFSET(KVM_ARCH_GUEST_S7, kvm_vcpu_arch, guest_context.s7);
+	OFFSET(KVM_ARCH_GUEST_S8, kvm_vcpu_arch, guest_context.s8);
+	OFFSET(KVM_ARCH_GUEST_S9, kvm_vcpu_arch, guest_context.s9);
+	OFFSET(KVM_ARCH_GUEST_S10, kvm_vcpu_arch, guest_context.s10);
+	OFFSET(KVM_ARCH_GUEST_S11, kvm_vcpu_arch, guest_context.s11);
+	OFFSET(KVM_ARCH_GUEST_T3, kvm_vcpu_arch, guest_context.t3);
+	OFFSET(KVM_ARCH_GUEST_T4, kvm_vcpu_arch, guest_context.t4);
+	OFFSET(KVM_ARCH_GUEST_T5, kvm_vcpu_arch, guest_context.t5);
+	OFFSET(KVM_ARCH_GUEST_T6, kvm_vcpu_arch, guest_context.t6);
+	OFFSET(KVM_ARCH_GUEST_SEPC, kvm_vcpu_arch, guest_context.sepc);
+	OFFSET(KVM_ARCH_GUEST_SSTATUS, kvm_vcpu_arch, guest_context.sstatus);
+	OFFSET(KVM_ARCH_GUEST_HSTATUS, kvm_vcpu_arch, guest_context.hstatus);
+
+	OFFSET(KVM_ARCH_HOST_ZERO, kvm_vcpu_arch, host_context.zero);
+	OFFSET(KVM_ARCH_HOST_RA, kvm_vcpu_arch, host_context.ra);
+	OFFSET(KVM_ARCH_HOST_SP, kvm_vcpu_arch, host_context.sp);
+	OFFSET(KVM_ARCH_HOST_GP, kvm_vcpu_arch, host_context.gp);
+	OFFSET(KVM_ARCH_HOST_TP, kvm_vcpu_arch, host_context.tp);
+	OFFSET(KVM_ARCH_HOST_T0, kvm_vcpu_arch, host_context.t0);
+	OFFSET(KVM_ARCH_HOST_T1, kvm_vcpu_arch, host_context.t1);
+	OFFSET(KVM_ARCH_HOST_T2, kvm_vcpu_arch, host_context.t2);
+	OFFSET(KVM_ARCH_HOST_S0, kvm_vcpu_arch, host_context.s0);
+	OFFSET(KVM_ARCH_HOST_S1, kvm_vcpu_arch, host_context.s1);
+	OFFSET(KVM_ARCH_HOST_A0, kvm_vcpu_arch, host_context.a0);
+	OFFSET(KVM_ARCH_HOST_A1, kvm_vcpu_arch, host_context.a1);
+	OFFSET(KVM_ARCH_HOST_A2, kvm_vcpu_arch, host_context.a2);
+	OFFSET(KVM_ARCH_HOST_A3, kvm_vcpu_arch, host_context.a3);
+	OFFSET(KVM_ARCH_HOST_A4, kvm_vcpu_arch, host_context.a4);
+	OFFSET(KVM_ARCH_HOST_A5, kvm_vcpu_arch, host_context.a5);
+	OFFSET(KVM_ARCH_HOST_A6, kvm_vcpu_arch, host_context.a6);
+	OFFSET(KVM_ARCH_HOST_A7, kvm_vcpu_arch, host_context.a7);
+	OFFSET(KVM_ARCH_HOST_S2, kvm_vcpu_arch, host_context.s2);
+	OFFSET(KVM_ARCH_HOST_S3, kvm_vcpu_arch, host_context.s3);
+	OFFSET(KVM_ARCH_HOST_S4, kvm_vcpu_arch, host_context.s4);
+	OFFSET(KVM_ARCH_HOST_S5, kvm_vcpu_arch, host_context.s5);
+	OFFSET(KVM_ARCH_HOST_S6, kvm_vcpu_arch, host_context.s6);
+	OFFSET(KVM_ARCH_HOST_S7, kvm_vcpu_arch, host_context.s7);
+	OFFSET(KVM_ARCH_HOST_S8, kvm_vcpu_arch, host_context.s8);
+	OFFSET(KVM_ARCH_HOST_S9, kvm_vcpu_arch, host_context.s9);
+	OFFSET(KVM_ARCH_HOST_S10, kvm_vcpu_arch, host_context.s10);
+	OFFSET(KVM_ARCH_HOST_S11, kvm_vcpu_arch, host_context.s11);
+	OFFSET(KVM_ARCH_HOST_T3, kvm_vcpu_arch, host_context.t3);
+	OFFSET(KVM_ARCH_HOST_T4, kvm_vcpu_arch, host_context.t4);
+	OFFSET(KVM_ARCH_HOST_T5, kvm_vcpu_arch, host_context.t5);
+	OFFSET(KVM_ARCH_HOST_T6, kvm_vcpu_arch, host_context.t6);
+	OFFSET(KVM_ARCH_HOST_SEPC, kvm_vcpu_arch, host_context.sepc);
+	OFFSET(KVM_ARCH_HOST_SSTATUS, kvm_vcpu_arch, host_context.sstatus);
+	OFFSET(KVM_ARCH_HOST_HSTATUS, kvm_vcpu_arch, host_context.hstatus);
+	OFFSET(KVM_ARCH_HOST_SSCRATCH, kvm_vcpu_arch, host_sscratch);
+	OFFSET(KVM_ARCH_HOST_STVEC, kvm_vcpu_arch, host_stvec);
+
 	/*
 	 * THREAD_{F,X}* might be larger than a S-type offset can handle, but
 	 * these are used in performance-sensitive assembly so we can't resort
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 37b5a59d4f4f..845579273727 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -8,6 +8,6 @@ ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm
 
 kvm-objs := $(common-objs-y)
 
-kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o
+kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o vcpu_switch.o
 
 obj-$(CONFIG_KVM) += kvm.o
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 37368eeb6c41..4ab9f803536e 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -546,14 +546,43 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	/* TODO: */
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	csr_write(CSR_HIDELEG, csr->hideleg);
+	csr_write(CSR_HEDELEG, csr->hedeleg);
+	csr_write(CSR_VSSTATUS, csr->vsstatus);
+	csr_write(CSR_VSIE, csr->vsie);
+	csr_write(CSR_VSTVEC, csr->vstvec);
+	csr_write(CSR_VSSCRATCH, csr->vsscratch);
+	csr_write(CSR_VSEPC, csr->vsepc);
+	csr_write(CSR_VSCAUSE, csr->vscause);
+	csr_write(CSR_VSTVAL, csr->vstval);
+	csr_write(CSR_VSIP, csr->vsip);
+	csr_write(CSR_VSATP, csr->vsatp);
 
 	kvm_riscv_stage2_update_pgtbl(vcpu);
+
+	vcpu->cpu = cpu;
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	/* TODO: */
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	vcpu->cpu = -1;
+
+	csr_write(CSR_HGATP, 0);
+	csr_write(CSR_HIDELEG, 0);
+	csr_write(CSR_HEDELEG, 0);
+	csr->vsstatus = csr_read(CSR_VSSTATUS);
+	csr->vsie = csr_read(CSR_VSIE);
+	csr->vstvec = csr_read(CSR_VSTVEC);
+	csr->vsscratch = csr_read(CSR_VSSCRATCH);
+	csr->vsepc = csr_read(CSR_VSEPC);
+	csr->vscause = csr_read(CSR_VSCAUSE);
+	csr->vstval = csr_read(CSR_VSTVAL);
+	csr->vsip = csr_read(CSR_VSIP);
+	csr->vsatp = csr_read(CSR_VSATP);
 }
 
 static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
new file mode 100644
index 000000000000..c5b85605bf73
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -0,0 +1,193 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *	Anup Patel <anup.patel@wdc.com>
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/asm-offsets.h>
+#include <asm/csr.h>
+
+	.text
+	.altmacro
+
+ENTRY(__kvm_riscv_switch_to)
+	/* Save Host GPRs (except A0 and T0-T6) */
+	REG_S	ra, (KVM_ARCH_HOST_RA)(a0)
+	REG_S	sp, (KVM_ARCH_HOST_SP)(a0)
+	REG_S	gp, (KVM_ARCH_HOST_GP)(a0)
+	REG_S	tp, (KVM_ARCH_HOST_TP)(a0)
+	REG_S	s0, (KVM_ARCH_HOST_S0)(a0)
+	REG_S	s1, (KVM_ARCH_HOST_S1)(a0)
+	REG_S	a1, (KVM_ARCH_HOST_A1)(a0)
+	REG_S	a2, (KVM_ARCH_HOST_A2)(a0)
+	REG_S	a3, (KVM_ARCH_HOST_A3)(a0)
+	REG_S	a4, (KVM_ARCH_HOST_A4)(a0)
+	REG_S	a5, (KVM_ARCH_HOST_A5)(a0)
+	REG_S	a6, (KVM_ARCH_HOST_A6)(a0)
+	REG_S	a7, (KVM_ARCH_HOST_A7)(a0)
+	REG_S	s2, (KVM_ARCH_HOST_S2)(a0)
+	REG_S	s3, (KVM_ARCH_HOST_S3)(a0)
+	REG_S	s4, (KVM_ARCH_HOST_S4)(a0)
+	REG_S	s5, (KVM_ARCH_HOST_S5)(a0)
+	REG_S	s6, (KVM_ARCH_HOST_S6)(a0)
+	REG_S	s7, (KVM_ARCH_HOST_S7)(a0)
+	REG_S	s8, (KVM_ARCH_HOST_S8)(a0)
+	REG_S	s9, (KVM_ARCH_HOST_S9)(a0)
+	REG_S	s10, (KVM_ARCH_HOST_S10)(a0)
+	REG_S	s11, (KVM_ARCH_HOST_S11)(a0)
+
+	/* Save Host SSTATUS, HSTATUS, SCRATCH and STVEC */
+	csrr	t0, CSR_SSTATUS
+	REG_S	t0, (KVM_ARCH_HOST_SSTATUS)(a0)
+	csrr	t1, CSR_HSTATUS
+	REG_S	t1, (KVM_ARCH_HOST_HSTATUS)(a0)
+	csrr	t2, CSR_SSCRATCH
+	REG_S	t2, (KVM_ARCH_HOST_SSCRATCH)(a0)
+	csrr	t3, CSR_STVEC
+	REG_S	t3, (KVM_ARCH_HOST_STVEC)(a0)
+
+	/* Change Host exception vector to return path */
+	la	t4, __kvm_switch_return
+	csrw	CSR_STVEC, t4
+
+	/* Restore Guest HSTATUS, SSTATUS and SEPC */
+	REG_L	t4, (KVM_ARCH_GUEST_SEPC)(a0)
+	csrw	CSR_SEPC, t4
+	REG_L	t5, (KVM_ARCH_GUEST_SSTATUS)(a0)
+	csrw	CSR_SSTATUS, t5
+	REG_L	t6, (KVM_ARCH_GUEST_HSTATUS)(a0)
+	csrw	CSR_HSTATUS, t6
+
+	/* Restore Guest GPRs (except A0) */
+	REG_L	ra, (KVM_ARCH_GUEST_RA)(a0)
+	REG_L	sp, (KVM_ARCH_GUEST_SP)(a0)
+	REG_L	gp, (KVM_ARCH_GUEST_GP)(a0)
+	REG_L	tp, (KVM_ARCH_GUEST_TP)(a0)
+	REG_L	t0, (KVM_ARCH_GUEST_T0)(a0)
+	REG_L	t1, (KVM_ARCH_GUEST_T1)(a0)
+	REG_L	t2, (KVM_ARCH_GUEST_T2)(a0)
+	REG_L	s0, (KVM_ARCH_GUEST_S0)(a0)
+	REG_L	s1, (KVM_ARCH_GUEST_S1)(a0)
+	REG_L	a1, (KVM_ARCH_GUEST_A1)(a0)
+	REG_L	a2, (KVM_ARCH_GUEST_A2)(a0)
+	REG_L	a3, (KVM_ARCH_GUEST_A3)(a0)
+	REG_L	a4, (KVM_ARCH_GUEST_A4)(a0)
+	REG_L	a5, (KVM_ARCH_GUEST_A5)(a0)
+	REG_L	a6, (KVM_ARCH_GUEST_A6)(a0)
+	REG_L	a7, (KVM_ARCH_GUEST_A7)(a0)
+	REG_L	s2, (KVM_ARCH_GUEST_S2)(a0)
+	REG_L	s3, (KVM_ARCH_GUEST_S3)(a0)
+	REG_L	s4, (KVM_ARCH_GUEST_S4)(a0)
+	REG_L	s5, (KVM_ARCH_GUEST_S5)(a0)
+	REG_L	s6, (KVM_ARCH_GUEST_S6)(a0)
+	REG_L	s7, (KVM_ARCH_GUEST_S7)(a0)
+	REG_L	s8, (KVM_ARCH_GUEST_S8)(a0)
+	REG_L	s9, (KVM_ARCH_GUEST_S9)(a0)
+	REG_L	s10, (KVM_ARCH_GUEST_S10)(a0)
+	REG_L	s11, (KVM_ARCH_GUEST_S11)(a0)
+	REG_L	t3, (KVM_ARCH_GUEST_T3)(a0)
+	REG_L	t4, (KVM_ARCH_GUEST_T4)(a0)
+	REG_L	t5, (KVM_ARCH_GUEST_T5)(a0)
+	REG_L	t6, (KVM_ARCH_GUEST_T6)(a0)
+
+	/* Save Host A0 in SSCRATCH */
+	csrw	CSR_SSCRATCH, a0
+
+	/* Restore Guest A0 */
+	REG_L	a0, (KVM_ARCH_GUEST_A0)(a0)
+
+	/* Resume Guest */
+	sret
+
+	/* Back to Host */
+	.align 2
+__kvm_switch_return:
+	/* Swap Guest A0 with SSCRATCH */
+	csrrw	a0, CSR_SSCRATCH, a0
+
+	/* Save Guest GPRs (except A0) */
+	REG_S	ra, (KVM_ARCH_GUEST_RA)(a0)
+	REG_S	sp, (KVM_ARCH_GUEST_SP)(a0)
+	REG_S	gp, (KVM_ARCH_GUEST_GP)(a0)
+	REG_S	tp, (KVM_ARCH_GUEST_TP)(a0)
+	REG_S	t0, (KVM_ARCH_GUEST_T0)(a0)
+	REG_S	t1, (KVM_ARCH_GUEST_T1)(a0)
+	REG_S	t2, (KVM_ARCH_GUEST_T2)(a0)
+	REG_S	s0, (KVM_ARCH_GUEST_S0)(a0)
+	REG_S	s1, (KVM_ARCH_GUEST_S1)(a0)
+	REG_S	a1, (KVM_ARCH_GUEST_A1)(a0)
+	REG_S	a2, (KVM_ARCH_GUEST_A2)(a0)
+	REG_S	a3, (KVM_ARCH_GUEST_A3)(a0)
+	REG_S	a4, (KVM_ARCH_GUEST_A4)(a0)
+	REG_S	a5, (KVM_ARCH_GUEST_A5)(a0)
+	REG_S	a6, (KVM_ARCH_GUEST_A6)(a0)
+	REG_S	a7, (KVM_ARCH_GUEST_A7)(a0)
+	REG_S	s2, (KVM_ARCH_GUEST_S2)(a0)
+	REG_S	s3, (KVM_ARCH_GUEST_S3)(a0)
+	REG_S	s4, (KVM_ARCH_GUEST_S4)(a0)
+	REG_S	s5, (KVM_ARCH_GUEST_S5)(a0)
+	REG_S	s6, (KVM_ARCH_GUEST_S6)(a0)
+	REG_S	s7, (KVM_ARCH_GUEST_S7)(a0)
+	REG_S	s8, (KVM_ARCH_GUEST_S8)(a0)
+	REG_S	s9, (KVM_ARCH_GUEST_S9)(a0)
+	REG_S	s10, (KVM_ARCH_GUEST_S10)(a0)
+	REG_S	s11, (KVM_ARCH_GUEST_S11)(a0)
+	REG_S	t3, (KVM_ARCH_GUEST_T3)(a0)
+	REG_S	t4, (KVM_ARCH_GUEST_T4)(a0)
+	REG_S	t5, (KVM_ARCH_GUEST_T5)(a0)
+	REG_S	t6, (KVM_ARCH_GUEST_T6)(a0)
+
+	/* Save Guest A0 */
+	csrr	t0, CSR_SSCRATCH
+	REG_S	t0, (KVM_ARCH_GUEST_A0)(a0)
+
+	/* Save Guest HSTATUS, SSTATUS, and SEPC */
+	csrr	t0, CSR_SEPC
+	REG_S	t0, (KVM_ARCH_GUEST_SEPC)(a0)
+	csrr	t1, CSR_SSTATUS
+	REG_S	t1, (KVM_ARCH_GUEST_SSTATUS)(a0)
+	csrr	t2, CSR_HSTATUS
+	REG_S	t2, (KVM_ARCH_GUEST_HSTATUS)(a0)
+
+	/* Restore Host SSTATUS, HSTATUS, SCRATCH and STVEC */
+	REG_L	t3, (KVM_ARCH_HOST_SSTATUS)(a0)
+	csrw	CSR_SSTATUS, t3
+	REG_L	t4, (KVM_ARCH_HOST_HSTATUS)(a0)
+	csrw	CSR_HSTATUS, t4
+	REG_L	t5, (KVM_ARCH_HOST_SSCRATCH)(a0)
+	csrw	CSR_SSCRATCH, t5
+	REG_L	t6, (KVM_ARCH_HOST_STVEC)(a0)
+	csrw	CSR_STVEC, t6
+
+	/* Restore Host GPRs (except A0 and T0-T6) */
+	REG_L	ra, (KVM_ARCH_HOST_RA)(a0)
+	REG_L	sp, (KVM_ARCH_HOST_SP)(a0)
+	REG_L	gp, (KVM_ARCH_HOST_GP)(a0)
+	REG_L	tp, (KVM_ARCH_HOST_TP)(a0)
+	REG_L	s0, (KVM_ARCH_HOST_S0)(a0)
+	REG_L	s1, (KVM_ARCH_HOST_S1)(a0)
+	REG_L	a1, (KVM_ARCH_HOST_A1)(a0)
+	REG_L	a2, (KVM_ARCH_HOST_A2)(a0)
+	REG_L	a3, (KVM_ARCH_HOST_A3)(a0)
+	REG_L	a4, (KVM_ARCH_HOST_A4)(a0)
+	REG_L	a5, (KVM_ARCH_HOST_A5)(a0)
+	REG_L	a6, (KVM_ARCH_HOST_A6)(a0)
+	REG_L	a7, (KVM_ARCH_HOST_A7)(a0)
+	REG_L	s2, (KVM_ARCH_HOST_S2)(a0)
+	REG_L	s3, (KVM_ARCH_HOST_S3)(a0)
+	REG_L	s4, (KVM_ARCH_HOST_S4)(a0)
+	REG_L	s5, (KVM_ARCH_HOST_S5)(a0)
+	REG_L	s6, (KVM_ARCH_HOST_S6)(a0)
+	REG_L	s7, (KVM_ARCH_HOST_S7)(a0)
+	REG_L	s8, (KVM_ARCH_HOST_S8)(a0)
+	REG_L	s9, (KVM_ARCH_HOST_S9)(a0)
+	REG_L	s10, (KVM_ARCH_HOST_S10)(a0)
+	REG_L	s11, (KVM_ARCH_HOST_S11)(a0)
+
+	/* Return to C code */
+	ret
ENDPROC(__kvm_riscv_switch_to)
-- 
2.17.1
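
For readers following the series: __kvm_riscv_switch_to() takes the
vcpu_arch pointer in a0, saves the host context, sret's into the guest,
and comes back like an ordinary C call once a guest trap vectors to
__kvm_switch_return. The sketch below only illustrates a possible
caller; the actual call site is wired up by a later patch in this
series, and the wrapper name here is an assumption for illustration:

static void kvm_riscv_vcpu_enter_guest(struct kvm_vcpu *vcpu)
{
	/*
	 * Illustrative only: a0 = &vcpu->arch. The assembly saves the
	 * host GPRs and CSRs into host_context, loads guest_context,
	 * and sret's to the guest. A trap from the guest lands on
	 * __kvm_switch_return, which restores host state and returns
	 * here as a normal function return.
	 */
	__kvm_riscv_switch_to(&vcpu->arch);
}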