* [PATCH v20 00/30] Add KVM LoongArch support
@ 2023-08-31  8:29 Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files Tianrui Zhao
                   ` (30 more replies)
  0 siblings, 31 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

From: zhaotianrui <zhaotianrui@loongson.cn>

This series adds KVM LoongArch support. Loongson 3A5000 supports hardware
assisted virtualization. With cpu virtualization, guest mode has its own
hw-supported user mode and kernel mode. With memory virtualization, there
are two-level hw mmu tables for guest mode and host mode. There is also a
separate hw cpu timer with constant frequency in guest mode, so that a vm
can migrate between hosts with different frequencies.
Currently, we are able to boot LoongArch Linux guests.

A few key aspects of KVM LoongArch added by this series are:
1. Enable the kvm hardware function when the kvm module is loaded.
2. Implement the VM and vcpu related ioctl interfaces such as vcpu create,
   vcpu run, etc. The GET_ONE_REG/SET_ONE_REG ioctl commands are used to
   get the general registers one by one.
3. Hardware accesses to the MMU, timer and csr are emulated in the kernel.
4. MMIO and iocsr devices, such as the APIC, IPI and pci devices, are
   emulated in user space.

The running environment of the LoongArch virt machine:
1. Cross toolchain to build the kernel and UEFI:
   $ wget https://github.com/loongson/build-tools/releases/download/2022.09.06/loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz
   tar -vxf loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz  -C /opt
   export PATH=/opt/cross-tools/bin:$PATH
   export LD_LIBRARY_PATH=/opt/cross-tools/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=/opt/cross-tools/loongarch64-unknown-linux-gnu/lib/:$LD_LIBRARY_PATH
2. This series is based on the Linux source tree:
   https://github.com/loongson/linux-loongarch-kvm
   Build command:
   git checkout kvm-loongarch
   make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu- loongson3_defconfig
   make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu-
3. QEMU hypervisor with LoongArch supported:
   https://github.com/loongson/qemu
   Build command:
   git checkout kvm-loongarch
   ./configure --target-list="loongarch64-softmmu"  --enable-kvm
   make
4. UEFI BIOS of the LoongArch virt machine:
   Link: https://github.com/tianocore/edk2-platforms/tree/master/Platform/Loongson/LoongArchQemuPkg#readme
5. You can also download the binaries we have already built:
   https://github.com/yangxiaojuan-loongson/qemu-binary
The command to boot the LoongArch virt machine:
   $ qemu-system-loongarch64 -machine virt -m 4G -cpu la464 \
   -smp 1 -bios QEMU_EFI.fd -kernel vmlinuz.efi -initrd ramdisk \
   -serial stdio   -monitor telnet:localhost:4495,server,nowait \
   -append "root=/dev/ram rdinit=/sbin/init console=ttyS0,115200" \
   --nographic

Changes for v20:
1. Remove the binary encodings of the virtualization instructions in
insn_def.h and csr_ops.S and directly use the default csrrd, csrwr
and csrxchg instructions, and make CONFIG_KVM depend on
AS_HAS_LVZ_EXTENSION, so binutils that already support these
instructions must be used to compile KVM. This makes the LoongArch
KVM code more maintainable.

Changes for v19:
1. Use the common interface xfer_to_guest_mode_handle_work to
check conditions before entering the guest.
2. Add vcpu dirty ring support.

Changes for v18:
1. Code cleanup for vcpu timer: remove unnecessary timer_period_ns,
timer_bias, timer_dyn_bias variables in kvm_vcpu_arch and rename
the stable_ktime_saved variable to expire.
2. Change the value of KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE to 40.

changes for v17:
1. Add the CONFIG_AS_HAS_LVZ_EXTENSION config option, which depends on
binutils that support the LVZ assembly instructions.
2. Change kvm mmu related functions, such as rename level2_ptw_pgd
to kvm_ptw_pgd, replace kvm_flush_range with kvm_ptw_pgd pagewalk
framework, replace kvm_arch.gpa_mm with kvm_arch.pgd, set
mark_page_dirty/kvm_set_pfn_dirty out of mmu_lock in kvm page fault
handling.
3. Replace kvm_loongarch_interrupt with standard kvm_interrupt
when injecting IRQ.
4. Replace vcpu_arch.last_exec_cpu with the existing vcpu.cpu, and
remove kvm_arch.online_vcpus and kvm_arch.is_migrating.
5. Remove EXCCODE_TLBNR and EXCCODE_TLBNX in kvm exception table,
since NR/NX bit is not set in kvm page fault handling.

Changes for v16:
1. Free the allocated memory of vmcs and kvm_loongarch_ops in kvm module
init/exit to avoid memory leaks.
2. Simplify some assembly code in switch.S by using pseudo-instructions
where necessary; the other instructions no longer need to be replaced.
3. Add kvm_{save,restore}_guest_gprs macros to replace the ld.d/st.d
guest register instructions during vcpu world switch.
4. It is more secure to disable irqs when flushing the guest tlb by gpa, so
replace preempt_disable with local_irq_save in kvm_flush_tlb_gpa.

Changes for v15:
1. Re-order some macros and variables in the LoongArch kvm headers,
grouping together those with related meanings.
2. Put some function definitions on one line, as splitting them is
unnecessary.
3. Re-name some macros such as KVM_REG_LOONGARCH_GPR.

Changes for v14:
1. Remove the macro CONFIG_KVM_GENERIC_HARDWARE_ENABLING in
loongarch/kvm/main.c, as it is not useful.
2. Add select KVM_GENERIC_HARDWARE_ENABLING in loongarch/kvm/Kconfig,
as it is used by virt/kvm.
3. Fix the LoongArch KVM source link in MAINTAINERS.
4. Improve LoongArch KVM documentation, such as add comment for
LoongArch kvm_regs.

Changes for v13:
1. Remove patch-28 "Implement probe virtualization when cpu init": the
virtualization information about FPU, PMP, LSX in guest.options/options_dyn
is not used, and the gcfg reg value can be read in kvm_hardware_enable, so
the previous cpu_probe_lvz function is removed.
2. Fix the vcpu_enable_cap interface: it should return -EINVAL directly, as
the FPU cap is enabled by default and no other caps are supported now.
3. Simplify the jirl instruction with jr when there is no return addr, and
simplify the case HW0 ... HW7 statement in interrupt.c.
4. Rename host_stack,host_gp in kvm_vcpu_arch to host_sp,host_tp.
5. Remove 'cpu' parameter in _kvm_check_requests, as 'cpu' is not used,
and remove 'cpu' parameter in kvm_check_vmid function, as it can get
cpu number by itself.

Changes for v12:
1. Improve the gcsr write/read/xchg interfaces to avoid the previous
instruction statements like parse_r and make the code easier to understand.
They are implemented in asm/insn-def.h, and the instructions consist
of "opcode", "rj", "rd" and "simm14" arguments.
2. Fix the maintainers list of LoongArch KVM.

Changes for v11:
1. Add maintainers for LoongArch KVM.

Changes for v10:
1. Fix grammatical problems in LoongArch documentation.
2. It is not necessary to save or restore the LOONGARCH_CSR_PGD when
vcpu put and vcpu load, so we remove it.

Changes for v9:
1. Apply the new defined interrupt number macros in loongarch.h to kvm,
such as INT_SWI0, INT_HWI0, INT_TI, INT_IPI, etc. And remove the
previous unused macros.
2. Remove unused variables in kvm_vcpu_arch, and reorder the variables
to make them more standard.

Changes for v8:
1. Adjust the cpu_data.guest.options structure: add the ases flag into
it and remove the previous guest.ases, to keep consistent with the host
cpu_data.options structure.
2. Remove the "#include <asm/kvm_host.h>" from files that also include
<linux/kvm_host.h>, since linux/kvm_host.h already includes
asm/kvm_host.h.
3. Fix some nonstandard spelling and grammar errors in comments, and
slightly improve the code formatting.

Changes for v7:
1. Fix the kvm_save/restore_hw_gcsr compiling warnings reported by
kernel test robot. The report link is:
https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
2. Fix loongarch kvm trace related compiling problems.

Changes for v6:
1. Fix the Documentation/virt/kvm/api.rst compile warning about
loongarch parts.

Changes for v5:
1. Implement the get/set mp_state ioctl interface. Only the
KVM_MP_STATE_RUNNABLE state is supported for now; other states
will be completed in the future. The state is also used when the vcpu
runs the idle instruction: if the vcpu state is changed to RUNNABLE,
the vcpu can be woken up.
2. Supplement the kvm documentation about the loongarch-specific parts,
such as adding api introductions for GET/SET_ONE_REG, GET/SET_FPU,
GET/SET_MP_STATE, etc.
3. Improve the kvm_switch_to_guest function in switch.S: remove the
previous tmp, tmp1 arguments and replace them with the t0, t1 registers.

Changes for v4:
1. Add a csr_need_update flag in _vcpu_put: most csr registers are
unchanged during a process context switch, so we need not update them
every time, only when the soft csr differs from the hardware. That is to
say, all csrs should be updated after the vcpu enters the guest; as for
set_csr_ioctl, we have written the soft csr to keep it consistent with
the hardware.
2. Improve the get/set_csr_ioctl interface: we set a SW, HW or INVALID
flag for each csr according to its features at kvm init. In
get/set_csr_ioctl, if the csr is HW, we use the gcsrrd/gcsrwr
instructions to access it; if the csr is SW, we emulate it in software;
anything else returns false.
3. Add a set_hw_gcsr function in csr_ops.S, used in set_csr_ioctl.
We have split the hw gcsr into three parts, so we can calculate the code
offset from the gcsr id and jump there to run the gcsrwr instruction. This
function simplifies the code and avoids the previous SET_HW_GCSR(XXX)
interface.
4. Improve kvm mmu functions, such as the flush page table and make clean
page table interfaces.

Changes for v3:
1. Remove the vpid array list in kvm_vcpu_arch and use a vpid variable here,
because a vpid will never be recycled if a vCPU migrates from physical CPU A
to B and back to A.
2. Make some constant variables in kvm_context global, such as vpid_mask,
guest_eentry, enter_guest, etc.
3. Add some new tracepoints, such as kvm_trace_idle, kvm_trace_cache,
kvm_trace_gspr, etc.
4. There is some duplicated code in kvm_handle_exit and kvm_vcpu_run,
so move it to a new function kvm_pre_enter_guest.
5. Change the RESUME_HOST, RESUME_GUEST values: return 1 for resume guest
and "<= 0" for resume host.
6. FCSR and FPU registers are saved/restored together.

Changes for v2:
1. Separate the original patch-01 and patch-03 into small patches; the
patches mainly contain kvm module init, module exit, vcpu create, vcpu run,
etc.
2. Remove the original KVM_{GET,SET}_CSRS ioctl from the kvm uapi header,
and use the common KVM_{GET,SET}_ONE_REG to access registers.
3. Use BIT(x) to replace the "1 << n_bits" statement.

Tianrui Zhao (30):
  LoongArch: KVM: Add kvm related header files
  LoongArch: KVM: Implement kvm module related interface
  LoongArch: KVM: Implement kvm hardware enable, disable interface
  LoongArch: KVM: Implement VM related functions
  LoongArch: KVM: Add vcpu related header files
  LoongArch: KVM: Implement vcpu create and destroy interface
  LoongArch: KVM: Implement vcpu run interface
  LoongArch: KVM: Implement vcpu handle exit interface
  LoongArch: KVM: Implement vcpu get, vcpu set registers
  LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
  LoongArch: KVM: Implement fpu related operations for vcpu
  LoongArch: KVM: Implement vcpu interrupt operations
  LoongArch: KVM: Implement misc vcpu related interfaces
  LoongArch: KVM: Implement vcpu load and vcpu put operations
  LoongArch: KVM: Implement vcpu status description
  LoongArch: KVM: Implement update VM id function
  LoongArch: KVM: Implement virtual machine tlb operations
  LoongArch: KVM: Implement vcpu timer operations
  LoongArch: KVM: Implement kvm mmu operations
  LoongArch: KVM: Implement handle csr exception
  LoongArch: KVM: Implement handle iocsr exception
  LoongArch: KVM: Implement handle idle exception
  LoongArch: KVM: Implement handle gspr exception
  LoongArch: KVM: Implement handle mmio exception
  LoongArch: KVM: Implement handle fpu exception
  LoongArch: KVM: Implement kvm exception vector
  LoongArch: KVM: Implement vcpu world switch
  LoongArch: KVM: Enable kvm config and add the makefile
  LoongArch: KVM: Supplement kvm document about LoongArch-specific part
  LoongArch: KVM: Add maintainers for LoongArch KVM

 Documentation/virt/kvm/api.rst             |  70 +-
 MAINTAINERS                                |  12 +
 arch/loongarch/Kbuild                      |   1 +
 arch/loongarch/Kconfig                     |   3 +
 arch/loongarch/configs/loongson3_defconfig |   2 +
 arch/loongarch/include/asm/inst.h          |  16 +
 arch/loongarch/include/asm/kvm_csr.h       | 222 +++++
 arch/loongarch/include/asm/kvm_host.h      | 238 ++++++
 arch/loongarch/include/asm/kvm_types.h     |  11 +
 arch/loongarch/include/asm/kvm_vcpu.h      |  95 +++
 arch/loongarch/include/asm/loongarch.h     |  19 +-
 arch/loongarch/include/uapi/asm/kvm.h      | 101 +++
 arch/loongarch/kernel/asm-offsets.c        |  32 +
 arch/loongarch/kvm/Kconfig                 |  45 ++
 arch/loongarch/kvm/Makefile                |  22 +
 arch/loongarch/kvm/csr_ops.S               |  67 ++
 arch/loongarch/kvm/exit.c                  | 702 ++++++++++++++++
 arch/loongarch/kvm/interrupt.c             | 113 +++
 arch/loongarch/kvm/main.c                  | 361 +++++++++
 arch/loongarch/kvm/mmu.c                   | 678 ++++++++++++++++
 arch/loongarch/kvm/switch.S                | 255 ++++++
 arch/loongarch/kvm/timer.c                 | 200 +++++
 arch/loongarch/kvm/tlb.c                   |  34 +
 arch/loongarch/kvm/trace.h                 | 168 ++++
 arch/loongarch/kvm/vcpu.c                  | 898 +++++++++++++++++++++
 arch/loongarch/kvm/vm.c                    |  76 ++
 arch/loongarch/kvm/vmid.c                  |  66 ++
 include/uapi/linux/kvm.h                   |   9 +
 28 files changed, 4502 insertions(+), 14 deletions(-)
 create mode 100644 arch/loongarch/include/asm/kvm_csr.h
 create mode 100644 arch/loongarch/include/asm/kvm_host.h
 create mode 100644 arch/loongarch/include/asm/kvm_types.h
 create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
 create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
 create mode 100644 arch/loongarch/kvm/Kconfig
 create mode 100644 arch/loongarch/kvm/Makefile
 create mode 100644 arch/loongarch/kvm/csr_ops.S
 create mode 100644 arch/loongarch/kvm/exit.c
 create mode 100644 arch/loongarch/kvm/interrupt.c
 create mode 100644 arch/loongarch/kvm/main.c
 create mode 100644 arch/loongarch/kvm/mmu.c
 create mode 100644 arch/loongarch/kvm/switch.S
 create mode 100644 arch/loongarch/kvm/timer.c
 create mode 100644 arch/loongarch/kvm/tlb.c
 create mode 100644 arch/loongarch/kvm/trace.h
 create mode 100644 arch/loongarch/kvm/vcpu.c
 create mode 100644 arch/loongarch/kvm/vm.c
 create mode 100644 arch/loongarch/kvm/vmid.c

-- 
2.27.0



* [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-09-11  4:59   ` Huacai Chen
  2023-08-31  8:29 ` [PATCH v20 02/30] LoongArch: KVM: Implement kvm module related interface Tianrui Zhao
                   ` (29 subsequent siblings)
  30 siblings, 1 reply; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Add the LoongArch KVM related header files, including kvm.h,
kvm_host.h and kvm_types.h. They cover the LoongArch
virtualization features and kvm interfaces.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/kvm_host.h  | 238 +++++++++++++++++++++++++
 arch/loongarch/include/asm/kvm_types.h |  11 ++
 arch/loongarch/include/uapi/asm/kvm.h  | 101 +++++++++++
 include/uapi/linux/kvm.h               |   9 +
 4 files changed, 359 insertions(+)
 create mode 100644 arch/loongarch/include/asm/kvm_host.h
 create mode 100644 arch/loongarch/include/asm/kvm_types.h
 create mode 100644 arch/loongarch/include/uapi/asm/kvm.h

diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
new file mode 100644
index 0000000000..9f23ddaaae
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_host.h
@@ -0,0 +1,238 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_HOST_H__
+#define __ASM_LOONGARCH_KVM_HOST_H__
+
+#include <linux/cpumask.h>
+#include <linux/mutex.h>
+#include <linux/hrtimer.h>
+#include <linux/interrupt.h>
+#include <linux/types.h>
+#include <linux/kvm.h>
+#include <linux/kvm_types.h>
+#include <linux/threads.h>
+#include <linux/spinlock.h>
+
+#include <asm/inst.h>
+#include <asm/loongarch.h>
+
+/* Loongarch KVM register ids */
+#define LOONGARCH_CSR_32(_R, _S)	\
+	(KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U32 | (8 * (_R) + (_S)))
+
+#define LOONGARCH_CSR_64(_R, _S)	\
+	(KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * (_R) + (_S)))
+
+#define KVM_IOC_CSRID(id)		LOONGARCH_CSR_64(id, 0)
+#define KVM_GET_IOC_CSRIDX(id)		((id & KVM_CSR_IDX_MASK) >> 3)
+
+#define KVM_MAX_VCPUS			256
+/* memory slots that are not exposed to userspace */
+#define KVM_PRIVATE_MEM_SLOTS		0
+
+#define KVM_HALT_POLL_NS_DEFAULT	500000
+
+struct kvm_vm_stat {
+	struct kvm_vm_stat_generic generic;
+};
+
+struct kvm_vcpu_stat {
+	struct kvm_vcpu_stat_generic generic;
+	u64 idle_exits;
+	u64 signal_exits;
+	u64 int_exits;
+	u64 cpucfg_exits;
+};
+
+struct kvm_arch_memory_slot {
+};
+
+struct kvm_context {
+	unsigned long vpid_cache;
+	struct kvm_vcpu *last_vcpu;
+};
+
+struct kvm_world_switch {
+	int (*guest_eentry)(void);
+	int (*enter_guest)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+	unsigned long page_order;
+};
+
+struct kvm_arch {
+	/* Guest physical mm */
+	pgd_t *pgd;
+	unsigned long gpa_size;
+
+	s64 time_offset;
+	struct kvm_context __percpu *vmcs;
+};
+
+#define CSR_MAX_NUMS		0x800
+
+struct loongarch_csrs {
+	unsigned long csrs[CSR_MAX_NUMS];
+};
+
+/* Resume Flags */
+#define RESUME_HOST		0
+#define RESUME_GUEST		1
+
+enum emulation_result {
+	EMULATE_DONE,		/* no further processing */
+	EMULATE_DO_MMIO,	/* kvm_run filled with MMIO request */
+	EMULATE_FAIL,		/* can't emulate this instruction */
+	EMULATE_EXCEPT,		/* A guest exception has been generated */
+	EMULATE_DO_IOCSR,	/* handle IOCSR request */
+};
+
+#define KVM_LARCH_CSR		(0x1 << 1)
+#define KVM_LARCH_FPU		(0x1 << 0)
+
+struct kvm_vcpu_arch {
+	/*
+	 * Switch pointer-to-function type to unsigned long
+	 * for loading the value into register directly.
+	 */
+	unsigned long host_eentry;
+	unsigned long guest_eentry;
+
+	/* Pointers stored here for easy accessing from assembly code */
+	int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
+
+	/* Host registers preserved across guest mode execution */
+	unsigned long host_sp;
+	unsigned long host_tp;
+	unsigned long host_pgd;
+
+	/* Host CSRs are used when handling exits from guest */
+	unsigned long badi;
+	unsigned long badv;
+	unsigned long host_ecfg;
+	unsigned long host_estat;
+	unsigned long host_percpu;
+
+	/* GPRs */
+	unsigned long gprs[32];
+	unsigned long pc;
+
+	/* Which auxiliary state is loaded (KVM_LOONGARCH_AUX_*) */
+	unsigned int aux_inuse;
+	/* FPU state */
+	struct loongarch_fpu fpu FPU_ALIGN;
+
+	/* CSR state */
+	struct loongarch_csrs *csr;
+
+	/* GPR used as IO source/target */
+	u32 io_gpr;
+
+	struct hrtimer swtimer;
+	/* KVM register to control count timer */
+	u32 count_ctl;
+
+	/* Bitmask of exceptions that are pending */
+	unsigned long irq_pending;
+	/* Bitmask of pending exceptions to be cleared */
+	unsigned long irq_clear;
+
+	/* Cache for pages needed inside spinlock regions */
+	struct kvm_mmu_memory_cache mmu_page_cache;
+
+	/* vcpu's vpid */
+	u64 vpid;
+
+	/* Frequency of stable timer in Hz */
+	u64 timer_mhz;
+	ktime_t expire;
+
+	u64 core_ext_ioisr[4];
+
+	/* Last CPU the vCPU state was loaded on */
+	int last_sched_cpu;
+	/* mp state */
+	struct kvm_mp_state mp_state;
+};
+
+static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg)
+{
+	return csr->csrs[reg];
+}
+
+static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg, unsigned long val)
+{
+	csr->csrs[reg] = val;
+}
+
+/* Helpers */
+static inline bool _kvm_guest_has_fpu(struct kvm_vcpu_arch *arch)
+{
+	return cpu_has_fpu;
+}
+
+void _kvm_init_fault(void);
+
+/* Debug: dump vcpu state */
+int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
+
+/* MMU handling */
+int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write);
+void kvm_flush_tlb_all(void);
+void _kvm_destroy_mm(struct kvm *kvm);
+pgd_t *kvm_pgd_alloc(void);
+
+#define KVM_ARCH_WANT_MMU_NOTIFIER
+int kvm_unmap_hva_range(struct kvm *kvm,
+			unsigned long start, unsigned long end, bool blockable);
+void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
+int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
+
+static inline void update_pc(struct kvm_vcpu_arch *arch)
+{
+	arch->pc += 4;
+}
+
+/**
+ * kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault.
+ * @arch:	Virtual CPU architecture state.
+ *
+ * Returns:	Whether the TLBL exception was likely due to an instruction
+ *		fetch fault rather than a data load fault.
+ */
+static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
+{
+	return arch->pc == arch->badv;
+}
+
+/* Misc */
+static inline void kvm_arch_hardware_unsetup(void) {}
+static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_free_memslot(struct kvm *kvm,
+				   struct kvm_memory_slot *slot) {}
+void _kvm_check_vmid(struct kvm_vcpu *vcpu);
+enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer);
+int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa);
+void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
+					const struct kvm_memory_slot *memslot);
+void kvm_init_vmcs(struct kvm *kvm);
+void kvm_vector_entry(void);
+int  kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu);
+extern const unsigned long kvm_vector_size;
+extern const unsigned long kvm_enter_guest_size;
+extern unsigned long vpid_mask;
+extern struct kvm_world_switch *kvm_loongarch_ops;
+
+#define SW_GCSR		(1 << 0)
+#define HW_GCSR		(1 << 1)
+#define INVALID_GCSR	(1 << 2)
+int get_gcsr_flag(int csr);
+extern void set_hw_gcsr(int csr_id, unsigned long val);
+#endif /* __ASM_LOONGARCH_KVM_HOST_H__ */
diff --git a/arch/loongarch/include/asm/kvm_types.h b/arch/loongarch/include/asm/kvm_types.h
new file mode 100644
index 0000000000..2fe1d4bdff
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_types.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef _ASM_LOONGARCH_KVM_TYPES_H
+#define _ASM_LOONGARCH_KVM_TYPES_H
+
+#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE	40
+
+#endif /* _ASM_LOONGARCH_KVM_TYPES_H */
diff --git a/arch/loongarch/include/uapi/asm/kvm.h b/arch/loongarch/include/uapi/asm/kvm.h
new file mode 100644
index 0000000000..7ec2f34018
--- /dev/null
+++ b/arch/loongarch/include/uapi/asm/kvm.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __UAPI_ASM_LOONGARCH_KVM_H
+#define __UAPI_ASM_LOONGARCH_KVM_H
+
+#include <linux/types.h>
+
+/*
+ * KVM Loongarch specific structures and definitions.
+ *
+ * Some parts derived from the x86 version of this file.
+ */
+
+#define __KVM_HAVE_READONLY_MEM
+
+#define KVM_COALESCED_MMIO_PAGE_OFFSET	1
+#define KVM_DIRTY_LOG_PAGE_OFFSET	64
+
+/*
+ * for KVM_GET_REGS and KVM_SET_REGS
+ */
+struct kvm_regs {
+	/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
+	__u64 gpr[32];
+	__u64 pc;
+};
+
+/*
+ * for KVM_GET_FPU and KVM_SET_FPU
+ */
+struct kvm_fpu {
+	__u32 fcsr;
+	__u64 fcc;    /* 8x8 */
+	struct kvm_fpureg {
+		__u64 val64[4];
+	} fpr[32];
+};
+
+/*
+ * For LoongArch, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various
+ * registers.  The id field is broken down as follows:
+ *
+ *  bits[63..52] - As per linux/kvm.h
+ *  bits[51..32] - Must be zero.
+ *  bits[31..16] - Register set.
+ *
+ * Register set = 0: GP registers from kvm_regs (see definitions below).
+ *
+ * Register set = 1: CSR registers.
+ *
+ * Register set = 2: KVM specific registers (see definitions below).
+ *
+ * Register set = 3: FPU / SIMD registers (see definitions below).
+ *
+ * Other sets registers may be added in the future.  Each set would
+ * have its own identifier in bits[31..16].
+ */
+
+#define KVM_REG_LOONGARCH_GPR		(KVM_REG_LOONGARCH | 0x00000ULL)
+#define KVM_REG_LOONGARCH_CSR		(KVM_REG_LOONGARCH | 0x10000ULL)
+#define KVM_REG_LOONGARCH_KVM		(KVM_REG_LOONGARCH | 0x20000ULL)
+#define KVM_REG_LOONGARCH_FPU		(KVM_REG_LOONGARCH | 0x30000ULL)
+#define KVM_REG_LOONGARCH_MASK		(KVM_REG_LOONGARCH | 0x30000ULL)
+#define KVM_CSR_IDX_MASK		(0x10000 - 1)
+
+/*
+ * KVM_REG_LOONGARCH_KVM - KVM specific control registers.
+ */
+
+#define KVM_REG_LOONGARCH_COUNTER	(KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 3)
+#define KVM_REG_LOONGARCH_VCPU_RESET	(KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 4)
+
+struct kvm_debug_exit_arch {
+};
+
+/* for KVM_SET_GUEST_DEBUG */
+struct kvm_guest_debug_arch {
+};
+
+/* definition of registers in kvm_run */
+struct kvm_sync_regs {
+};
+
+/* dummy definition */
+struct kvm_sregs {
+};
+
+struct kvm_iocsr_entry {
+	__u32 addr;
+	__u32 pad;
+	__u64 data;
+};
+
+#define KVM_NR_IRQCHIPS		1
+#define KVM_IRQCHIP_NUM_PINS	64
+#define KVM_MAX_CORES		256
+
+#endif /* __UAPI_ASM_LOONGARCH_KVM_H */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index f089ab2909..1184171224 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -264,6 +264,7 @@ struct kvm_xen_exit {
 #define KVM_EXIT_RISCV_SBI        35
 #define KVM_EXIT_RISCV_CSR        36
 #define KVM_EXIT_NOTIFY           37
+#define KVM_EXIT_LOONGARCH_IOCSR  38
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -336,6 +337,13 @@ struct kvm_run {
 			__u32 len;
 			__u8  is_write;
 		} mmio;
+		/* KVM_EXIT_LOONGARCH_IOCSR */
+		struct {
+			__u64 phys_addr;
+			__u8  data[8];
+			__u32 len;
+			__u8  is_write;
+		} iocsr_io;
 		/* KVM_EXIT_HYPERCALL */
 		struct {
 			__u64 nr;
@@ -1362,6 +1370,7 @@ struct kvm_dirty_tlb {
 #define KVM_REG_ARM64		0x6000000000000000ULL
 #define KVM_REG_MIPS		0x7000000000000000ULL
 #define KVM_REG_RISCV		0x8000000000000000ULL
+#define KVM_REG_LOONGARCH	0x9000000000000000ULL
 
 #define KVM_REG_SIZE_SHIFT	52
 #define KVM_REG_SIZE_MASK	0x00f0000000000000ULL
-- 
2.27.0



* [PATCH v20 02/30] LoongArch: KVM: Implement kvm module related interface
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
                   ` (28 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the LoongArch kvm module init and exit interfaces,
using the kvm context to save the vpid info and the vcpu world
switch interface pointer.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/main.c | 299 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 299 insertions(+)
 create mode 100644 arch/loongarch/kvm/main.c

diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
new file mode 100644
index 0000000000..c204853b8c
--- /dev/null
+++ b/arch/loongarch/kvm/main.c
@@ -0,0 +1,299 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/kvm_host.h>
+#include <asm/cacheflush.h>
+#include <asm/kvm_csr.h>
+
+static struct kvm_context __percpu *vmcs;
+struct kvm_world_switch *kvm_loongarch_ops;
+unsigned long vpid_mask;
+static int gcsr_flag[CSR_MAX_NUMS];
+
+int get_gcsr_flag(int csr)
+{
+	if (csr < CSR_MAX_NUMS)
+		return gcsr_flag[csr];
+
+	return INVALID_GCSR;
+}
+
+static inline void set_gcsr_sw_flag(int csr)
+{
+	if (csr < CSR_MAX_NUMS)
+		gcsr_flag[csr] |= SW_GCSR;
+}
+
+static inline void set_gcsr_hw_flag(int csr)
+{
+	if (csr < CSR_MAX_NUMS)
+		gcsr_flag[csr] |= HW_GCSR;
+}
+
+/*
+ * The default value of gcsr_flag[CSR] is 0, and this function
+ * sets the flag to 1 (SW_GCSR) or 2 (HW_GCSR) depending on whether
+ * the gcsr is emulated in software or backed by hardware. It is
+ * used by get/set_gcsr: if gcsr_flag is HW, use gcsrrd/gcsrwr to
+ * access the csr, otherwise emulate it with the soft csr.
+ */
+static void _kvm_init_gcsr_flag(void)
+{
+	set_gcsr_hw_flag(LOONGARCH_CSR_CRMD);
+	set_gcsr_hw_flag(LOONGARCH_CSR_PRMD);
+	set_gcsr_hw_flag(LOONGARCH_CSR_EUEN);
+	set_gcsr_hw_flag(LOONGARCH_CSR_MISC);
+	set_gcsr_hw_flag(LOONGARCH_CSR_ECFG);
+	set_gcsr_hw_flag(LOONGARCH_CSR_ESTAT);
+	set_gcsr_hw_flag(LOONGARCH_CSR_ERA);
+	set_gcsr_hw_flag(LOONGARCH_CSR_BADV);
+	set_gcsr_hw_flag(LOONGARCH_CSR_BADI);
+	set_gcsr_hw_flag(LOONGARCH_CSR_EENTRY);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBIDX);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBEHI);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBELO0);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBELO1);
+	set_gcsr_hw_flag(LOONGARCH_CSR_ASID);
+	set_gcsr_hw_flag(LOONGARCH_CSR_PGDL);
+	set_gcsr_hw_flag(LOONGARCH_CSR_PGDH);
+	set_gcsr_hw_flag(LOONGARCH_CSR_PWCTL0);
+	set_gcsr_hw_flag(LOONGARCH_CSR_PWCTL1);
+	set_gcsr_hw_flag(LOONGARCH_CSR_STLBPGSIZE);
+	set_gcsr_hw_flag(LOONGARCH_CSR_RVACFG);
+	set_gcsr_hw_flag(LOONGARCH_CSR_CPUID);
+	set_gcsr_hw_flag(LOONGARCH_CSR_PRCFG1);
+	set_gcsr_hw_flag(LOONGARCH_CSR_PRCFG2);
+	set_gcsr_hw_flag(LOONGARCH_CSR_PRCFG3);
+	set_gcsr_hw_flag(LOONGARCH_CSR_KS0);
+	set_gcsr_hw_flag(LOONGARCH_CSR_KS1);
+	set_gcsr_hw_flag(LOONGARCH_CSR_KS2);
+	set_gcsr_hw_flag(LOONGARCH_CSR_KS3);
+	set_gcsr_hw_flag(LOONGARCH_CSR_KS4);
+	set_gcsr_hw_flag(LOONGARCH_CSR_KS5);
+	set_gcsr_hw_flag(LOONGARCH_CSR_KS6);
+	set_gcsr_hw_flag(LOONGARCH_CSR_KS7);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TMID);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TCFG);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TVAL);
+	set_gcsr_hw_flag(LOONGARCH_CSR_CNTC);
+	set_gcsr_hw_flag(LOONGARCH_CSR_LLBCTL);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBRENTRY);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBRBADV);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBRERA);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBRSAVE);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBRELO0);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBRELO1);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBREHI);
+	set_gcsr_hw_flag(LOONGARCH_CSR_TLBRPRMD);
+	set_gcsr_hw_flag(LOONGARCH_CSR_DMWIN0);
+	set_gcsr_hw_flag(LOONGARCH_CSR_DMWIN1);
+	set_gcsr_hw_flag(LOONGARCH_CSR_DMWIN2);
+	set_gcsr_hw_flag(LOONGARCH_CSR_DMWIN3);
+
+	set_gcsr_sw_flag(LOONGARCH_CSR_IMPCTL1);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IMPCTL2);
+	set_gcsr_sw_flag(LOONGARCH_CSR_MERRCTL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_MERRINFO1);
+	set_gcsr_sw_flag(LOONGARCH_CSR_MERRINFO2);
+	set_gcsr_sw_flag(LOONGARCH_CSR_MERRENTRY);
+	set_gcsr_sw_flag(LOONGARCH_CSR_MERRERA);
+	set_gcsr_sw_flag(LOONGARCH_CSR_MERRSAVE);
+	set_gcsr_sw_flag(LOONGARCH_CSR_CTAG);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DEBUG);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DERA);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DESAVE);
+	set_gcsr_sw_flag(LOONGARCH_CSR_PGD);
+	set_gcsr_sw_flag(LOONGARCH_CSR_TINTCLR);
+
+	set_gcsr_sw_flag(LOONGARCH_CSR_FWPS);
+	set_gcsr_sw_flag(LOONGARCH_CSR_FWPC);
+	set_gcsr_sw_flag(LOONGARCH_CSR_MWPS);
+	set_gcsr_sw_flag(LOONGARCH_CSR_MWPC);
+
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB0ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB0MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB0CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB0ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB1ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB1MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB1CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB1ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB2ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB2MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB2CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB2ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB3ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB3MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB3CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB3ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB4ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB4MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB4CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB4ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB5ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB5MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB5CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB5ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB6ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB6MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB6CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB6ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB7ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB7MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB7CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_DB7ASID);
+
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB0ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB0MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB0CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB0ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB1ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB1MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB1CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB1ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB2ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB2MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB2CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB2ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB3ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB3MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB3CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB3ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB4ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB4MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB4CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB4ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB5ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB5MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB5CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB5ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB6ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB6MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB6CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB6ASID);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB7ADDR);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB7MASK);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB7CTRL);
+	set_gcsr_sw_flag(LOONGARCH_CSR_IB7ASID);
+
+	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCTRL0);
+	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR0);
+	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCTRL1);
+	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR1);
+	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCTRL2);
+	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR2);
+	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCTRL3);
+	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR3);
+}
+
+static int kvm_loongarch_env_init(void)
+{
+	struct kvm_context *context;
+	int cpu, order;
+	void *addr;
+
+	vmcs = alloc_percpu(struct kvm_context);
+	if (!vmcs) {
+		pr_err("kvm: failed to allocate percpu kvm_context\n");
+		return -ENOMEM;
+	}
+
+	kvm_loongarch_ops = kzalloc(sizeof(*kvm_loongarch_ops), GFP_KERNEL);
+	if (!kvm_loongarch_ops) {
+		free_percpu(vmcs);
+		vmcs = NULL;
+		return -ENOMEM;
+	}
+	/*
+	 * The world switch code will run into trouble if a page fault
+	 * re-enters it, since the pgd register is shared between the
+	 * root kernel and the kvm hypervisor. The world switch entry
+	 * therefore needs to live in an unmapped area rather than a
+	 * TLB-mapped area. Once hardware page table walking is
+	 * supported, or separate pgd registers exist for the root
+	 * kernel and the kvm hypervisor, this copy of the world switch
+	 * code can go away.
+	 */
+
+	order = get_order(kvm_vector_size + kvm_enter_guest_size);
+	addr = (void *)__get_free_pages(GFP_KERNEL, order);
+	if (!addr) {
+		free_percpu(vmcs);
+		vmcs = NULL;
+		kfree(kvm_loongarch_ops);
+		kvm_loongarch_ops = NULL;
+		return -ENOMEM;
+	}
+
+	memcpy(addr, kvm_vector_entry, kvm_vector_size);
+	memcpy(addr + kvm_vector_size, kvm_enter_guest, kvm_enter_guest_size);
+	flush_icache_range((unsigned long)addr, (unsigned long)addr +
+				kvm_vector_size + kvm_enter_guest_size);
+	kvm_loongarch_ops->guest_eentry = addr;
+	kvm_loongarch_ops->enter_guest = addr + kvm_vector_size;
+	kvm_loongarch_ops->page_order = order;
+
+	vpid_mask = read_csr_gstat();
+	vpid_mask = (vpid_mask & CSR_GSTAT_GIDBIT) >> CSR_GSTAT_GIDBIT_SHIFT;
+	if (vpid_mask)
+		vpid_mask = GENMASK(vpid_mask - 1, 0);
+
+	for_each_possible_cpu(cpu) {
+		context = per_cpu_ptr(vmcs, cpu);
+		context->vpid_cache = vpid_mask + 1;
+		context->last_vcpu = NULL;
+	}
+
+	_kvm_init_fault();
+	_kvm_init_gcsr_flag();
+
+	return 0;
+}
+
+static void kvm_loongarch_env_exit(void)
+{
+	unsigned long addr;
+
+	if (vmcs)
+		free_percpu(vmcs);
+
+	if (kvm_loongarch_ops) {
+		if (kvm_loongarch_ops->guest_eentry) {
+			addr = (unsigned long)kvm_loongarch_ops->guest_eentry;
+			free_pages(addr, kvm_loongarch_ops->page_order);
+		}
+		kfree(kvm_loongarch_ops);
+	}
+}
+
+static int kvm_loongarch_init(void)
+{
+	int r;
+
+	if (!cpu_has_lvz) {
+		kvm_info("hardware virtualization not available\n");
+		return -ENODEV;
+	}
+	r = kvm_loongarch_env_init();
+	if (r)
+		return r;
+
+	return kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE);
+}
+
+static void kvm_loongarch_exit(void)
+{
+	kvm_exit();
+	kvm_loongarch_env_exit();
+}
+
+module_init(kvm_loongarch_init);
+module_exit(kvm_loongarch_exit);
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 02/30] LoongArch: KVM: Implement kvm module related interface Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 04/30] LoongArch: KVM: Implement VM related functions Tianrui Zhao
                   ` (27 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the kvm hardware enable and disable interfaces, programming
the guest configuration registers to enable virtualization features
when the interfaces are called.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/main.c | 62 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
index c204853b8c..46a042735d 100644
--- a/arch/loongarch/kvm/main.c
+++ b/arch/loongarch/kvm/main.c
@@ -195,6 +195,68 @@ static void _kvm_init_gcsr_flag(void)
 	set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR3);
 }
 
+void kvm_init_vmcs(struct kvm *kvm)
+{
+	kvm->arch.vmcs = vmcs;
+}
+
+long kvm_arch_dev_ioctl(struct file *filp,
+			unsigned int ioctl, unsigned long arg)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_hardware_enable(void)
+{
+	unsigned long env, gcfg = 0;
+
+	env = read_csr_gcfg();
+	/* First init gtlbc, gcfg, gstat, gintc. All guests use the same config */
+	clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI);
+	write_csr_gcfg(0);
+	write_csr_gstat(0);
+	write_csr_gintc(0);
+
+	/*
+	 * Enable virtualization features granting guest direct control of
+	 * certain features:
+	 * GCI=2:       Trap on init or unimplemented cache instructions.
+	 * TORU=0:      Trap on Root Unimplemented.
+	 * CACTRL=1:    Root controls the cache.
+	 * TOP=0:       Trap on Privilege.
+	 * TOE=0:       Trap on Exception.
+	 * TIT=0:       Trap on Timer.
+	 */
+	if (env & CSR_GCFG_GCIP_ALL)
+		gcfg |= CSR_GCFG_GCI_SECURE;
+	if (env & CSR_GCFG_MATC_ROOT)
+		gcfg |= CSR_GCFG_MATC_ROOT;
+
+	gcfg |= CSR_GCFG_TIT;
+	write_csr_gcfg(gcfg);
+
+	kvm_flush_tlb_all();
+
+	/* Enable using TGID */
+	set_csr_gtlbc(CSR_GTLBC_USETGID);
+	kvm_debug("gtlbc:%lx gintc:%lx gstat:%lx gcfg:%lx",
+			read_csr_gtlbc(), read_csr_gintc(),
+			read_csr_gstat(), read_csr_gcfg());
+
+	return 0;
+}
+
+void kvm_arch_hardware_disable(void)
+{
+	clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI);
+	write_csr_gcfg(0);
+	write_csr_gstat(0);
+	write_csr_gintc(0);
+
+	/* Flush any remaining guest TLB entries */
+	kvm_flush_tlb_all();
+}
+
 static int kvm_loongarch_env_init(void)
 {
 	struct kvm_context *context;
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 04/30] LoongArch: KVM: Implement VM related functions
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (2 preceding siblings ...)
  2023-08-31  8:29 ` [PATCH v20 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 05/30] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
                   ` (26 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement LoongArch VM operations: the VM init and destroy interfaces,
allocating a memory page to hold the VM pgd at init time; the VM
check-extension interface, reporting information such as the vcpu
count, memory slot count and fpu support; and the VM stats description.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vm.c | 76 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)
 create mode 100644 arch/loongarch/kvm/vm.c

diff --git a/arch/loongarch/kvm/vm.c b/arch/loongarch/kvm/vm.c
new file mode 100644
index 0000000000..bde78f0633
--- /dev/null
+++ b/arch/loongarch/kvm/vm.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+
+const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
+	KVM_GENERIC_VM_STATS(),
+};
+
+const struct kvm_stats_header kvm_vm_stats_header = {
+	.name_size = KVM_STATS_NAME_SIZE,
+	.num_desc = ARRAY_SIZE(kvm_vm_stats_desc),
+	.id_offset =  sizeof(struct kvm_stats_header),
+	.desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE,
+	.data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE +
+					sizeof(kvm_vm_stats_desc),
+};
+
+int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+{
+	/* Allocate page table to map GPA -> RPA */
+	kvm->arch.pgd = kvm_pgd_alloc();
+	if (!kvm->arch.pgd)
+		return -ENOMEM;
+
+	kvm_init_vmcs(kvm);
+	kvm->arch.gpa_size = BIT(cpu_vabits - 1);
+	return 0;
+}
+
+void kvm_arch_destroy_vm(struct kvm *kvm)
+{
+	kvm_destroy_vcpus(kvm);
+	_kvm_destroy_mm(kvm);
+}
+
+int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+{
+	int r;
+
+	switch (ext) {
+	case KVM_CAP_ONE_REG:
+	case KVM_CAP_ENABLE_CAP:
+	case KVM_CAP_READONLY_MEM:
+	case KVM_CAP_SYNC_MMU:
+	case KVM_CAP_IMMEDIATE_EXIT:
+	case KVM_CAP_IOEVENTFD:
+	case KVM_CAP_MP_STATE:
+		r = 1;
+		break;
+	case KVM_CAP_NR_VCPUS:
+		r = num_online_cpus();
+		break;
+	case KVM_CAP_MAX_VCPUS:
+		r = KVM_MAX_VCPUS;
+		break;
+	case KVM_CAP_MAX_VCPU_ID:
+		r = KVM_MAX_VCPU_IDS;
+		break;
+	case KVM_CAP_NR_MEMSLOTS:
+		r = KVM_USER_MEM_SLOTS;
+		break;
+	default:
+		r = 0;
+		break;
+	}
+
+	return r;
+}
+
+int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
+{
+	return -ENOIOCTLCMD;
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 05/30] LoongArch: KVM: Add vcpu related header files
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (3 preceding siblings ...)
  2023-08-31  8:29 ` [PATCH v20 04/30] LoongArch: KVM: Implement VM related functions Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-09-11  8:07   ` Huacai Chen
  2023-08-31  8:29 ` [PATCH v20 06/30] LoongArch: KVM: Implement vcpu create and destroy interface Tianrui Zhao
                   ` (25 subsequent siblings)
  30 siblings, 1 reply; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Add LoongArch vcpu related header files, including vcpu csr
information, irq number defines, and some vcpu interfaces.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/kvm_csr.h   | 222 +++++++++++++++++++++++++
 arch/loongarch/include/asm/kvm_vcpu.h  |  95 +++++++++++
 arch/loongarch/include/asm/loongarch.h |  19 ++-
 arch/loongarch/kvm/trace.h             | 168 +++++++++++++++++++
 4 files changed, 499 insertions(+), 5 deletions(-)
 create mode 100644 arch/loongarch/include/asm/kvm_csr.h
 create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
 create mode 100644 arch/loongarch/kvm/trace.h

diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
new file mode 100644
index 0000000000..e27dcacd00
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_csr.h
@@ -0,0 +1,222 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_CSR_H__
+#define __ASM_LOONGARCH_KVM_CSR_H__
+#include <asm/loongarch.h>
+#include <asm/kvm_vcpu.h>
+#include <linux/uaccess.h>
+#include <linux/kvm_host.h>
+
+/* binutils supports the virtualization instructions */
+#define gcsr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__(					\
+		" gcsrrd %[val], %[reg]\n\t"			\
+		: [val] "=r" (__v)				\
+		: [reg] "i" (csr)				\
+		: "memory");					\
+	__v;							\
+})
+
+#define gcsr_write(v, csr)					\
+({								\
+	register unsigned long __v = v;				\
+	__asm__ __volatile__ (					\
+		" gcsrwr %[val], %[reg]\n\t"			\
+		: [val] "+r" (__v)				\
+		: [reg] "i" (csr)				\
+		: "memory");					\
+})
+
+#define gcsr_xchg(v, m, csr)					\
+({								\
+	register unsigned long __v = v;				\
+	__asm__ __volatile__(					\
+		" gcsrxchg %[val], %[mask], %[reg]\n\t"		\
+		: [val] "+r" (__v)				\
+		: [mask] "r" (m), [reg] "i" (csr)		\
+		: "memory");					\
+	__v;							\
+})
+
+/* Guest CSRS read and write */
+#define read_gcsr_crmd()		gcsr_read(LOONGARCH_CSR_CRMD)
+#define write_gcsr_crmd(val)		gcsr_write(val, LOONGARCH_CSR_CRMD)
+#define read_gcsr_prmd()		gcsr_read(LOONGARCH_CSR_PRMD)
+#define write_gcsr_prmd(val)		gcsr_write(val, LOONGARCH_CSR_PRMD)
+#define read_gcsr_euen()		gcsr_read(LOONGARCH_CSR_EUEN)
+#define write_gcsr_euen(val)		gcsr_write(val, LOONGARCH_CSR_EUEN)
+#define read_gcsr_misc()		gcsr_read(LOONGARCH_CSR_MISC)
+#define write_gcsr_misc(val)		gcsr_write(val, LOONGARCH_CSR_MISC)
+#define read_gcsr_ecfg()		gcsr_read(LOONGARCH_CSR_ECFG)
+#define write_gcsr_ecfg(val)		gcsr_write(val, LOONGARCH_CSR_ECFG)
+#define read_gcsr_estat()		gcsr_read(LOONGARCH_CSR_ESTAT)
+#define write_gcsr_estat(val)		gcsr_write(val, LOONGARCH_CSR_ESTAT)
+#define read_gcsr_era()			gcsr_read(LOONGARCH_CSR_ERA)
+#define write_gcsr_era(val)		gcsr_write(val, LOONGARCH_CSR_ERA)
+#define read_gcsr_badv()		gcsr_read(LOONGARCH_CSR_BADV)
+#define write_gcsr_badv(val)		gcsr_write(val, LOONGARCH_CSR_BADV)
+#define read_gcsr_badi()		gcsr_read(LOONGARCH_CSR_BADI)
+#define write_gcsr_badi(val)		gcsr_write(val, LOONGARCH_CSR_BADI)
+#define read_gcsr_eentry()		gcsr_read(LOONGARCH_CSR_EENTRY)
+#define write_gcsr_eentry(val)		gcsr_write(val, LOONGARCH_CSR_EENTRY)
+
+#define read_gcsr_tlbidx()		gcsr_read(LOONGARCH_CSR_TLBIDX)
+#define write_gcsr_tlbidx(val)		gcsr_write(val, LOONGARCH_CSR_TLBIDX)
+#define read_gcsr_tlbhi()		gcsr_read(LOONGARCH_CSR_TLBEHI)
+#define write_gcsr_tlbhi(val)		gcsr_write(val, LOONGARCH_CSR_TLBEHI)
+#define read_gcsr_tlblo0()		gcsr_read(LOONGARCH_CSR_TLBELO0)
+#define write_gcsr_tlblo0(val)		gcsr_write(val, LOONGARCH_CSR_TLBELO0)
+#define read_gcsr_tlblo1()		gcsr_read(LOONGARCH_CSR_TLBELO1)
+#define write_gcsr_tlblo1(val)		gcsr_write(val, LOONGARCH_CSR_TLBELO1)
+
+#define read_gcsr_asid()		gcsr_read(LOONGARCH_CSR_ASID)
+#define write_gcsr_asid(val)		gcsr_write(val, LOONGARCH_CSR_ASID)
+#define read_gcsr_pgdl()		gcsr_read(LOONGARCH_CSR_PGDL)
+#define write_gcsr_pgdl(val)		gcsr_write(val, LOONGARCH_CSR_PGDL)
+#define read_gcsr_pgdh()		gcsr_read(LOONGARCH_CSR_PGDH)
+#define write_gcsr_pgdh(val)		gcsr_write(val, LOONGARCH_CSR_PGDH)
+#define write_gcsr_pgd(val)		gcsr_write(val, LOONGARCH_CSR_PGD)
+#define read_gcsr_pgd()			gcsr_read(LOONGARCH_CSR_PGD)
+#define read_gcsr_pwctl0()		gcsr_read(LOONGARCH_CSR_PWCTL0)
+#define write_gcsr_pwctl0(val)		gcsr_write(val, LOONGARCH_CSR_PWCTL0)
+#define read_gcsr_pwctl1()		gcsr_read(LOONGARCH_CSR_PWCTL1)
+#define write_gcsr_pwctl1(val)		gcsr_write(val, LOONGARCH_CSR_PWCTL1)
+#define read_gcsr_stlbpgsize()		gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
+#define write_gcsr_stlbpgsize(val)	gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
+#define read_gcsr_rvacfg()		gcsr_read(LOONGARCH_CSR_RVACFG)
+#define write_gcsr_rvacfg(val)		gcsr_write(val, LOONGARCH_CSR_RVACFG)
+
+#define read_gcsr_cpuid()		gcsr_read(LOONGARCH_CSR_CPUID)
+#define write_gcsr_cpuid(val)		gcsr_write(val, LOONGARCH_CSR_CPUID)
+#define read_gcsr_prcfg1()		gcsr_read(LOONGARCH_CSR_PRCFG1)
+#define write_gcsr_prcfg1(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG1)
+#define read_gcsr_prcfg2()		gcsr_read(LOONGARCH_CSR_PRCFG2)
+#define write_gcsr_prcfg2(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG2)
+#define read_gcsr_prcfg3()		gcsr_read(LOONGARCH_CSR_PRCFG3)
+#define write_gcsr_prcfg3(val)		gcsr_write(val, LOONGARCH_CSR_PRCFG3)
+
+#define read_gcsr_kscratch0()		gcsr_read(LOONGARCH_CSR_KS0)
+#define write_gcsr_kscratch0(val)	gcsr_write(val, LOONGARCH_CSR_KS0)
+#define read_gcsr_kscratch1()		gcsr_read(LOONGARCH_CSR_KS1)
+#define write_gcsr_kscratch1(val)	gcsr_write(val, LOONGARCH_CSR_KS1)
+#define read_gcsr_kscratch2()		gcsr_read(LOONGARCH_CSR_KS2)
+#define write_gcsr_kscratch2(val)	gcsr_write(val, LOONGARCH_CSR_KS2)
+#define read_gcsr_kscratch3()		gcsr_read(LOONGARCH_CSR_KS3)
+#define write_gcsr_kscratch3(val)	gcsr_write(val, LOONGARCH_CSR_KS3)
+#define read_gcsr_kscratch4()		gcsr_read(LOONGARCH_CSR_KS4)
+#define write_gcsr_kscratch4(val)	gcsr_write(val, LOONGARCH_CSR_KS4)
+#define read_gcsr_kscratch5()		gcsr_read(LOONGARCH_CSR_KS5)
+#define write_gcsr_kscratch5(val)	gcsr_write(val, LOONGARCH_CSR_KS5)
+#define read_gcsr_kscratch6()		gcsr_read(LOONGARCH_CSR_KS6)
+#define write_gcsr_kscratch6(val)	gcsr_write(val, LOONGARCH_CSR_KS6)
+#define read_gcsr_kscratch7()		gcsr_read(LOONGARCH_CSR_KS7)
+#define write_gcsr_kscratch7(val)	gcsr_write(val, LOONGARCH_CSR_KS7)
+
+#define read_gcsr_timerid()		gcsr_read(LOONGARCH_CSR_TMID)
+#define write_gcsr_timerid(val)		gcsr_write(val, LOONGARCH_CSR_TMID)
+#define read_gcsr_timercfg()		gcsr_read(LOONGARCH_CSR_TCFG)
+#define write_gcsr_timercfg(val)	gcsr_write(val, LOONGARCH_CSR_TCFG)
+#define read_gcsr_timertick()		gcsr_read(LOONGARCH_CSR_TVAL)
+#define write_gcsr_timertick(val)	gcsr_write(val, LOONGARCH_CSR_TVAL)
+#define read_gcsr_timeroffset()		gcsr_read(LOONGARCH_CSR_CNTC)
+#define write_gcsr_timeroffset(val)	gcsr_write(val, LOONGARCH_CSR_CNTC)
+
+#define read_gcsr_llbctl()		gcsr_read(LOONGARCH_CSR_LLBCTL)
+#define write_gcsr_llbctl(val)		gcsr_write(val, LOONGARCH_CSR_LLBCTL)
+
+#define read_gcsr_tlbrentry()		gcsr_read(LOONGARCH_CSR_TLBRENTRY)
+#define write_gcsr_tlbrentry(val)	gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
+#define read_gcsr_tlbrbadv()		gcsr_read(LOONGARCH_CSR_TLBRBADV)
+#define write_gcsr_tlbrbadv(val)	gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
+#define read_gcsr_tlbrera()		gcsr_read(LOONGARCH_CSR_TLBRERA)
+#define write_gcsr_tlbrera(val)		gcsr_write(val, LOONGARCH_CSR_TLBRERA)
+#define read_gcsr_tlbrsave()		gcsr_read(LOONGARCH_CSR_TLBRSAVE)
+#define write_gcsr_tlbrsave(val)	gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
+#define read_gcsr_tlbrelo0()		gcsr_read(LOONGARCH_CSR_TLBRELO0)
+#define write_gcsr_tlbrelo0(val)	gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
+#define read_gcsr_tlbrelo1()		gcsr_read(LOONGARCH_CSR_TLBRELO1)
+#define write_gcsr_tlbrelo1(val)	gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
+#define read_gcsr_tlbrehi()		gcsr_read(LOONGARCH_CSR_TLBREHI)
+#define write_gcsr_tlbrehi(val)		gcsr_write(val, LOONGARCH_CSR_TLBREHI)
+#define read_gcsr_tlbrprmd()		gcsr_read(LOONGARCH_CSR_TLBRPRMD)
+#define write_gcsr_tlbrprmd(val)	gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
+
+#define read_gcsr_directwin0()		gcsr_read(LOONGARCH_CSR_DMWIN0)
+#define write_gcsr_directwin0(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN0)
+#define read_gcsr_directwin1()		gcsr_read(LOONGARCH_CSR_DMWIN1)
+#define write_gcsr_directwin1(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN1)
+#define read_gcsr_directwin2()		gcsr_read(LOONGARCH_CSR_DMWIN2)
+#define write_gcsr_directwin2(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN2)
+#define read_gcsr_directwin3()		gcsr_read(LOONGARCH_CSR_DMWIN3)
+#define write_gcsr_directwin3(val)	gcsr_write(val, LOONGARCH_CSR_DMWIN3)
+
+/* Guest related CSRs */
+#define read_csr_gtlbc()		csr_read64(LOONGARCH_CSR_GTLBC)
+#define write_csr_gtlbc(val)		csr_write64(val, LOONGARCH_CSR_GTLBC)
+#define read_csr_trgp()			csr_read64(LOONGARCH_CSR_TRGP)
+#define read_csr_gcfg()			csr_read64(LOONGARCH_CSR_GCFG)
+#define write_csr_gcfg(val)		csr_write64(val, LOONGARCH_CSR_GCFG)
+#define read_csr_gstat()		csr_read64(LOONGARCH_CSR_GSTAT)
+#define write_csr_gstat(val)		csr_write64(val, LOONGARCH_CSR_GSTAT)
+#define read_csr_gintc()		csr_read64(LOONGARCH_CSR_GINTC)
+#define write_csr_gintc(val)		csr_write64(val, LOONGARCH_CSR_GINTC)
+#define read_csr_gcntc()		csr_read64(LOONGARCH_CSR_GCNTC)
+#define write_csr_gcntc(val)		csr_write64(val, LOONGARCH_CSR_GCNTC)
+
+#define __BUILD_GCSR_OP(name)		__BUILD_CSR_COMMON(gcsr_##name)
+
+__BUILD_GCSR_OP(llbctl)
+__BUILD_GCSR_OP(tlbidx)
+__BUILD_CSR_OP(gcfg)
+__BUILD_CSR_OP(gstat)
+__BUILD_CSR_OP(gtlbc)
+__BUILD_CSR_OP(gintc)
+
+#define set_gcsr_estat(val)	\
+	gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
+#define clear_gcsr_estat(val)	\
+	gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
+
+#define kvm_read_hw_gcsr(id)		gcsr_read(id)
+#define kvm_write_hw_gcsr(csr, id, val)	gcsr_write(val, id)
+
+int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v);
+int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v);
+
+int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
+
+#define kvm_save_hw_gcsr(csr, gid)	(csr->csrs[gid] = gcsr_read(gid))
+#define kvm_restore_hw_gcsr(csr, gid)	(gcsr_write(csr->csrs[gid], gid))
+
+static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
+{
+	return csr->csrs[gid];
+}
+
+static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr,
+					      int gid, unsigned long val)
+{
+	csr->csrs[gid] = val;
+}
+
+static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr,
+					    int gid, unsigned long val)
+{
+	csr->csrs[gid] |= val;
+}
+
+static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr,
+					       int gid, unsigned long mask,
+					       unsigned long val)
+{
+	unsigned long _mask = mask;
+
+	csr->csrs[gid] &= ~_mask;
+	csr->csrs[gid] |= val & _mask;
+}
+#endif	/* __ASM_LOONGARCH_KVM_CSR_H__ */
diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
new file mode 100644
index 0000000000..3d23a656fe
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_vcpu.h
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
+#define __ASM_LOONGARCH_KVM_VCPU_H__
+
+#include <linux/kvm_host.h>
+#include <asm/loongarch.h>
+
+/* Controlled by 0x5 guest estat */
+#define CPU_SIP0			(_ULCAST_(1))
+#define CPU_SIP1			(_ULCAST_(1) << 1)
+#define CPU_PMU				(_ULCAST_(1) << 10)
+#define CPU_TIMER			(_ULCAST_(1) << 11)
+#define CPU_IPI				(_ULCAST_(1) << 12)
+
+/*
+ * Controlled by 0x52 guest exception VIP,
+ * aligned to estat bits 5~12
+ */
+#define CPU_IP0				(_ULCAST_(1))
+#define CPU_IP1				(_ULCAST_(1) << 1)
+#define CPU_IP2				(_ULCAST_(1) << 2)
+#define CPU_IP3				(_ULCAST_(1) << 3)
+#define CPU_IP4				(_ULCAST_(1) << 4)
+#define CPU_IP5				(_ULCAST_(1) << 5)
+#define CPU_IP6				(_ULCAST_(1) << 6)
+#define CPU_IP7				(_ULCAST_(1) << 7)
+
+#define MNSEC_PER_SEC			(NSEC_PER_SEC >> 20)
+
+/* KVM_IRQ_LINE irq field index values */
+#define KVM_LOONGSON_IRQ_TYPE_SHIFT	24
+#define KVM_LOONGSON_IRQ_TYPE_MASK	0xff
+#define KVM_LOONGSON_IRQ_VCPU_SHIFT	16
+#define KVM_LOONGSON_IRQ_VCPU_MASK	0xff
+#define KVM_LOONGSON_IRQ_NUM_SHIFT	0
+#define KVM_LOONGSON_IRQ_NUM_MASK	0xffff
+
+/* Irq_type field */
+#define KVM_LOONGSON_IRQ_TYPE_CPU_IP	0
+#define KVM_LOONGSON_IRQ_TYPE_CPU_IO	1
+#define KVM_LOONGSON_IRQ_TYPE_HT	2
+#define KVM_LOONGSON_IRQ_TYPE_MSI	3
+#define KVM_LOONGSON_IRQ_TYPE_IOAPIC	4
+#define KVM_LOONGSON_IRQ_TYPE_ROUTE	5
+
+/* Out-of-kernel GIC cpu interrupt injection irq_number field */
+#define KVM_LOONGSON_IRQ_CPU_IRQ	0
+#define KVM_LOONGSON_IRQ_CPU_FIQ	1
+#define KVM_LOONGSON_CPU_IP_NUM		8
+
+typedef union loongarch_instruction  larch_inst;
+typedef int (*exit_handle_fn)(struct kvm_vcpu *);
+
+int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
+int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
+int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
+int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
+int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
+int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
+void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
+
+void kvm_own_fpu(struct kvm_vcpu *vcpu);
+void kvm_lose_fpu(struct kvm_vcpu *vcpu);
+void kvm_save_fpu(struct loongarch_fpu *fpu);
+void kvm_restore_fpu(struct loongarch_fpu *fpu);
+void kvm_restore_fcsr(struct loongarch_fpu *fpu);
+
+void kvm_acquire_timer(struct kvm_vcpu *vcpu);
+void kvm_reset_timer(struct kvm_vcpu *vcpu);
+void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
+void kvm_restore_timer(struct kvm_vcpu *vcpu);
+void kvm_save_timer(struct kvm_vcpu *vcpu);
+
+int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq);
+/*
+ * LoongArch KVM guest interrupt handling
+ */
+static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	set_bit(irq, &vcpu->arch.irq_pending);
+	clear_bit(irq, &vcpu->arch.irq_clear);
+}
+
+static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
+{
+	clear_bit(irq, &vcpu->arch.irq_pending);
+	set_bit(irq, &vcpu->arch.irq_clear);
+}
+
+#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
index 10748a20a2..b9044c8dfa 100644
--- a/arch/loongarch/include/asm/loongarch.h
+++ b/arch/loongarch/include/asm/loongarch.h
@@ -269,6 +269,7 @@ __asm__(".macro	parse_r var r\n\t"
 #define LOONGARCH_CSR_ECFG		0x4	/* Exception config */
 #define  CSR_ECFG_VS_SHIFT		16
 #define  CSR_ECFG_VS_WIDTH		3
+#define  CSR_ECFG_VS_SHIFT_END		(CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
 #define  CSR_ECFG_VS			(_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
 #define  CSR_ECFG_IM_SHIFT		0
 #define  CSR_ECFG_IM_WIDTH		14
@@ -357,13 +358,14 @@ __asm__(".macro	parse_r var r\n\t"
 #define  CSR_TLBLO1_V			(_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
 
 #define LOONGARCH_CSR_GTLBC		0x15	/* Guest TLB control */
-#define  CSR_GTLBC_RID_SHIFT		16
-#define  CSR_GTLBC_RID_WIDTH		8
-#define  CSR_GTLBC_RID			(_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
+#define  CSR_GTLBC_TGID_SHIFT		16
+#define  CSR_GTLBC_TGID_WIDTH		8
+#define  CSR_GTLBC_TGID_SHIFT_END	(CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
+#define  CSR_GTLBC_TGID			(_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
 #define  CSR_GTLBC_TOTI_SHIFT		13
 #define  CSR_GTLBC_TOTI			(_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
-#define  CSR_GTLBC_USERID_SHIFT		12
-#define  CSR_GTLBC_USERID		(_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
+#define  CSR_GTLBC_USETGID_SHIFT	12
+#define  CSR_GTLBC_USETGID		(_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
 #define  CSR_GTLBC_GMTLBSZ_SHIFT	0
 #define  CSR_GTLBC_GMTLBSZ_WIDTH	6
 #define  CSR_GTLBC_GMTLBSZ		(_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
@@ -518,6 +520,7 @@ __asm__(".macro	parse_r var r\n\t"
 #define LOONGARCH_CSR_GSTAT		0x50	/* Guest status */
 #define  CSR_GSTAT_GID_SHIFT		16
 #define  CSR_GSTAT_GID_WIDTH		8
+#define  CSR_GSTAT_GID_SHIFT_END	(CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
 #define  CSR_GSTAT_GID			(_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
 #define  CSR_GSTAT_GIDBIT_SHIFT		4
 #define  CSR_GSTAT_GIDBIT_WIDTH		6
@@ -568,6 +571,12 @@ __asm__(".macro	parse_r var r\n\t"
 #define  CSR_GCFG_MATC_GUEST		(_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
 #define  CSR_GCFG_MATC_ROOT		(_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
 #define  CSR_GCFG_MATC_NEST		(_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
+#define  CSR_GCFG_MATP_NEST_SHIFT	2
+#define  CSR_GCFG_MATP_NEST		(_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT)
+#define  CSR_GCFG_MATP_ROOT_SHIFT	1
+#define  CSR_GCFG_MATP_ROOT		(_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT)
+#define  CSR_GCFG_MATP_GUEST_SHIFT	0
+#define  CSR_GCFG_MATP_GUEST		(_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT)
 
 #define LOONGARCH_CSR_GINTC		0x52	/* Guest interrupt control */
 #define  CSR_GINTC_HC_SHIFT		16
diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
new file mode 100644
index 0000000000..17b28d94d5
--- /dev/null
+++ b/arch/loongarch/kvm/trace.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_KVM_H
+
+#include <linux/tracepoint.h>
+#include <asm/kvm_csr.h>
+
+#undef	TRACE_SYSTEM
+#define TRACE_SYSTEM	kvm
+
+/*
+ * Tracepoints for VM enters
+ */
+DECLARE_EVENT_CLASS(kvm_transition,
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu),
+	TP_STRUCT__entry(
+		__field(unsigned long, pc)
+	),
+
+	TP_fast_assign(
+		__entry->pc = vcpu->arch.pc;
+	),
+
+	TP_printk("PC: 0x%08lx",
+		  __entry->pc)
+);
+
+DEFINE_EVENT(kvm_transition, kvm_enter,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+DEFINE_EVENT(kvm_transition, kvm_reenter,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+DEFINE_EVENT(kvm_transition, kvm_out,
+	     TP_PROTO(struct kvm_vcpu *vcpu),
+	     TP_ARGS(vcpu));
+
+/* Further exit reasons */
+#define KVM_TRACE_EXIT_IDLE		64
+#define KVM_TRACE_EXIT_CACHE		65
+#define KVM_TRACE_EXIT_SIGNAL		66
+
+/* Tracepoints for VM exits */
+#define kvm_trace_symbol_exit_types			\
+	{ KVM_TRACE_EXIT_IDLE,		"IDLE" },	\
+	{ KVM_TRACE_EXIT_CACHE,		"CACHE" },	\
+	{ KVM_TRACE_EXIT_SIGNAL,	"Signal" }
+
+TRACE_EVENT(kvm_exit_gspr,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
+	    TP_ARGS(vcpu, inst_word),
+	    TP_STRUCT__entry(
+			__field(unsigned int, inst_word)
+	    ),
+
+	    TP_fast_assign(
+			__entry->inst_word = inst_word;
+	    ),
+
+	    TP_printk("inst word: 0x%08x",
+		      __entry->inst_word)
+);
+
+
+DECLARE_EVENT_CLASS(kvm_exit,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	    TP_ARGS(vcpu, reason),
+	    TP_STRUCT__entry(
+			__field(unsigned long, pc)
+			__field(unsigned int, reason)
+	    ),
+
+	    TP_fast_assign(
+			__entry->pc = vcpu->arch.pc;
+			__entry->reason = reason;
+	    ),
+
+	    TP_printk("[%s]PC: 0x%08lx",
+		      __print_symbolic(__entry->reason,
+				       kvm_trace_symbol_exit_types),
+		      __entry->pc)
+);
+
+DEFINE_EVENT(kvm_exit, kvm_exit_idle,
+	     TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	     TP_ARGS(vcpu, reason));
+
+DEFINE_EVENT(kvm_exit, kvm_exit_cache,
+	     TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	     TP_ARGS(vcpu, reason));
+
+DEFINE_EVENT(kvm_exit, kvm_exit,
+	     TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
+	     TP_ARGS(vcpu, reason));
+
+#define KVM_TRACE_AUX_RESTORE		0
+#define KVM_TRACE_AUX_SAVE		1
+#define KVM_TRACE_AUX_ENABLE		2
+#define KVM_TRACE_AUX_DISABLE		3
+#define KVM_TRACE_AUX_DISCARD		4
+
+#define KVM_TRACE_AUX_FPU		1
+
+#define kvm_trace_symbol_aux_op				\
+	{ KVM_TRACE_AUX_RESTORE,	"restore" },	\
+	{ KVM_TRACE_AUX_SAVE,		"save" },	\
+	{ KVM_TRACE_AUX_ENABLE,		"enable" },	\
+	{ KVM_TRACE_AUX_DISABLE,	"disable" },	\
+	{ KVM_TRACE_AUX_DISCARD,	"discard" }
+
+#define kvm_trace_symbol_aux_state			\
+	{ KVM_TRACE_AUX_FPU,     "FPU" }
+
+TRACE_EVENT(kvm_aux,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
+		     unsigned int state),
+	    TP_ARGS(vcpu, op, state),
+	    TP_STRUCT__entry(
+			__field(unsigned long, pc)
+			__field(u8, op)
+			__field(u8, state)
+	    ),
+
+	    TP_fast_assign(
+			__entry->pc = vcpu->arch.pc;
+			__entry->op = op;
+			__entry->state = state;
+	    ),
+
+	    TP_printk("%s %s PC: 0x%08lx",
+		      __print_symbolic(__entry->op,
+				       kvm_trace_symbol_aux_op),
+		      __print_symbolic(__entry->state,
+				       kvm_trace_symbol_aux_state),
+		      __entry->pc)
+);
+
+TRACE_EVENT(kvm_vpid_change,
+	    TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
+	    TP_ARGS(vcpu, vpid),
+	    TP_STRUCT__entry(
+			__field(unsigned long, vpid)
+	    ),
+
+	    TP_fast_assign(
+			__entry->vpid = vpid;
+	    ),
+
+	    TP_printk("vpid: 0x%08lx",
+		      __entry->vpid)
+);
+
+#endif /* _TRACE_LOONGARCH64_KVM_H */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 06/30] LoongArch: KVM: Implement vcpu create and destroy interface
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (4 preceding siblings ...)
  2023-08-31  8:29 ` [PATCH v20 05/30] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 07/30] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
                   ` (24 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement vcpu create and destroy interfaces, saving some info
into the vcpu arch structure such as the vcpu exception entry and
the enter-guest function pointer. Initialize the vcpu timer and set
the address translation mode when the vcpu is created.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 87 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 arch/loongarch/kvm/vcpu.c

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
new file mode 100644
index 0000000000..545b18cd1c
--- /dev/null
+++ b/arch/loongarch/kvm/vcpu.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include <linux/entry-kvm.h>
+#include <asm/fpu.h>
+#include <asm/loongarch.h>
+#include <asm/setup.h>
+#include <asm/time.h>
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
+int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
+{
+	return 0;
+}
+
+int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+{
+	unsigned long timer_hz;
+	struct loongarch_csrs *csr;
+
+	vcpu->arch.vpid = 0;
+
+	hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	vcpu->arch.swtimer.function = kvm_swtimer_wakeup;
+
+	vcpu->arch.guest_eentry = (unsigned long)kvm_loongarch_ops->guest_eentry;
+	vcpu->arch.handle_exit = _kvm_handle_exit;
+	vcpu->arch.csr = kzalloc(sizeof(struct loongarch_csrs), GFP_KERNEL);
+	if (!vcpu->arch.csr)
+		return -ENOMEM;
+
+	/*
+	 * All kvm exceptions share one exception entry, and a host <-> guest
+	 * switch also switches the ECFG.VS field, so keep the host ECFG.VS info here
+	 */
+	vcpu->arch.host_ecfg = (read_csr_ecfg() & CSR_ECFG_VS);
+
+	/* Init */
+	vcpu->arch.last_sched_cpu = -1;
+
+	/*
+	 * Initialize guest register state to valid architectural reset state.
+	 */
+	timer_hz = calc_const_freq();
+	kvm_init_timer(vcpu, timer_hz);
+
+	/* Set the guest to direct address translation (DA) mode initially */
+	csr = vcpu->arch.csr;
+	kvm_write_sw_gcsr(csr, LOONGARCH_CSR_CRMD, CSR_CRMD_DA);
+
+	/* Set cpuid */
+	kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TMID, vcpu->vcpu_id);
+
+	/* start with no pending virtual guest interrupts */
+	csr->csrs[LOONGARCH_CSR_GINTC] = 0;
+
+	return 0;
+}
+
+void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+{
+}
+
+void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+	int cpu;
+	struct kvm_context *context;
+
+	hrtimer_cancel(&vcpu->arch.swtimer);
+	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	kfree(vcpu->arch.csr);
+
+	/*
+	 * If the vCPU is freed and reused as another vCPU, we don't want the
+	 * matching pointer wrongly hanging around in last_vcpu.
+	 */
+	for_each_possible_cpu(cpu) {
+		context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+		if (context->last_vcpu == vcpu)
+			context->last_vcpu = NULL;
+	}
+}
-- 
2.27.0



* [PATCH v20 07/30] LoongArch: KVM: Implement vcpu run interface
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (5 preceding siblings ...)
  2023-08-31  8:29 ` [PATCH v20 06/30] LoongArch: KVM: Implement vcpu create and destroy interface Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 08/30] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
                   ` (23 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the vcpu run interface: handle mmio and iocsr read faults,
deliver pending interrupts, and lose the fpu before the vcpu enters
the guest.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 130 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 130 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 545b18cd1c..83f2988ea6 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -18,6 +18,91 @@ int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 	return 0;
 }
 
+/*
+ * _kvm_check_requests - check and handle pending vCPU requests
+ *
+ * Return: RESUME_GUEST if we should enter the guest
+ *         RESUME_HOST  if we should exit to userspace
+ */
+static int _kvm_check_requests(struct kvm_vcpu *vcpu)
+{
+	if (!kvm_request_pending(vcpu))
+		return RESUME_GUEST;
+
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
+		/* Drop vpid for this vCPU */
+		vcpu->arch.vpid = 0;
+
+	if (kvm_dirty_ring_check_request(vcpu))
+		return RESUME_HOST;
+
+	return RESUME_GUEST;
+}
+
+/*
+ * Check and handle pending signals and vCPU requests etc.
+ * Runs with irqs enabled and preemption enabled
+ *
+ * Return: RESUME_GUEST if we should enter the guest
+ *         RESUME_HOST  if we should exit to userspace
+ *         < 0 if we should exit to userspace, where the return value
+ *         indicates an error
+ */
+static int kvm_enter_guest_check(struct kvm_vcpu *vcpu)
+{
+	int ret;
+
+	/*
+	 * Check conditions before entering the guest
+	 */
+	ret = xfer_to_guest_mode_handle_work(vcpu);
+	if (ret < 0)
+		return ret;
+
+	ret = _kvm_check_requests(vcpu);
+	return ret;
+}
+
+/*
+ * Called with irqs enabled
+ *
+ * Return: RESUME_GUEST if we should enter the guest, with irqs disabled
+ *         Other values if we should exit to userspace
+ */
+static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
+{
+	int ret;
+
+	do {
+		ret = kvm_enter_guest_check(vcpu);
+		if (ret != RESUME_GUEST)
+			break;
+
+		/*
+		 * Handle the vcpu timer, deliver interrupts, and check
+		 * the vmid before the vcpu enters the guest
+		 */
+		local_irq_disable();
+		kvm_acquire_timer(vcpu);
+		_kvm_deliver_intr(vcpu);
+		/* make sure the vcpu mode has been written */
+		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
+		_kvm_check_vmid(vcpu);
+		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+		/* Clear KVM_LARCH_CSR since csrs will change when entering the guest */
+		vcpu->arch.aux_inuse &= ~KVM_LARCH_CSR;
+
+		if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) {
+			/* make sure the vcpu mode has been written */
+			smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE);
+			local_irq_enable();
+			ret = -EAGAIN;
+		}
+	} while (ret != RESUME_GUEST);
+
+	return ret;
+}
+
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	unsigned long timer_hz;
@@ -85,3 +170,48 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 			context->last_vcpu = NULL;
 	}
 }
+
+int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+{
+	int r = -EINTR;
+	struct kvm_run *run = vcpu->run;
+
+	if (vcpu->mmio_needed) {
+		if (!vcpu->mmio_is_write)
+			_kvm_complete_mmio_read(vcpu, run);
+		vcpu->mmio_needed = 0;
+	}
+
+	if (run->exit_reason == KVM_EXIT_LOONGARCH_IOCSR) {
+		if (!run->iocsr_io.is_write)
+			_kvm_complete_iocsr_read(vcpu, run);
+	}
+
+	/* clear exit_reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+	if (run->immediate_exit)
+		return r;
+
+	lose_fpu(1);
+	vcpu_load(vcpu);
+	kvm_sigset_activate(vcpu);
+	r = kvm_pre_enter_guest(vcpu);
+	if (r != RESUME_GUEST)
+		goto out;
+
+	guest_timing_enter_irqoff();
+	guest_state_enter_irqoff();
+	trace_kvm_enter(vcpu);
+	r = kvm_loongarch_ops->enter_guest(run, vcpu);
+
+	trace_kvm_out(vcpu);
+	/*
+	 * Guest exit is already recorded at _kvm_handle_exit();
+	 * its return value must not be RESUME_GUEST
+	 */
+	local_irq_enable();
+out:
+	kvm_sigset_deactivate(vcpu);
+	vcpu_put(vcpu);
+	return r;
+}
-- 
2.27.0



* [PATCH v20 08/30] LoongArch: KVM: Implement vcpu handle exit interface
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (6 preceding siblings ...)
  2023-08-31  8:29 ` [PATCH v20 07/30] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-08-31  8:29 ` [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers Tianrui Zhao
                   ` (22 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the vcpu handle exit interface: get the exit code from the
ESTAT register and use the kvm exception vector to handle it.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 41 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 83f2988ea6..ca4e8d074e 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -103,6 +103,47 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+/*
+ * Return 1 for resume guest and "<= 0" for resume host.
+ */
+static int _kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	unsigned long exst = vcpu->arch.host_estat;
+	u32 intr = exst & 0x1fff; /* ignore NMI */
+	u32 exccode = (exst & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+	int ret = RESUME_GUEST;
+
+	vcpu->mode = OUTSIDE_GUEST_MODE;
+
+	/* Set a default exit reason */
+	run->exit_reason = KVM_EXIT_UNKNOWN;
+
+	guest_timing_exit_irqoff();
+	guest_state_exit_irqoff();
+	local_irq_enable();
+
+	trace_kvm_exit(vcpu, exccode);
+	if (exccode) {
+		ret = _kvm_handle_fault(vcpu, exccode);
+	} else {
+		WARN(!intr, "vm exiting with suspicious irq\n");
+		++vcpu->stat.int_exits;
+	}
+
+	if (ret == RESUME_GUEST)
+		ret = kvm_pre_enter_guest(vcpu);
+
+	if (ret != RESUME_GUEST) {
+		local_irq_disable();
+		return ret;
+	}
+
+	guest_timing_enter_irqoff();
+	guest_state_enter_irqoff();
+	trace_kvm_reenter(vcpu);
+	return RESUME_GUEST;
+}
+
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	unsigned long timer_hz;
-- 
2.27.0



* [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (7 preceding siblings ...)
  2023-08-31  8:29 ` [PATCH v20 08/30] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
@ 2023-08-31  8:29 ` Tianrui Zhao
  2023-09-11  9:03   ` Huacai Chen
  2023-08-31  8:30 ` [PATCH v20 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Tianrui Zhao
                   ` (21 subsequent siblings)
  30 siblings, 1 reply; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:29 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the LoongArch vcpu get registers and set registers
operations; they are called when user space uses the ioctl interface
to get or set regs.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/csr_ops.S |  67 ++++++++++++
 arch/loongarch/kvm/vcpu.c    | 206 +++++++++++++++++++++++++++++++++++
 2 files changed, 273 insertions(+)
 create mode 100644 arch/loongarch/kvm/csr_ops.S

diff --git a/arch/loongarch/kvm/csr_ops.S b/arch/loongarch/kvm/csr_ops.S
new file mode 100644
index 0000000000..53e44b23a5
--- /dev/null
+++ b/arch/loongarch/kvm/csr_ops.S
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <asm/regdef.h>
+#include <linux/linkage.h>
+	.text
+	.cfi_sections   .debug_frame
+/*
+ * We have split the hw gcsrs into three parts, so we can
+ * calculate the code offset from the gcsr id and jump here to
+ * run the gcsrwr instruction.
+ */
+SYM_FUNC_START(set_hw_gcsr)
+	addi.d      t0,   a0,   0
+	addi.w      t1,   zero, 96
+	bltu        t1,   t0,   1f
+	la.pcrel    t0,   10f
+	alsl.d      t0,   a0,   t0, 3
+	jr          t0
+1:
+	addi.w      t1,   a0,   -128
+	addi.w      t2,   zero, 15
+	bltu        t2,   t1,   2f
+	la.pcrel    t0,   11f
+	alsl.d      t0,   t1,   t0, 3
+	jr          t0
+2:
+	addi.w      t1,   a0,   -384
+	addi.w      t2,   zero, 3
+	bltu        t2,   t1,   3f
+	la.pcrel    t0,   12f
+	alsl.d      t0,   t1,   t0, 3
+	jr          t0
+3:
+	addi.w      a0,   zero, -1
+	jr          ra
+
+/* range from 0x0(KVM_CSR_CRMD) to 0x60(KVM_CSR_LLBCTL) */
+10:
+	csrnum = 0
+	.rept 0x61
+	gcsrwr a1, csrnum
+	jr ra
+	csrnum = csrnum + 1
+	.endr
+
+/* range from 0x80(KVM_CSR_IMPCTL1) to 0x8f(KVM_CSR_TLBRPRMD) */
+11:
+	csrnum = 0x80
+	.rept 0x10
+	gcsrwr a1, csrnum
+	jr ra
+	csrnum = csrnum + 1
+	.endr
+
+/* range from 0x180(KVM_CSR_DMWIN0) to 0x183(KVM_CSR_DMWIN3) */
+12:
+	csrnum = 0x180
+	.rept 0x4
+	gcsrwr a1, csrnum
+	jr ra
+	csrnum = csrnum + 1
+	.endr
+
+SYM_FUNC_END(set_hw_gcsr)
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index ca4e8d074e..f17422a942 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,212 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v)
+{
+	unsigned long val;
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (get_gcsr_flag(id) & INVALID_GCSR)
+		return -EINVAL;
+
+	if (id == LOONGARCH_CSR_ESTAT) {
+		/* interrupt status IP0 -- IP7 from GINTC */
+		val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 0xff;
+		*v = kvm_read_sw_gcsr(csr, id) | (val << 2);
+		return 0;
+	}
+
+	/*
+	 * get software csr state if csrid is valid, since software
+	 * csr state is consistent with hardware
+	 */
+	*v = kvm_read_sw_gcsr(csr, id);
+
+	return 0;
+}
+
+int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	int ret = 0, gintc;
+
+	if (get_gcsr_flag(id) & INVALID_GCSR)
+		return -EINVAL;
+
+	if (id == LOONGARCH_CSR_ESTAT) {
+		/* estat IP0~IP7 inject through guestexcept */
+		gintc = (val >> 2) & 0xff;
+		write_csr_gintc(gintc);
+		kvm_set_sw_gcsr(csr, LOONGARCH_CSR_GINTC, gintc);
+
+		gintc = val & ~(0xffUL << 2);
+		write_gcsr_estat(gintc);
+		kvm_set_sw_gcsr(csr, LOONGARCH_CSR_ESTAT, gintc);
+
+		return ret;
+	}
+
+	if (get_gcsr_flag(id) & HW_GCSR) {
+		set_hw_gcsr(id, val);
+		/* write sw gcsr to keep consistent with hardware */
+		kvm_write_sw_gcsr(csr, id, val);
+	} else
+		kvm_write_sw_gcsr(csr, id, val);
+
+	return ret;
+}
+
+static int _kvm_get_one_reg(struct kvm_vcpu *vcpu,
+		const struct kvm_one_reg *reg, s64 *v)
+{
+	int reg_idx, ret = 0;
+
+	if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		ret = _kvm_getcsr(vcpu, reg_idx, v);
+	} else if (reg->id == KVM_REG_LOONGARCH_COUNTER)
+		*v = drdtime() + vcpu->kvm->arch.time_offset;
+	else
+		ret = -EINVAL;
+
+	return ret;
+}
+
+static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	int ret = -EINVAL;
+	s64 v;
+
+	if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
+		return ret;
+
+	if (_kvm_get_one_reg(vcpu, reg, &v))
+		return ret;
+
+	return put_user(v, (u64 __user *)(long)reg->addr);
+}
+
+static int _kvm_set_one_reg(struct kvm_vcpu *vcpu,
+			const struct kvm_one_reg *reg,
+			s64 v)
+{
+	int ret = 0;
+	unsigned long flags;
+	u64 val;
+	int reg_idx;
+
+	val = v;
+	if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
+		reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
+		ret = _kvm_setcsr(vcpu, reg_idx, val);
+	} else if (reg->id == KVM_REG_LOONGARCH_COUNTER) {
+		local_irq_save(flags);
+		/*
+		 * The guest time offset is relative to the board, not the vcpu;
+		 * on an SMP system only set it once, via vcpu 0
+		 */
+		if (vcpu->vcpu_id == 0)
+			vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
+		write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
+		local_irq_restore(flags);
+	} else if (reg->id == KVM_REG_LOONGARCH_VCPU_RESET) {
+		kvm_reset_timer(vcpu);
+		memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
+		memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
+	} else
+		ret = -EINVAL;
+
+	return ret;
+}
+
+static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	s64 v;
+	int ret = -EINVAL;
+
+	if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
+		return ret;
+
+	if (get_user(v, (u64 __user *)(long)reg->addr))
+		return ret;
+
+	return _kvm_set_one_reg(vcpu, reg, v);
+}
+
+int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
+				  struct kvm_sregs *sregs)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
+				  struct kvm_sregs *sregs)
+{
+	return -ENOIOCTLCMD;
+}
+
+int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	int i;
+
+	vcpu_load(vcpu);
+
+	for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
+		regs->gpr[i] = vcpu->arch.gprs[i];
+
+	regs->pc = vcpu->arch.pc;
+
+	vcpu_put(vcpu);
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+	int i;
+
+	vcpu_load(vcpu);
+
+	for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
+		vcpu->arch.gprs[i] = regs->gpr[i];
+	vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
+	vcpu->arch.pc = regs->pc;
+
+	vcpu_put(vcpu);
+	return 0;
+}
+
+long kvm_arch_vcpu_ioctl(struct file *filp,
+			 unsigned int ioctl, unsigned long arg)
+{
+	struct kvm_vcpu *vcpu = filp->private_data;
+	void __user *argp = (void __user *)arg;
+	long r;
+
+	vcpu_load(vcpu);
+
+	switch (ioctl) {
+	case KVM_SET_ONE_REG:
+	case KVM_GET_ONE_REG: {
+		struct kvm_one_reg reg;
+
+		r = -EFAULT;
+		if (copy_from_user(&reg, argp, sizeof(reg)))
+			break;
+		if (ioctl == KVM_SET_ONE_REG)
+			r = _kvm_set_reg(vcpu, &reg);
+		else
+			r = _kvm_get_reg(vcpu, &reg);
+		break;
+	}
+	default:
+		r = -ENOIOCTLCMD;
+		break;
+	}
+
+	vcpu_put(vcpu);
+	return r;
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;
-- 
2.27.0



* [PATCH v20 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (8 preceding siblings ...)
  2023-08-31  8:29 ` [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 11/30] LoongArch: KVM: Implement fpu related operations for vcpu Tianrui Zhao
                   ` (20 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement LoongArch vcpu KVM_ENABLE_CAP ioctl interface.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index f17422a942..be0c17a433 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -187,6 +187,16 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	/*
+	 * FPU is enabled by default; no other capabilities are supported
+	 * yet, and caps such as LSX may be supported later.
+	 */
+	return -EINVAL;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
@@ -210,6 +220,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 			r = _kvm_get_reg(vcpu, &reg);
 		break;
 	}
+	case KVM_ENABLE_CAP: {
+		struct kvm_enable_cap cap;
+
+		r = -EFAULT;
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			break;
+		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+		break;
+	}
 	default:
 		r = -ENOIOCTLCMD;
 		break;
-- 
2.27.0



* [PATCH v20 11/30] LoongArch: KVM: Implement fpu related operations for vcpu
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (9 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 12/30] LoongArch: KVM: Implement vcpu interrupt operations Tianrui Zhao
                   ` (19 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the LoongArch fpu related interfaces for the vcpu, such as
get fpu, set fpu, own fpu and lose fpu.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 60 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index be0c17a433..2094afcfcd 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -238,6 +238,66 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	return r;
 }
 
+int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+	int i = 0;
+
+	/* no need vcpu_load and vcpu_put */
+	fpu->fcsr = vcpu->arch.fpu.fcsr;
+	fpu->fcc = vcpu->arch.fpu.fcc;
+	for (i = 0; i < NUM_FPU_REGS; i++)
+		memcpy(&fpu->fpr[i], &vcpu->arch.fpu.fpr[i], FPU_REG_WIDTH / 64);
+
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+	int i = 0;
+
+	/* no need vcpu_load and vcpu_put */
+	vcpu->arch.fpu.fcsr = fpu->fcsr;
+	vcpu->arch.fpu.fcc = fpu->fcc;
+	for (i = 0; i < NUM_FPU_REGS; i++)
+		memcpy(&vcpu->arch.fpu.fpr[i], &fpu->fpr[i], FPU_REG_WIDTH / 64);
+
+	return 0;
+}
+
+/* Enable FPU for guest and restore context */
+void kvm_own_fpu(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+
+	/*
+	 * Enable FPU for guest
+	 */
+	set_csr_euen(CSR_EUEN_FPEN);
+
+	kvm_restore_fpu(&vcpu->arch.fpu);
+	vcpu->arch.aux_inuse |= KVM_LARCH_FPU;
+	trace_kvm_aux(vcpu, KVM_TRACE_AUX_RESTORE, KVM_TRACE_AUX_FPU);
+
+	preempt_enable();
+}
+
+/* Save and disable FPU */
+void kvm_lose_fpu(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+
+	if (vcpu->arch.aux_inuse & KVM_LARCH_FPU) {
+		kvm_save_fpu(&vcpu->arch.fpu);
+		vcpu->arch.aux_inuse &= ~KVM_LARCH_FPU;
+		trace_kvm_aux(vcpu, KVM_TRACE_AUX_SAVE, KVM_TRACE_AUX_FPU);
+
+		/* Disable FPU */
+		clear_csr_euen(CSR_EUEN_FPEN);
+	}
+
+	preempt_enable();
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;
-- 
2.27.0



* [PATCH v20 12/30] LoongArch: KVM: Implement vcpu interrupt operations
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (10 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 11/30] LoongArch: KVM: Implement fpu related operations for vcpu Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 13/30] LoongArch: KVM: Implement misc vcpu related interfaces Tianrui Zhao
                   ` (18 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement vcpu interrupt operations such as vcpu set irq and
vcpu clear irq, using set_gcsr_estat() to inject irqs which are
parsed from the irq bitmap.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/interrupt.c | 113 +++++++++++++++++++++++++++++++++
 arch/loongarch/kvm/vcpu.c      |  37 +++++++++++
 2 files changed, 150 insertions(+)
 create mode 100644 arch/loongarch/kvm/interrupt.c

diff --git a/arch/loongarch/kvm/interrupt.c b/arch/loongarch/kvm/interrupt.c
new file mode 100644
index 0000000000..14e19653b2
--- /dev/null
+++ b/arch/loongarch/kvm/interrupt.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <asm/kvm_vcpu.h>
+#include <asm/kvm_csr.h>
+
+static unsigned int int_to_coreint[EXCCODE_INT_NUM] = {
+	[INT_TI]	= CPU_TIMER,
+	[INT_IPI]	= CPU_IPI,
+	[INT_SWI0]	= CPU_SIP0,
+	[INT_SWI1]	= CPU_SIP1,
+	[INT_HWI0]	= CPU_IP0,
+	[INT_HWI1]	= CPU_IP1,
+	[INT_HWI2]	= CPU_IP2,
+	[INT_HWI3]	= CPU_IP3,
+	[INT_HWI4]	= CPU_IP4,
+	[INT_HWI5]	= CPU_IP5,
+	[INT_HWI6]	= CPU_IP6,
+	[INT_HWI7]	= CPU_IP7,
+};
+
+static int _kvm_irq_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
+{
+	unsigned int irq = 0;
+
+	clear_bit(priority, &vcpu->arch.irq_pending);
+	if (priority < EXCCODE_INT_NUM)
+		irq = int_to_coreint[priority];
+
+	switch (priority) {
+	case INT_TI:
+	case INT_IPI:
+	case INT_SWI0:
+	case INT_SWI1:
+		set_gcsr_estat(irq);
+		break;
+
+	case INT_HWI0 ... INT_HWI7:
+		set_csr_gintc(irq);
+		break;
+
+	default:
+		break;
+	}
+
+	return 1;
+}
+
+static int _kvm_irq_clear(struct kvm_vcpu *vcpu, unsigned int priority)
+{
+	unsigned int irq = 0;
+
+	clear_bit(priority, &vcpu->arch.irq_clear);
+	if (priority < EXCCODE_INT_NUM)
+		irq = int_to_coreint[priority];
+
+	switch (priority) {
+	case INT_TI:
+	case INT_IPI:
+	case INT_SWI0:
+	case INT_SWI1:
+		clear_gcsr_estat(irq);
+		break;
+
+	case INT_HWI0 ... INT_HWI7:
+		clear_csr_gintc(irq);
+		break;
+
+	default:
+		break;
+	}
+
+	return 1;
+}
+
+void _kvm_deliver_intr(struct kvm_vcpu *vcpu)
+{
+	unsigned long *pending = &vcpu->arch.irq_pending;
+	unsigned long *pending_clr = &vcpu->arch.irq_clear;
+	unsigned int priority;
+
+	if (!(*pending) && !(*pending_clr))
+		return;
+
+	if (*pending_clr) {
+		priority = __ffs(*pending_clr);
+		while (priority <= INT_IPI) {
+			_kvm_irq_clear(vcpu, priority);
+			priority = find_next_bit(pending_clr,
+					BITS_PER_BYTE * sizeof(*pending_clr),
+					priority + 1);
+		}
+	}
+
+	if (*pending) {
+		priority = __ffs(*pending);
+		while (priority <= INT_IPI) {
+			_kvm_irq_deliver(vcpu, priority);
+			priority = find_next_bit(pending,
+					BITS_PER_BYTE * sizeof(*pending),
+					priority + 1);
+		}
+	}
+}
+
+int _kvm_pending_timer(struct kvm_vcpu *vcpu)
+{
+	return test_bit(INT_TI, &vcpu->arch.irq_pending);
+}
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 2094afcfcd..9e36482c53 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -298,6 +298,43 @@ void kvm_lose_fpu(struct kvm_vcpu *vcpu)
 	preempt_enable();
 }
 
+int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq)
+{
+	int intr = (int)irq->irq;
+
+	if (intr > 0)
+		_kvm_queue_irq(vcpu, intr);
+	else if (intr < 0)
+		_kvm_dequeue_irq(vcpu, -intr);
+	else {
+		kvm_err("%s: invalid interrupt ioctl %d\n", __func__, irq->irq);
+		return -EINVAL;
+	}
+
+	kvm_vcpu_kick(vcpu);
+	return 0;
+}
+
+long kvm_arch_vcpu_async_ioctl(struct file *filp,
+			       unsigned int ioctl, unsigned long arg)
+{
+	struct kvm_vcpu *vcpu = filp->private_data;
+	void __user *argp = (void __user *)arg;
+
+	if (ioctl == KVM_INTERRUPT) {
+		struct kvm_interrupt irq;
+
+		if (copy_from_user(&irq, argp, sizeof(irq)))
+			return -EFAULT;
+
+		kvm_debug("[%d] %s: irq: %d\n", vcpu->vcpu_id, __func__, irq.irq);
+
+		return kvm_vcpu_ioctl_interrupt(vcpu, &irq);
+	}
+
+	return -ENOIOCTLCMD;
+}
+
 int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
 {
 	return 0;
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread
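The `_kvm_deliver_intr()` function in the patch above walks the pending mask from the lowest set bit upward using `__ffs()` and `find_next_bit()`. The same bit walk can be sketched in user-space C (names and the `__builtin_ctzll()` stand-in for `__ffs()` are illustrative, not kernel API):

```c
#include <stdint.h>

/* Visit each set bit of a pending mask from lowest to highest, like the
 * __ffs()/find_next_bit() loop in _kvm_deliver_intr(). Illustrative only:
 * the kernel version also stops once the priority exceeds INT_IPI. */
static int walk_pending(uint64_t pending, int *out, int max)
{
	int n = 0;

	while (pending && n < max) {
		out[n++] = __builtin_ctzll(pending); /* index of lowest set bit */
		pending &= pending - 1;              /* clear that bit */
	}
	return n;
}
```

Each set bit is delivered exactly once, in priority order, which is why the kernel loop re-reads the mask with `find_next_bit()` rather than restarting from bit 0.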

* [PATCH v20 13/30] LoongArch: KVM: Implement misc vcpu related interfaces
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (11 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 12/30] LoongArch: KVM: Implement vcpu interrupt operations Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 14/30] LoongArch: KVM: Implement vcpu load and vcpu put operations Tianrui Zhao
                   ` (17 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement some misc vcpu related interfaces, such as vcpu runnable,
vcpu should kick, vcpu dump regs, etc.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 105 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 105 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 9e36482c53..f170dbf539 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,111 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
+{
+	return !!(vcpu->arch.irq_pending) &&
+		vcpu->arch.mp_state.mp_state == KVM_MP_STATE_RUNNABLE;
+}
+
+int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
+{
+	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
+}
+
+bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
+{
+	return false;
+}
+
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+{
+	return VM_FAULT_SIGBUS;
+}
+
+int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
+				  struct kvm_translation *tr)
+{
+	return -EINVAL;
+}
+
+int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
+{
+	return _kvm_pending_timer(vcpu) ||
+		kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT) &
+			(1 << INT_TI);
+}
+
+int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu)
+{
+	int i;
+
+	kvm_debug("vCPU Register Dump:\n");
+	kvm_debug("\tpc = 0x%08lx\n", vcpu->arch.pc);
+	kvm_debug("\texceptions: %08lx\n", vcpu->arch.irq_pending);
+
+	for (i = 0; i < 32; i += 4) {
+		kvm_debug("\tgpr%02d: %08lx %08lx %08lx %08lx\n", i,
+		       vcpu->arch.gprs[i],
+		       vcpu->arch.gprs[i + 1],
+		       vcpu->arch.gprs[i + 2], vcpu->arch.gprs[i + 3]);
+	}
+
+	kvm_debug("\tCRMD: 0x%08lx, ESTAT: 0x%08lx\n",
+		  kvm_read_hw_gcsr(LOONGARCH_CSR_CRMD),
+		  kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT));
+
+	kvm_debug("\tERA: 0x%08lx\n", kvm_read_hw_gcsr(LOONGARCH_CSR_ERA));
+
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
+				struct kvm_mp_state *mp_state)
+{
+	*mp_state = vcpu->arch.mp_state;
+
+	return 0;
+}
+
+int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
+				struct kvm_mp_state *mp_state)
+{
+	int ret = 0;
+
+	switch (mp_state->mp_state) {
+	case KVM_MP_STATE_RUNNABLE:
+		vcpu->arch.mp_state = *mp_state;
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
+					struct kvm_guest_debug *dbg)
+{
+	return -EINVAL;
+}
+
+/**
+ * kvm_migrate_count() - Migrate timer.
+ * @vcpu:       Virtual CPU.
+ *
+ * Migrate hrtimer to the current CPU by cancelling and restarting it
+ * if it was running prior to being cancelled.
+ *
+ * Must be called when the vCPU is migrated to a different CPU to ensure that
+ * timer expiry during guest execution interrupts the guest and causes the
+ * interrupt to be delivered in a timely manner.
+ */
+static void kvm_migrate_count(struct kvm_vcpu *vcpu)
+{
+	if (hrtimer_cancel(&vcpu->arch.swtimer))
+		hrtimer_restart(&vcpu->arch.swtimer);
+}
+
 int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v)
 {
 	unsigned long val;
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread
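`kvm_cpu_has_pending_timer()` above treats the timer interrupt as pending when either the software `irq_pending` bit or the guest ESTAT CSR bit for `INT_TI` is set. A standalone mirror of that check (the real `INT_TI` value comes from the asm headers; 11 here is only an illustrative placeholder):

```c
#include <stdint.h>

/* Sketch of the kvm_cpu_has_pending_timer() predicate: pending if either
 * the software bit or the hardware-latched ESTAT bit is set. The bit
 * position is a made-up placeholder, not the real INT_TI definition. */
#define SKETCH_INT_TI 11

static int timer_irq_pending(uint64_t irq_pending, uint64_t gcsr_estat)
{
	return ((irq_pending >> SKETCH_INT_TI) & 1) ||
	       ((gcsr_estat >> SKETCH_INT_TI) & 1);
}
```

Checking both sources matters because the interrupt may already be latched in hardware ESTAT even when the software pending bit has been consumed.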

* [PATCH v20 14/30] LoongArch: KVM: Implement vcpu load and vcpu put operations
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (12 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 13/30] LoongArch: KVM: Implement misc vcpu related interfaces Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 15/30] LoongArch: KVM: Implement vcpu status description Tianrui Zhao
                   ` (16 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement LoongArch vcpu load and vcpu put operations, including
loading csr values into hardware and saving csr values into the vcpu
structure.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 196 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 196 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index f170dbf539..79e4e22773 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -639,6 +639,202 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	}
 }
 
+static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_context *context;
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	bool migrated, all;
+
+	/*
+	 * Have we migrated to a different CPU?
+	 * If so, any old guest TLB state may be stale.
+	 */
+	migrated = (vcpu->arch.last_sched_cpu != cpu);
+
+	/*
+	 * Was this the last vCPU to run on this CPU?
+	 * If not, any old guest state from this vCPU will have been clobbered.
+	 */
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	all = migrated || (context->last_vcpu != vcpu);
+	context->last_vcpu = vcpu;
+
+	/*
+	 * Restore timer state regardless
+	 */
+	kvm_restore_timer(vcpu);
+
+	/* Control guest page CCA attribute */
+	change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT);
+	/* Don't bother restoring registers multiple times unless necessary */
+	if (!all)
+		return 0;
+
+	write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
+	/*
+	 * Restore guest CSR registers
+	 */
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EUEN);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_MISC);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ECFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ERA);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADV);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EENTRY);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ASID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDL);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDH);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_RVACFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CPUID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS2);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS3);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS4);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS5);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS6);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS7);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TMID);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CNTC);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL);
+
+	/* restore Root.Guestexcept from unused Guest guestexcept register */
+	write_csr_gintc(csr->csrs[LOONGARCH_CSR_GINTC]);
+
+	/*
+	 * We should clear linked load bit to break interrupted atomics. This
+	 * prevents a SC on the next vCPU from succeeding by matching a LL on
+	 * the previous vCPU.
+	 */
+	if (vcpu->kvm->created_vcpus > 1)
+		set_gcsr_llbctl(CSR_LLBCTL_WCLLB);
+
+	return 0;
+}
+
+void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	if (vcpu->arch.last_sched_cpu != cpu) {
+		kvm_debug("[%d->%d]KVM vCPU[%d] switch\n",
+				vcpu->arch.last_sched_cpu, cpu, vcpu->vcpu_id);
+		/*
+		 * Migrate the timer interrupt to the current CPU so that it
+		 * always interrupts the guest and synchronously triggers a
+		 * guest timer interrupt.
+		 */
+		kvm_migrate_count(vcpu);
+	}
+
+	/* restore guest state to registers */
+	_kvm_vcpu_load(vcpu, cpu);
+	local_irq_restore(flags);
+}
+
+static int _kvm_vcpu_put(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	kvm_lose_fpu(vcpu);
+	/*
+	 * Update the csr state from hardware if the software csr state is
+	 * stale. Most csr registers are kept unchanged during process
+	 * context switch, except for registers like the remaining timer
+	 * tick value and the injected interrupt state.
+	 */
+	if (!(vcpu->arch.aux_inuse & KVM_LARCH_CSR)) {
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CRMD);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRMD);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EUEN);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_MISC);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ECFG);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ERA);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADV);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADI);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EENTRY);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ASID);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDL);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDH);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_RVACFG);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CPUID);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG1);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG2);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG3);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS0);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS1);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS2);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS3);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS4);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS5);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS6);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS7);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TMID);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CNTC);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3);
+		vcpu->arch.aux_inuse |= KVM_LARCH_CSR;
+	}
+	kvm_save_timer(vcpu);
+	/* save Root.Guestexcept in unused Guest guestexcept register */
+	csr->csrs[LOONGARCH_CSR_GINTC] = read_csr_gintc();
+	return 0;
+}
+
+void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+{
+	unsigned long flags;
+	int cpu;
+
+	local_irq_save(flags);
+	cpu = smp_processor_id();
+	vcpu->arch.last_sched_cpu = cpu;
+
+	/* save guest state in registers */
+	_kvm_vcpu_put(vcpu, cpu);
+	local_irq_restore(flags);
+}
+
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 {
 	int r = -EINTR;
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread
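The fast path in `_kvm_vcpu_load()` above skips the full CSR restore unless the vCPU migrated to another physical CPU or a different vCPU ran on this CPU in between. That decision can be isolated as a small sketch (names are illustrative):

```c
#include <stddef.h>

/* Sketch of the decision logic at the top of _kvm_vcpu_load(): a full CSR
 * restore is needed after a CPU migration, or when another vCPU ran on
 * this physical CPU since we last did. */
static int need_full_restore(int last_sched_cpu, int cpu,
			     const void *last_vcpu, const void *vcpu)
{
	int migrated = (last_sched_cpu != cpu);

	return migrated || (last_vcpu != vcpu);
}
```

Only the timer restore and the guest-page CCA attribute are done unconditionally; everything after the `if (!all) return 0;` check is the slow path this predicate guards.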

* [PATCH v20 15/30] LoongArch: KVM: Implement vcpu status description
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (13 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 14/30] LoongArch: KVM: Implement vcpu load and vcpu put operations Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function Tianrui Zhao
                   ` (15 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the LoongArch vcpu status description, including counters for
idle exits, signal exits, cpucfg exits, etc.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vcpu.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 79e4e22773..60b2e584c6 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -13,6 +13,23 @@
 #define CREATE_TRACE_POINTS
 #include "trace.h"
 
+const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
+	KVM_GENERIC_VCPU_STATS(),
+	STATS_DESC_COUNTER(VCPU, idle_exits),
+	STATS_DESC_COUNTER(VCPU, signal_exits),
+	STATS_DESC_COUNTER(VCPU, int_exits),
+	STATS_DESC_COUNTER(VCPU, cpucfg_exits),
+};
+
+const struct kvm_stats_header kvm_vcpu_stats_header = {
+	.name_size = KVM_STATS_NAME_SIZE,
+	.num_desc = ARRAY_SIZE(kvm_vcpu_stats_desc),
+	.id_offset = sizeof(struct kvm_stats_header),
+	.desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE,
+	.data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE +
+		       sizeof(kvm_vcpu_stats_desc),
+};
+
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 {
 	return !!(vcpu->arch.irq_pending) &&
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread
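The `kvm_vcpu_stats_header` initializer above lays out the binary stats file as: fixed header, then the id string, then the descriptor table, then the data, each region starting where the previous one ends. A sketch of that offset arithmetic (the sizes in the test are made up for illustration, not the real struct sizes):

```c
#include <stddef.h>

/* Mirror of the offset arithmetic in kvm_vcpu_stats_header: id string
 * right after the fixed header, descriptors after the id, data after
 * the descriptors. */
struct stats_layout {
	size_t id_off, desc_off, data_off;
};

static struct stats_layout stats_offsets(size_t hdr_size, size_t name_size,
					 size_t desc_total)
{
	struct stats_layout l = {
		hdr_size,
		hdr_size + name_size,
		hdr_size + name_size + desc_total,
	};
	return l;
}
```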

* [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (14 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 15/30] LoongArch: KVM: Implement vcpu status description Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-09-11 10:00   ` Huacai Chen
  2023-08-31  8:30 ` [PATCH v20 17/30] LoongArch: KVM: Implement virtual machine tlb operations Tianrui Zhao
                   ` (14 subsequent siblings)
  30 siblings, 1 reply; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement kvm vmid check and update functions; the vmid should be checked
before a vcpu enters the guest.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/vmid.c | 66 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 arch/loongarch/kvm/vmid.c

diff --git a/arch/loongarch/kvm/vmid.c b/arch/loongarch/kvm/vmid.c
new file mode 100644
index 0000000000..fc25ddc3b7
--- /dev/null
+++ b/arch/loongarch/kvm/vmid.c
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include "trace.h"
+
+static void _kvm_update_vpid(struct kvm_vcpu *vcpu, int cpu)
+{
+	struct kvm_context *context;
+	unsigned long vpid;
+
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	vpid = context->vpid_cache + 1;
+	if (!(vpid & vpid_mask)) {
+		/* finish round of 64 bit loop */
+		if (unlikely(!vpid))
+			vpid = vpid_mask + 1;
+
+		/* vpid 0 reserved for root */
+		++vpid;
+
+		/* start new vpid cycle */
+		kvm_flush_tlb_all();
+	}
+
+	context->vpid_cache = vpid;
+	vcpu->arch.vpid = vpid;
+}
+
+void _kvm_check_vmid(struct kvm_vcpu *vcpu)
+{
+	struct kvm_context *context;
+	bool migrated;
+	unsigned long ver, old, vpid;
+	int cpu;
+
+	cpu = smp_processor_id();
+	/*
+	 * Are we entering guest context on a different CPU to last time?
+	 * If so, the vCPU's guest TLB state on this CPU may be stale.
+	 */
+	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
+	migrated = (vcpu->cpu != cpu);
+
+	/*
+	 * Check if our vpid is of an older version
+	 *
+	 * We also discard the stored vpid if we've executed on
+	 * another CPU, as the guest mappings may have changed without
+	 * hypervisor knowledge.
+	 */
+	ver = vcpu->arch.vpid & ~vpid_mask;
+	old = context->vpid_cache & ~vpid_mask;
+	if (migrated || (ver != old)) {
+		_kvm_update_vpid(vcpu, cpu);
+		trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
+		vcpu->cpu = cpu;
+	}
+
+	/* Restore GSTAT(0x50).vpid */
+	vpid = (vcpu->arch.vpid & vpid_mask)
+		<< CSR_GSTAT_GID_SHIFT;
+	change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread
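`_kvm_update_vpid()` above keeps a version-tagged VPID: the low bits (under `vpid_mask`) hold the hardware VPID and the high bits act as a generation counter, so a wrap of the low bits triggers a full TLB flush. A standalone sketch of that allocator, assuming an 8-bit hardware VPID field for illustration:

```c
#include <stdint.h>

/* Sketch of the version-tagged VPID scheme in _kvm_update_vpid().
 * VPID_MASK is assumed to be 0xff here; the real mask comes from the
 * hardware GSTAT.GIDBITS configuration. *need_flush stands in for the
 * kvm_flush_tlb_all() call that starts a new vpid cycle. */
#define SKETCH_VPID_MASK 0xffULL

static uint64_t next_vpid(uint64_t vpid_cache, int *need_flush)
{
	uint64_t vpid = vpid_cache + 1;

	*need_flush = 0;
	if (!(vpid & SKETCH_VPID_MASK)) {	/* low bits wrapped */
		if (!vpid)			/* full 64-bit wrap */
			vpid = SKETCH_VPID_MASK + 1;
		++vpid;				/* vpid 0 reserved for root */
		*need_flush = 1;		/* start a new vpid cycle */
	}
	return vpid;
}
```

Because the generation lives in the high bits, `_kvm_check_vmid()` can detect a stale VPID cheaply by comparing `vpid & ~vpid_mask` against the per-CPU cache's generation.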

* [PATCH v20 17/30] LoongArch: KVM: Implement virtual machine tlb operations
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (15 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 18/30] LoongArch: KVM: Implement vcpu timer operations Tianrui Zhao
                   ` (13 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement LoongArch virtual machine tlb operations, such as flushing a
tlb entry for a specific gpa and flushing all of the virtual machine's
tlb entries.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/tlb.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 arch/loongarch/kvm/tlb.c

diff --git a/arch/loongarch/kvm/tlb.c b/arch/loongarch/kvm/tlb.c
new file mode 100644
index 0000000000..0bcbd80ac6
--- /dev/null
+++ b/arch/loongarch/kvm/tlb.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/tlb.h>
+#include <asm/kvm_csr.h>
+
+int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	gpa &= (PAGE_MASK << 1);
+	invtlb(INVTLB_GID_ADDR, read_csr_gstat() & CSR_GSTAT_GID, gpa);
+	local_irq_restore(flags);
+	return 0;
+}
+
+/**
+ * kvm_flush_tlb_all() - Flush all root TLB entries for
+ * guests.
+ *
+ * Invalidate all entries including GVA-->GPA and GPA-->HPA mappings.
+ */
+void kvm_flush_tlb_all(void)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	invtlb_all(INVTLB_ALLGID, 0, 0);
+	local_irq_restore(flags);
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread
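The `gpa &= (PAGE_MASK << 1)` step in `kvm_flush_tlb_gpa()` above aligns the address to a pair of pages, since each LoongArch TLB entry maps two adjacent (even/odd) pages. A sketch of the mask, assuming the 16KB default page size (so the mask clears the low 15 bits):

```c
#include <stdint.h>

/* Sketch of the gpa alignment in kvm_flush_tlb_gpa(): each TLB entry
 * covers an even/odd page pair, so the flush address is aligned to
 * twice the page size. 16KB pages assumed for illustration. */
#define SKETCH_PAGE_SHIFT 14
#define SKETCH_PAGE_MASK (~((1ULL << SKETCH_PAGE_SHIFT) - 1))

static uint64_t tlb_pair_align(uint64_t gpa)
{
	return gpa & (SKETCH_PAGE_MASK << 1);
}
```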

* [PATCH v20 18/30] LoongArch: KVM: Implement vcpu timer operations
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (16 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 17/30] LoongArch: KVM: Implement virtual machine tlb operations Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 19/30] LoongArch: KVM: Implement kvm mmu operations Tianrui Zhao
                   ` (12 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement LoongArch vcpu timer operations such as initializing, acquiring,
saving and restoring the kvm timer. When a vcpu exits, a kvm soft timer
is used to emulate the hardware timer. If a timeout happens, the vcpu
timer interrupt is set and will be handled at the vcpu's next entrance.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/timer.c | 200 +++++++++++++++++++++++++++++++++++++
 1 file changed, 200 insertions(+)
 create mode 100644 arch/loongarch/kvm/timer.c

diff --git a/arch/loongarch/kvm/timer.c b/arch/loongarch/kvm/timer.c
new file mode 100644
index 0000000000..df56d6fa81
--- /dev/null
+++ b/arch/loongarch/kvm/timer.c
@@ -0,0 +1,200 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_csr.h>
+#include <asm/kvm_vcpu.h>
+
+/*
+ * ktime_to_tick() - Scale ktime_t to timer tick value.
+ */
+static inline u64 ktime_to_tick(struct kvm_vcpu *vcpu, ktime_t now)
+{
+	u64 delta;
+
+	delta = ktime_to_ns(now);
+	return div_u64(delta * vcpu->arch.timer_mhz, MNSEC_PER_SEC);
+}
+
+static inline u64 tick_to_ns(struct kvm_vcpu *vcpu, u64 tick)
+{
+	return div_u64(tick * MNSEC_PER_SEC, vcpu->arch.timer_mhz);
+}
+
+/*
+ * Push timer forward on timeout.
+ * Handle an hrtimer event by pushing the hrtimer forward one period.
+ */
+static enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu)
+{
+	unsigned long cfg, period;
+
+	/* Add periodic tick to current expire time */
+	cfg = kvm_read_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG);
+	if (cfg & CSR_TCFG_PERIOD) {
+		period = tick_to_ns(vcpu, cfg & CSR_TCFG_VAL);
+		hrtimer_add_expires_ns(&vcpu->arch.swtimer, period);
+		return HRTIMER_RESTART;
+	} else
+		return HRTIMER_NORESTART;
+}
+
+/* low level hrtimer wake routine */
+enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer)
+{
+	struct kvm_vcpu *vcpu;
+
+	vcpu = container_of(timer, struct kvm_vcpu, arch.swtimer);
+	_kvm_queue_irq(vcpu, INT_TI);
+	rcuwait_wake_up(&vcpu->wait);
+	return kvm_count_timeout(vcpu);
+}
+
+/*
+ * Initialise the timer to the specified frequency, and zero it
+ */
+void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long timer_hz)
+{
+	vcpu->arch.timer_mhz = timer_hz >> 20;
+
+	/* Starting at 0 */
+	kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TVAL, 0);
+}
+
+/*
+ * Restore soft timer state from saved context.
+ */
+void kvm_restore_timer(struct kvm_vcpu *vcpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	ktime_t expire, now;
+	unsigned long cfg, delta, period;
+
+	/*
+	 * Set guest stable timer cfg csr
+	 */
+	cfg = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
+	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TCFG);
+	if (!(cfg & CSR_TCFG_EN)) {
+		/* guest timer is disabled, just restore timer registers */
+		kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TVAL);
+		return;
+	}
+
+	/*
+	 * Set the remaining tick value if the timer has not expired
+	 */
+	now = ktime_get();
+	expire = vcpu->arch.expire;
+	if (ktime_before(now, expire))
+		delta = ktime_to_tick(vcpu, ktime_sub(expire, now));
+	else {
+		if (cfg & CSR_TCFG_PERIOD) {
+			period = cfg & CSR_TCFG_VAL;
+			delta = ktime_to_tick(vcpu, ktime_sub(now, expire));
+			delta = period - (delta % period);
+		} else
+			delta = 0;
+		/*
+		 * Inject the timer interrupt here even though the soft timer
+		 * should have injected it asynchronously already; the soft
+		 * timer may be cancelled during async injection in
+		 * kvm_acquire_timer()
+		 */
+		_kvm_queue_irq(vcpu, INT_TI);
+	}
+
+	write_gcsr_timertick(delta);
+}
+
+/*
+ * Restore hard timer state and enable the guest to access timer
+ * registers without trapping.
+ *
+ * It is called with irqs disabled.
+ */
+void kvm_acquire_timer(struct kvm_vcpu *vcpu)
+{
+	unsigned long cfg;
+
+	cfg = read_csr_gcfg();
+	if (!(cfg & CSR_GCFG_TIT))
+		return;
+
+	/* enable guest access to hard timer */
+	write_csr_gcfg(cfg & ~CSR_GCFG_TIT);
+
+	/*
+	 * Freeze the soft-timer and sync the guest stable timer with it. We do
+	 * this with interrupts disabled to avoid latency.
+	 */
+	hrtimer_cancel(&vcpu->arch.swtimer);
+}
+
+/*
+ * Save guest timer state and switch to software emulation of guest
+ * timer. The hard timer must already be in use, so preemption should be
+ * disabled.
+ */
+static void _kvm_save_timer(struct kvm_vcpu *vcpu)
+{
+	unsigned long ticks, delta;
+	ktime_t expire;
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	ticks = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TVAL);
+	delta = tick_to_ns(vcpu, ticks);
+	expire = ktime_add_ns(ktime_get(), delta);
+	vcpu->arch.expire = expire;
+	if (ticks) {
+		/*
+		 * Update hrtimer to use new timeout
+		 * HRTIMER_MODE_PINNED is suggested since the vcpu may run on
+		 * the same physical cpu next time
+		 */
+		hrtimer_cancel(&vcpu->arch.swtimer);
+		hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED);
+	} else
+		/*
+		 * inject timer interrupt so that halt polling can detect
+		 * and exit
+		 */
+		_kvm_queue_irq(vcpu, INT_TI);
+}
+
+/*
+ * Save guest timer state and switch to soft guest timer if hard timer was in
+ * use.
+ */
+void kvm_save_timer(struct kvm_vcpu *vcpu)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	unsigned long cfg;
+
+	preempt_disable();
+	cfg = read_csr_gcfg();
+	if (!(cfg & CSR_GCFG_TIT)) {
+		/* disable guest use of hard timer */
+		write_csr_gcfg(cfg | CSR_GCFG_TIT);
+
+		/* save hard timer state */
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TCFG);
+		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TVAL);
+		if (kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG) & CSR_TCFG_EN)
+			_kvm_save_timer(vcpu);
+	}
+
+	/* save timer-related state to vCPU context */
+	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
+	preempt_enable();
+}
+
+void kvm_reset_timer(struct kvm_vcpu *vcpu)
+{
+	write_gcsr_timercfg(0);
+	kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG, 0);
+	hrtimer_cancel(&vcpu->arch.swtimer);
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread
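The `ktime_to_tick()`/`tick_to_ns()` pair above converts between nanoseconds and timer ticks using `timer_mhz`, which `kvm_init_timer()` sets to `timer_hz >> 20`. The round trip can be sketched assuming `MNSEC_PER_SEC` is `NSEC_PER_SEC >> 20`, matching that scaling (an assumption here, since the macro is defined elsewhere in the series):

```c
#include <stdint.h>

/* Round-trip sketch of ktime_to_tick()/tick_to_ns(): both frequency and
 * nanoseconds-per-second are pre-scaled down by 2^20, so the ratios are
 * the same as ticks = ns * hz / NSEC_PER_SEC, with smaller intermediates. */
#define SKETCH_MNSEC_PER_SEC (1000000000ULL >> 20)	/* = 953 */

static uint64_t ns_to_tick(uint64_t timer_mhz, uint64_t ns)
{
	return ns * timer_mhz / SKETCH_MNSEC_PER_SEC;
}

static uint64_t tick_to_ns(uint64_t timer_mhz, uint64_t tick)
{
	return tick * SKETCH_MNSEC_PER_SEC / timer_mhz;
}
```

Scaling both operands by the same power of two keeps the 64-bit multiplications from overflowing for realistic tick counts, at the cost of a small truncation error in the shifted constants.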

* [PATCH v20 19/30] LoongArch: KVM: Implement kvm mmu operations
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (17 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 18/30] LoongArch: KVM: Implement vcpu timer operations Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-09-07 19:57   ` WANG Xuerui
  2023-08-31  8:30 ` [PATCH v20 20/30] LoongArch: KVM: Implement handle csr excption Tianrui Zhao
                   ` (11 subsequent siblings)
  30 siblings, 1 reply; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the LoongArch kvm mmu, which is used to translate gpa to hpa
when the guest exits because of an address translation exception. This
patch implements allocating the gpa page table, searching for a gpa in
it, and flushing guest gpa mappings from the table.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/mmu.c | 678 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 678 insertions(+)
 create mode 100644 arch/loongarch/kvm/mmu.c

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
new file mode 100644
index 0000000000..4bb20393f4
--- /dev/null
+++ b/arch/loongarch/kvm/mmu.c
@@ -0,0 +1,678 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/highmem.h>
+#include <linux/page-flags.h>
+#include <linux/kvm_host.h>
+#include <linux/uaccess.h>
+#include <asm/mmu_context.h>
+#include <asm/pgalloc.h>
+#include <asm/tlb.h>
+
+/*
+ * KVM_MMU_CACHE_MIN_PAGES is the number of GPA page table translation levels
+ * for which pages need to be cached.
+ */
+#define KVM_MMU_CACHE_MIN_PAGES (CONFIG_PGTABLE_LEVELS - 1)
+
+static inline void kvm_set_pte(pte_t *ptep, pte_t pteval)
+{
+	*ptep = pteval;
+}
+
+/**
+ * kvm_pgd_alloc() - Allocate and initialise a KVM GPA page directory.
+ *
+ * Allocate a blank KVM GPA page directory (PGD) for representing guest physical
+ * to host physical page mappings.
+ *
+ * Returns:	Pointer to new KVM GPA page directory.
+ *		NULL on allocation failure.
+ */
+pgd_t *kvm_pgd_alloc(void)
+{
+	pgd_t *pgd;
+
+	pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 0);
+	if (pgd)
+		pgd_init((void *)pgd);
+
+	return pgd;
+}
+
+/*
+ * Caller must hold kvm->mm_lock
+ *
+ * Walk the page tables of kvm to find the PTE corresponding to the
+ * address @addr. If page tables don't exist for @addr, they will be created
+ * from the MMU cache if @cache is not NULL.
+ */
+static pte_t *kvm_populate_gpa(struct kvm *kvm,
+				struct kvm_mmu_memory_cache *cache,
+				unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d)) {
+		if (!cache)
+			return NULL;
+
+		pud = kvm_mmu_memory_cache_alloc(cache);
+		pud_init(pud);
+		p4d_populate(NULL, p4d, pud);
+	}
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud)) {
+		if (!cache)
+			return NULL;
+		pmd = kvm_mmu_memory_cache_alloc(cache);
+		pmd_init(pmd);
+		pud_populate(NULL, pud, pmd);
+	}
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd)) {
+		pte_t *pte;
+
+		if (!cache)
+			return NULL;
+		pte = kvm_mmu_memory_cache_alloc(cache);
+		clear_page(pte);
+		pmd_populate_kernel(NULL, pmd, pte);
+	}
+
+	return pte_offset_kernel(pmd, addr);
+}
+
+typedef int (*kvm_pte_ops)(pte_t *pte);
+
+struct kvm_ptw_ctx {
+	kvm_pte_ops	ops;
+	int		need_flush;
+};
+
+static int kvm_ptw_pte(pmd_t *pmd, unsigned long addr, unsigned long end,
+			struct kvm_ptw_ctx *context)
+{
+	pte_t *pte;
+	unsigned long next, start;
+	int ret;
+
+	ret = 0;
+	start = addr;
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		next = addr + PAGE_SIZE;
+		if (!pte_present(*pte))
+			continue;
+
+		ret |= context->ops(pte);
+	} while (pte++, addr = next, addr != end);
+
+	if (context->need_flush && (start + PMD_SIZE == end)) {
+		pte = pte_offset_kernel(pmd, 0);
+		pmd_clear(pmd);
+		free_page((unsigned long)pte);
+	}
+
+	return ret;
+}
+
+static int kvm_ptw_pmd(pud_t *pud, unsigned long addr, unsigned long end,
+			struct kvm_ptw_ctx *context)
+{
+	pmd_t *pmd;
+	unsigned long next, start;
+	int ret;
+
+	ret = 0;
+	start = addr;
+	pmd = pmd_offset(pud, addr);
+	do {
+		next = pmd_addr_end(addr, end);
+		if (!pmd_present(*pmd))
+			continue;
+
+		ret |= kvm_ptw_pte(pmd, addr, next, context);
+	} while (pmd++, addr = next, addr != end);
+
+#ifndef __PAGETABLE_PMD_FOLDED
+	if (context->need_flush && (start + PUD_SIZE == end)) {
+		pmd = pmd_offset(pud, 0);
+		pud_clear(pud);
+		free_page((unsigned long)pmd);
+	}
+#endif
+
+	return ret;
+}
+
+static int kvm_ptw_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
+			struct kvm_ptw_ctx *context)
+{
+	p4d_t *p4d;
+	pud_t *pud;
+	int ret = 0;
+	unsigned long next;
+#ifndef __PAGETABLE_PUD_FOLDED
+	unsigned long start = addr;
+#endif
+
+	p4d = p4d_offset(pgd, addr);
+	pud = pud_offset(p4d, addr);
+	do {
+		next = pud_addr_end(addr, end);
+		if (!pud_present(*pud))
+			continue;
+
+		ret |= kvm_ptw_pmd(pud, addr, next, context);
+	} while (pud++, addr = next, addr != end);
+
+#ifndef __PAGETABLE_PUD_FOLDED
+	if (context->need_flush && (start + PGDIR_SIZE == end)) {
+		pud = pud_offset(p4d, 0);
+		p4d_clear(p4d);
+		free_page((unsigned long)pud);
+	}
+#endif
+
+	return ret;
+}
+
+static int kvm_ptw_pgd(pgd_t *pgd, unsigned long addr, unsigned long end,
+			struct kvm_ptw_ctx *context)
+{
+	unsigned long next;
+	int ret;
+
+	ret = 0;
+	if (addr > end - 1)
+		return ret;
+	pgd = pgd + pgd_index(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		if (!pgd_present(*pgd))
+			continue;
+
+		ret |= kvm_ptw_pud(pgd, addr, next, context);
+	}  while (pgd++, addr = next, addr != end);
+
+	return ret;
+}
+
+/*
+ * clear pte entry
+ */
+static int kvm_flush_pte(pte_t *pte)
+{
+	kvm_set_pte(pte, __pte(0));
+	return 1;
+}
+
+/**
+ * kvm_flush_range() - Flush a range of guest physical addresses.
+ * @kvm:	KVM pointer.
+ * @start_gfn:	Guest frame number of first page in GPA range to flush.
+ * @end_gfn:	Guest frame number of last page in GPA range to flush.
+ *
+ * Flushes a range of GPA mappings from the GPA page tables.
+ *
+ * The caller must hold the @kvm->mmu_lock spinlock.
+ *
+ * Returns:	Whether it's safe to remove the top level page directory because
+ *		all lower levels have been removed.
+ */
+static bool kvm_flush_range(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
+{
+	struct kvm_ptw_ctx ctx;
+
+	ctx.ops = kvm_flush_pte;
+	ctx.need_flush = 1;
+
+	return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT,
+				end_gfn << PAGE_SHIFT, &ctx);
+}
+
+/*
+ * kvm_mkclean_pte
+ * Mark a GPA page table entry clean (writes fault) in the VM's GPA page
+ * table to allow dirty page tracking.
+ */
+static int kvm_mkclean_pte(pte_t *pte)
+{
+	pte_t val;
+
+	val = *pte;
+	if (pte_dirty(val)) {
+		*pte = pte_mkclean(val);
+		return 1;
+	}
+	return 0;
+}
+
+/**
+ * kvm_mkclean_gpa_pt() - Make a range of guest physical addresses clean.
+ * @kvm:	KVM pointer.
+ * @start_gfn:	Guest frame number of first page in GPA range to clean.
+ * @end_gfn:	Guest frame number of last page in GPA range to clean.
+ *
+ * Make a range of GPA mappings clean so that guest writes will fault and
+ * trigger dirty page logging.
+ *
+ * The caller must hold the @kvm->mmu_lock spinlock.
+ *
+ * Returns:	Whether any GPA mappings were modified, which would require
+ *		derived mappings (GVA page tables & TLB entries) to be
+ *		invalidated.
+ */
+static int kvm_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
+{
+	struct kvm_ptw_ctx ctx;
+
+	ctx.ops = kvm_mkclean_pte;
+	ctx.need_flush = 0;
+	return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT,
+				end_gfn << PAGE_SHIFT, &ctx);
+}
+
+/*
+ * kvm_arch_mmu_enable_log_dirty_pt_masked() - write protect dirty pages
+ * @kvm:	The KVM pointer
+ * @slot:	The memory slot associated with mask
+ * @gfn_offset:	The gfn offset in memory slot
+ * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
+ *		slot to be write protected
+ *
+ * Walks the bits set in @mask and write-protects the associated PTEs. The
+ * caller must acquire @kvm->mmu_lock.
+ */
+void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
+		struct kvm_memory_slot *slot,
+		gfn_t gfn_offset, unsigned long mask)
+{
+	gfn_t base_gfn = slot->base_gfn + gfn_offset;
+	gfn_t start = base_gfn +  __ffs(mask);
+	gfn_t end = base_gfn + __fls(mask) + 1;
+
+	kvm_mkclean_gpa_pt(kvm, start, end);
+}
+
+void kvm_arch_commit_memory_region(struct kvm *kvm,
+				   struct kvm_memory_slot *old,
+				   const struct kvm_memory_slot *new,
+				   enum kvm_mr_change change)
+{
+	int needs_flush;
+
+	/*
+	 * If dirty page logging is enabled, write protect all pages in the slot
+	 * ready for dirty logging.
+	 *
+	 * There is no need to do this in any of the following cases:
+	 * CREATE:	No dirty mappings will already exist.
+	 * MOVE/DELETE:	The old mappings will already have been cleaned up by
+	 *		kvm_arch_flush_shadow_memslot()
+	 */
+	if (change == KVM_MR_FLAGS_ONLY &&
+	    (!(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
+	     new->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
+		spin_lock(&kvm->mmu_lock);
+		/* Write protect GPA page table entries */
+		needs_flush = kvm_mkclean_gpa_pt(kvm, new->base_gfn,
+					new->base_gfn + new->npages);
+		if (needs_flush)
+			kvm_flush_remote_tlbs(kvm);
+		spin_unlock(&kvm->mmu_lock);
+	}
+}
+
+void kvm_arch_flush_shadow_all(struct kvm *kvm)
+{
+	/* Flush whole GPA */
+	kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT);
+	/* Flush vpid for each vCPU individually */
+	kvm_flush_remote_tlbs(kvm);
+}
+
+void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
+		struct kvm_memory_slot *slot)
+{
+	int ret;
+
+	/*
+	 * The slot has been made invalid (ready for moving or deletion), so we
+	 * need to ensure that it can no longer be accessed by any guest vCPUs.
+	 */
+	spin_lock(&kvm->mmu_lock);
+	/* Flush slot from GPA */
+	ret = kvm_flush_range(kvm, slot->base_gfn,
+			slot->base_gfn + slot->npages);
+	/* Let implementation do the rest */
+	if (ret)
+		kvm_flush_remote_tlbs(kvm);
+	spin_unlock(&kvm->mmu_lock);
+}
+
+void _kvm_destroy_mm(struct kvm *kvm)
+{
+	/* It should always be safe to remove after flushing the whole range */
+	kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT);
+	pgd_free(NULL, kvm->arch.pgd);
+	kvm->arch.pgd = NULL;
+}
+
+/*
+ * Mark a GPA page table entry old (accesses fault) in the VM's GPA page
+ * table to allow detection of commonly used pages.
+ */
+static int kvm_mkold_pte(pte_t *pte)
+{
+	pte_t val;
+
+	val = *pte;
+	if (pte_young(val)) {
+		*pte = pte_mkold(val);
+		return 1;
+	}
+	return 0;
+}
+
+bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	return kvm_flush_range(kvm, range->start, range->end);
+}
+
+bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	gpa_t gpa = range->start << PAGE_SHIFT;
+	pte_t hva_pte = range->pte;
+	pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa);
+	pte_t old_pte;
+
+	if (!ptep)
+		return false;
+
+	/* Mapping may need adjusting depending on memslot flags */
+	old_pte = *ptep;
+	if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte))
+		hva_pte = pte_mkclean(hva_pte);
+	else if (range->slot->flags & KVM_MEM_READONLY)
+		hva_pte = pte_wrprotect(hva_pte);
+
+	kvm_set_pte(ptep, hva_pte);
+
+	/* Replacing an absent or old page doesn't need flushes */
+	if (!pte_present(old_pte) || !pte_young(old_pte))
+		return false;
+
+	/* Pages swapped, aged, moved, or cleaned require flushes */
+	return !pte_present(hva_pte) ||
+	       !pte_young(hva_pte) ||
+	       pte_pfn(old_pte) != pte_pfn(hva_pte) ||
+	       (pte_dirty(old_pte) && !pte_dirty(hva_pte));
+}
+
+bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	struct kvm_ptw_ctx ctx;
+
+	ctx.ops = kvm_mkold_pte;
+	ctx.need_flush = 0;
+	return kvm_ptw_pgd(kvm->arch.pgd, range->start << PAGE_SHIFT,
+				range->end << PAGE_SHIFT, &ctx);
+}
+
+bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	gpa_t gpa = range->start << PAGE_SHIFT;
+	pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa);
+
+	if (ptep && pte_present(*ptep) && pte_young(*ptep))
+		return true;
+
+	return false;
+}
+
+/**
+ * kvm_map_page_fast() - Fast path GPA fault handler.
+ * @vcpu:		vCPU pointer.
+ * @gpa:		Guest physical address of fault.
+ * @write:	Whether the fault was due to a write.
+ *
+ * Perform fast path GPA fault handling, doing all that can be done without
+ * calling into KVM. This handles marking old pages young (for idle page
+ * tracking), and dirtying of clean pages (for dirty page logging).
+ *
+ * Returns:	0 on success, in which case we can update derived mappings and
+ *		resume guest execution.
+ *		-EFAULT on failure due to absent GPA mapping or write to
+ *		read-only page, in which case KVM must be consulted.
+ */
+static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
+				   bool write)
+{
+	struct kvm *kvm = vcpu->kvm;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	pte_t *ptep;
+	kvm_pfn_t pfn = 0;
+	bool pfn_valid = false, pfn_dirty = false;
+	int ret = 0;
+
+	spin_lock(&kvm->mmu_lock);
+
+	/* Fast path - just check GPA page table for an existing entry */
+	ptep = kvm_populate_gpa(kvm, NULL, gpa);
+	if (!ptep || !pte_present(*ptep)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	/* Track access to pages marked old */
+	if (!pte_young(*ptep)) {
+		kvm_set_pte(ptep, pte_mkyoung(*ptep));
+		pfn = pte_pfn(*ptep);
+		pfn_valid = true;
+		/* call kvm_set_pfn_accessed() after unlock */
+	}
+	if (write && !pte_dirty(*ptep)) {
+		if (!pte_write(*ptep)) {
+			ret = -EFAULT;
+			goto out;
+		}
+
+		/* Track dirtying of writeable pages */
+		kvm_set_pte(ptep, pte_mkdirty(*ptep));
+		pfn = pte_pfn(*ptep);
+		pfn_dirty = true;
+	}
+
+out:
+	spin_unlock(&kvm->mmu_lock);
+	if (pfn_valid)
+		kvm_set_pfn_accessed(pfn);
+	if (pfn_dirty) {
+		mark_page_dirty(kvm, gfn);
+		kvm_set_pfn_dirty(pfn);
+	}
+	return ret;
+}
+
+/**
+ * kvm_map_page() - Map a guest physical page.
+ * @vcpu:		vCPU pointer.
+ * @gpa:		Guest physical address of fault.
+ * @write:	Whether the fault was due to a write.
+ *
+ * Handle GPA faults by creating a new GPA mapping (or updating an existing
+ * one).
+ *
+ * This takes care of marking pages young or dirty (idle/dirty page tracking),
+ * asking KVM for the corresponding PFN, and creating a mapping in the GPA page
+ * tables. Derived mappings (GVA page tables and TLBs) must be handled by the
+ * caller.
+ *
+ * Returns:	0 on success
+ *		-EFAULT if there is no memory region at @gpa or a write was
+ *		attempted to a read-only memory region. This is usually handled
+ *		as an MMIO access.
+ */
+static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
+{
+	bool writeable;
+	int srcu_idx, err = 0, retry_no = 0;
+	unsigned long hva;
+	unsigned long mmu_seq;
+	unsigned long prot_bits;
+	pte_t *ptep, new_pte;
+	kvm_pfn_t pfn;
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+	struct vm_area_struct *vma;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memory_slot *memslot;
+	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+
+	/* Try the fast path to handle old / clean pages */
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	err = kvm_map_page_fast(vcpu, gpa, write);
+	if (!err)
+		goto out;
+
+	memslot = gfn_to_memslot(kvm, gfn);
+	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writeable);
+	if (kvm_is_error_hva(hva) || (write && !writeable))
+		goto out;
+
+	mmap_read_lock(current->mm);
+	vma = find_vma_intersection(current->mm, hva, hva + 1);
+	if (unlikely(!vma)) {
+		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
+		mmap_read_unlock(current->mm);
+		err = -EFAULT;
+		goto out;
+	}
+	mmap_read_unlock(current->mm);
+
+	/* We need a minimum of cached pages ready for page table creation */
+	err = kvm_mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES);
+	if (err)
+		goto out;
+
+retry:
+	/*
+	 * Used to check for invalidations in progress, of the pfn that is
+	 * returned by gfn_to_pfn_prot below.
+	 */
+	mmu_seq = kvm->mmu_invalidate_seq;
+	/*
+	 * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads in
+	 * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
+	 * risk the page we get a reference to getting unmapped before we have a
+	 * chance to grab the mmu_lock without mmu_invalidate_retry() noticing.
+	 *
+	 * This smp_rmb() pairs with the effective smp_wmb() of the combination
+	 * of the pte_unmap_unlock() after the PTE is zapped, and the
+	 * spin_lock() in kvm_mmu_invalidate_<page|range_end>() before
+	 * mmu_invalidate_seq is incremented.
+	 */
+	smp_rmb();
+
+	/* Slow path - ask KVM core whether we can access this GPA */
+	pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable);
+	if (is_error_noslot_pfn(pfn)) {
+		err = -EFAULT;
+		goto out;
+	}
+
+	/* Check if an invalidation has taken place since we got pfn */
+	if (mmu_invalidate_retry(kvm, mmu_seq)) {
+		/*
+		 * This can happen when mappings are changed asynchronously, but
+		 * also synchronously if a COW is triggered by
+		 * gfn_to_pfn_prot().
+		 */
+		kvm_set_pfn_accessed(pfn);
+		kvm_release_pfn_clean(pfn);
+		if (retry_no > 100) {
+			retry_no = 0;
+			schedule();
+		}
+		retry_no++;
+		goto retry;
+	}
+
+	/*
+	 * For emulated devices such as virtio devices, the actual cache
+	 * attribute is determined by the physical machine.
+	 * For pass-through physical devices, it should be uncachable.
+	 */
+	prot_bits = _PAGE_PRESENT | __READABLE;
+	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+		prot_bits |= _CACHE_SUC;
+	else
+		prot_bits |= _CACHE_CC;
+
+	if (writeable) {
+		prot_bits |= _PAGE_WRITE;
+		if (write)
+			prot_bits |= __WRITEABLE;
+	}
+
+	/* Ensure page tables are allocated */
+	spin_lock(&kvm->mmu_lock);
+	ptep = kvm_populate_gpa(kvm, memcache, gpa);
+	new_pte = pfn_pte(pfn, __pgprot(prot_bits));
+	kvm_set_pte(ptep, new_pte);
+
+	err = 0;
+	spin_unlock(&kvm->mmu_lock);
+
+	if (prot_bits & _PAGE_DIRTY) {
+		mark_page_dirty(kvm, gfn);
+		kvm_set_pfn_dirty(pfn);
+	}
+
+	kvm_set_pfn_accessed(pfn);
+	kvm_release_pfn_clean(pfn);
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return err;
+}
+
+int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
+{
+	int ret;
+
+	ret = kvm_map_page(vcpu, gpa, write);
+	if (ret)
+		return ret;
+
+	/* Invalidate this entry in the TLB */
+	return kvm_flush_tlb_gpa(vcpu, gpa);
+}
+
+void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+
+}
+
+int kvm_arch_prepare_memory_region(struct kvm *kvm,
+				   const struct kvm_memory_slot *old,
+				   struct kvm_memory_slot *new,
+				   enum kvm_mr_change change)
+{
+	return 0;
+}
+
+void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
+					const struct kvm_memory_slot *memslot)
+{
+	kvm_flush_remote_tlbs(kvm);
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 20/30] LoongArch: KVM: Implement handle csr exception
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (18 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 19/30] LoongArch: KVM: Implement kvm mmu operations Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 21/30] LoongArch: KVM: Implement handle iocsr exception Tianrui Zhao
                   ` (10 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement KVM handling of LoongArch vCPU exits caused by reading and
writing CSRs, using the csr structure to emulate the registers.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 98 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 98 insertions(+)
 create mode 100644 arch/loongarch/kvm/exit.c

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
new file mode 100644
index 0000000000..18635333fc
--- /dev/null
+++ b/arch/loongarch/kvm/exit.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/preempt.h>
+#include <linux/vmalloc.h>
+#include <asm/fpu.h>
+#include <asm/inst.h>
+#include <asm/time.h>
+#include <asm/tlb.h>
+#include <asm/loongarch.h>
+#include <asm/numa.h>
+#include <asm/kvm_vcpu.h>
+#include <asm/kvm_csr.h>
+#include <linux/kvm_host.h>
+#include <asm/mmzone.h>
+#include "trace.h"
+
+static unsigned long _kvm_emu_read_csr(struct kvm_vcpu *vcpu, int csrid)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+	unsigned long val = 0;
+
+	if (get_gcsr_flag(csrid) & SW_GCSR)
+		val = kvm_read_sw_gcsr(csr, csrid);
+	else
+		pr_warn_once("Unsupported csrread 0x%x with pc %lx\n",
+			csrid, vcpu->arch.pc);
+	return val;
+}
+
+static void _kvm_emu_write_csr(struct kvm_vcpu *vcpu, int csrid,
+	unsigned long val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (get_gcsr_flag(csrid) & SW_GCSR)
+		kvm_write_sw_gcsr(csr, csrid, val);
+	else
+		pr_warn_once("Unsupported csrwrite 0x%x with pc %lx\n",
+				csrid, vcpu->arch.pc);
+}
+
+static void _kvm_emu_xchg_csr(struct kvm_vcpu *vcpu, int csrid,
+	unsigned long csr_mask, unsigned long val)
+{
+	struct loongarch_csrs *csr = vcpu->arch.csr;
+
+	if (get_gcsr_flag(csrid) & SW_GCSR) {
+		unsigned long orig;
+
+		orig = kvm_read_sw_gcsr(csr, csrid);
+		orig &= ~csr_mask;
+		orig |= val & csr_mask;
+		kvm_write_sw_gcsr(csr, csrid, orig);
+	} else
+		pr_warn_once("Unsupported csrxchg 0x%x with pc %lx\n",
+				csrid, vcpu->arch.pc);
+}
+
+static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	unsigned int rd, rj, csrid;
+	unsigned long csr_mask;
+	unsigned long val = 0;
+
+	/*
+	 * The rj field selects the CSR operation:
+	 * rj = 0 means csrrd
+	 * rj = 1 means csrwr
+	 * rj != 0,1 means csrxchg (rj holds the mask)
+	 */
+	rd = inst.reg2csr_format.rd;
+	rj = inst.reg2csr_format.rj;
+	csrid = inst.reg2csr_format.csr;
+
+	/* Process CSR ops */
+	if (rj == 0) {
+		/* process csrrd */
+		val = _kvm_emu_read_csr(vcpu, csrid);
+		vcpu->arch.gprs[rd] = val;
+	} else if (rj == 1) {
+		/* process csrwr */
+		val = vcpu->arch.gprs[rd];
+		_kvm_emu_write_csr(vcpu, csrid, val);
+	} else {
+		/* process csrxchg */
+		val = vcpu->arch.gprs[rd];
+		csr_mask = vcpu->arch.gprs[rj];
+		_kvm_emu_xchg_csr(vcpu, csrid, csr_mask, val);
+	}
+
+	return EMULATE_DONE;
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 21/30] LoongArch: KVM: Implement handle iocsr exception
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (19 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 20/30] LoongArch: KVM: Implement handle csr exception Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 22/30] LoongArch: KVM: Implement handle idle exception Tianrui Zhao
                   ` (9 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement KVM handling of vCPU IOCSR exceptions: set the IOCSR access info
in vcpu->run and return to user space to handle it.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/include/asm/inst.h | 16 ++++++
 arch/loongarch/kvm/exit.c         | 92 +++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+)

diff --git a/arch/loongarch/include/asm/inst.h b/arch/loongarch/include/asm/inst.h
index 71e1ed4165..008a88ead6 100644
--- a/arch/loongarch/include/asm/inst.h
+++ b/arch/loongarch/include/asm/inst.h
@@ -65,6 +65,14 @@ enum reg2_op {
 	revbd_op	= 0x0f,
 	revh2w_op	= 0x10,
 	revhd_op	= 0x11,
+	iocsrrdb_op     = 0x19200,
+	iocsrrdh_op     = 0x19201,
+	iocsrrdw_op     = 0x19202,
+	iocsrrdd_op     = 0x19203,
+	iocsrwrb_op     = 0x19204,
+	iocsrwrh_op     = 0x19205,
+	iocsrwrw_op     = 0x19206,
+	iocsrwrd_op     = 0x19207,
 };
 
 enum reg2i5_op {
@@ -318,6 +326,13 @@ struct reg2bstrd_format {
 	unsigned int opcode : 10;
 };
 
+struct reg2csr_format {
+	unsigned int rd : 5;
+	unsigned int rj : 5;
+	unsigned int csr : 14;
+	unsigned int opcode : 8;
+};
+
 struct reg3_format {
 	unsigned int rd : 5;
 	unsigned int rj : 5;
@@ -346,6 +361,7 @@ union loongarch_instruction {
 	struct reg2i14_format	reg2i14_format;
 	struct reg2i16_format	reg2i16_format;
 	struct reg2bstrd_format	reg2bstrd_format;
+	struct reg2csr_format   reg2csr_format;
 	struct reg3_format	reg3_format;
 	struct reg3sa2_format	reg3sa2_format;
 };
diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 18635333fc..32edd915eb 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -96,3 +96,95 @@ static int _kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst)
 
 	return EMULATE_DONE;
 }
+
+int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu)
+{
+	u32 rd, rj, opcode;
+	u32 addr;
+	unsigned long val;
+	int ret;
+
+	/*
+	 * Each IOCSR access width uses a different opcode
+	 */
+	rd = inst.reg2_format.rd;
+	rj = inst.reg2_format.rj;
+	opcode = inst.reg2_format.opcode;
+	addr = vcpu->arch.gprs[rj];
+	ret = EMULATE_DO_IOCSR;
+	run->iocsr_io.phys_addr = addr;
+	run->iocsr_io.is_write = 0;
+
+	/* LoongArch is little-endian */
+	switch (opcode) {
+	case iocsrrdb_op:
+		run->iocsr_io.len = 1;
+		break;
+	case iocsrrdh_op:
+		run->iocsr_io.len = 2;
+		break;
+	case iocsrrdw_op:
+		run->iocsr_io.len = 4;
+		break;
+	case iocsrrdd_op:
+		run->iocsr_io.len = 8;
+		break;
+	case iocsrwrb_op:
+		run->iocsr_io.len = 1;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrh_op:
+		run->iocsr_io.len = 2;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrw_op:
+		run->iocsr_io.len = 4;
+		run->iocsr_io.is_write = 1;
+		break;
+	case iocsrwrd_op:
+		run->iocsr_io.len = 8;
+		run->iocsr_io.is_write = 1;
+		break;
+	default:
+		ret = EMULATE_FAIL;
+		break;
+	}
+
+	if (ret == EMULATE_DO_IOCSR) {
+		if (run->iocsr_io.is_write) {
+			val = vcpu->arch.gprs[rd];
+			memcpy(run->iocsr_io.data, &val, run->iocsr_io.len);
+		}
+		vcpu->arch.io_gpr = rd;
+	}
+
+	return ret;
+}
+
+int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+	enum emulation_result er = EMULATE_DONE;
+
+	switch (run->iocsr_io.len) {
+	case 8:
+		*gpr = *(s64 *)run->iocsr_io.data;
+		break;
+	case 4:
+		*gpr = *(int *)run->iocsr_io.data;
+		break;
+	case 2:
+		*gpr = *(short *)run->iocsr_io.data;
+		break;
+	case 1:
+		*gpr = *(char *) run->iocsr_io.data;
+		break;
+	default:
+		kvm_err("Bad IOCSR length: %d, addr is 0x%lx",
+				run->iocsr_io.len, vcpu->arch.badv);
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	return er;
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 22/30] LoongArch: KVM: Implement handle idle exception
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (20 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 21/30] LoongArch: KVM: Implement handle iocsr exception Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 23/30] LoongArch: KVM: Implement handle gspr exception Tianrui Zhao
                   ` (8 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement KVM handling of the LoongArch vCPU idle exception, using
kvm_vcpu_block() to emulate it.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 32edd915eb..30748238c7 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -188,3 +188,23 @@ int _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 	return er;
 }
+
+int _kvm_emu_idle(struct kvm_vcpu *vcpu)
+{
+	++vcpu->stat.idle_exits;
+	trace_kvm_exit_idle(vcpu, KVM_TRACE_EXIT_IDLE);
+
+	if (!kvm_arch_vcpu_runnable(vcpu)) {
+		/*
+		 * Switch to the software timer before halt-polling/blocking as
+		 * the guest's timer may be a break event for the vCPU, and the
+		 * hypervisor timer runs only when the CPU is in guest mode.
+		 * Switch before halt-polling so that KVM recognizes an expired
+		 * timer before blocking.
+		 */
+		kvm_save_timer(vcpu);
+		kvm_vcpu_block(vcpu);
+	}
+
+	return EMULATE_DONE;
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 23/30] LoongArch: KVM: Implement handle gspr exception
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (21 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 22/30] LoongArch: KVM: Implement handle idle exception Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 24/30] LoongArch: KVM: Implement handle mmio exception Tianrui Zhao
                   ` (7 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the KVM GSPR exception handling interface, which emulates reads
and writes of the cpucfg, CSR and IOCSR resources.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 112 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 112 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 30748238c7..b0781ea100 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -208,3 +208,115 @@ int _kvm_emu_idle(struct kvm_vcpu *vcpu)
 
 	return EMULATE_DONE;
 }
+
+static int _kvm_trap_handle_gspr(struct kvm_vcpu *vcpu)
+{
+	enum emulation_result er = EMULATE_DONE;
+	struct kvm_run *run = vcpu->run;
+	larch_inst inst;
+	unsigned long curr_pc;
+	int rd, rj;
+	unsigned int index;
+
+	/*
+	 * Fetch the instruction.
+	 */
+	inst.word = vcpu->arch.badi;
+	curr_pc = vcpu->arch.pc;
+	update_pc(&vcpu->arch);
+
+	trace_kvm_exit_gspr(vcpu, inst.word);
+	er = EMULATE_FAIL;
+	switch (((inst.word >> 24) & 0xff)) {
+	case 0x0:
+		/* cpucfg GSPR */
+		if (inst.reg2_format.opcode == 0x1B) {
+			rd = inst.reg2_format.rd;
+			rj = inst.reg2_format.rj;
+			++vcpu->stat.cpucfg_exits;
+			index = vcpu->arch.gprs[rj];
+
+			vcpu->arch.gprs[rd] = read_cpucfg(index);
+			/* Nested KVM is not supported */
+			if (index == 2)
+				vcpu->arch.gprs[rd] &= ~CPUCFG2_LVZP;
+			if (index == 6)
+				vcpu->arch.gprs[rd] &= ~CPUCFG6_PMP;
+			er = EMULATE_DONE;
+		}
+		break;
+	case 0x4:
+		/* csr GSPR */
+		er = _kvm_handle_csr(vcpu, inst);
+		break;
+	case 0x6:
+		/* iocsr, cache, idle GSPR */
+		switch (((inst.word >> 22) & 0x3ff)) {
+		case 0x18:
+			/* cache GSPR */
+			er = EMULATE_DONE;
+			trace_kvm_exit_cache(vcpu, KVM_TRACE_EXIT_CACHE);
+			break;
+		case 0x19:
+			/* iocsr/idle GSPR */
+			switch (((inst.word >> 15) & 0x1ffff)) {
+			case 0xc90:
+				/* iocsr GSPR */
+				er = _kvm_emu_iocsr(inst, run, vcpu);
+				break;
+			case 0xc91:
+				/* idle GSPR */
+				er = _kvm_emu_idle(vcpu);
+				break;
+			default:
+				er = EMULATE_FAIL;
+				break;
+			}
+			break;
+		default:
+			er = EMULATE_FAIL;
+			break;
+		}
+		break;
+	default:
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	/* Rollback PC only if emulation was unsuccessful */
+	if (er == EMULATE_FAIL) {
+		kvm_err("[%#lx]%s: unsupported gspr instruction 0x%08x\n",
+			curr_pc, __func__, inst.word);
+
+		kvm_arch_vcpu_dump_regs(vcpu);
+		vcpu->arch.pc = curr_pc;
+	}
+	return er;
+}
+
+/*
+ * Executing the cpucfg instruction will trigger a GSPR exception.
+ * So do accesses to the unimplemented CSRs 0x15, 0x16,
+ * 0x50~0x53, 0x80, 0x81, 0x90~0x95, 0x98,
+ * 0xc0~0xff, 0x100~0x109, 0x500~0x502,
+ * as well as the cache, idle and iocsr operations.
+ */
+static int _kvm_handle_gspr(struct kvm_vcpu *vcpu)
+{
+	enum emulation_result er = EMULATE_DONE;
+	int ret = RESUME_GUEST;
+
+	er = _kvm_trap_handle_gspr(vcpu);
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else if (er == EMULATE_DO_IOCSR) {
+		vcpu->run->exit_reason = KVM_EXIT_LOONGARCH_IOCSR;
+		ret = RESUME_HOST;
+	} else {
+		kvm_err("%s internal error\n", __func__);
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+	return ret;
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 24/30] LoongArch: KVM: Implement handle mmio exception
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (22 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 23/30] LoongArch: KVM: Implement handle gspr exception Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 25/30] LoongArch: KVM: Implement handle fpu exception Tianrui Zhao
                   ` (6 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement MMIO exception handling: set the MMIO access info in vcpu->run
and return to user space to handle it.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 308 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 308 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index b0781ea100..491d1c39a9 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -209,6 +209,265 @@ int _kvm_emu_idle(struct kvm_vcpu *vcpu)
 	return EMULATE_DONE;
 }
 
+int _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	struct kvm_run *run = vcpu->run;
+	unsigned int rd, op8, opcode;
+	unsigned long rd_val = 0;
+	void *data = run->mmio.data;
+	unsigned long curr_pc;
+	int ret;
+
+	/*
+	 * Update PC and hold onto current PC in case there is
+	 * an error and we want to roll back the PC
+	 */
+	curr_pc = vcpu->arch.pc;
+	update_pc(&vcpu->arch);
+
+	op8 = (inst.word >> 24) & 0xff;
+	run->mmio.phys_addr = vcpu->arch.badv;
+	ret = EMULATE_DO_MMIO;
+	if (op8 < 0x28) {
+		/* stptr.w/d process */
+		rd = inst.reg2i14_format.rd;
+		opcode = inst.reg2i14_format.opcode;
+
+		switch (opcode) {
+		case stptrd_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = vcpu->arch.gprs[rd];
+			break;
+		case stptrw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = vcpu->arch.gprs[rd];
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 < 0x30) {
+		/* st.b/h/w/d process */
+		rd = inst.reg2i12_format.rd;
+		opcode = inst.reg2i12_format.opcode;
+		rd_val = vcpu->arch.gprs[rd];
+
+		switch (opcode) {
+		case std_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = rd_val;
+			break;
+		case stw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = rd_val;
+			break;
+		case sth_op:
+			run->mmio.len = 2;
+			*(unsigned short *)data = rd_val;
+			break;
+		case stb_op:
+			run->mmio.len = 1;
+			*(unsigned char *)data = rd_val;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 == 0x38) {
+		/* stxb/h/w/d process */
+		rd = inst.reg3_format.rd;
+		opcode = inst.reg3_format.opcode;
+
+		switch (opcode) {
+		case stxb_op:
+			run->mmio.len = 1;
+			*(unsigned char *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxh_op:
+			run->mmio.len = 2;
+			*(unsigned short *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxw_op:
+			run->mmio.len = 4;
+			*(unsigned int *)data = vcpu->arch.gprs[rd];
+			break;
+		case stxd_op:
+			run->mmio.len = 8;
+			*(unsigned long *)data = vcpu->arch.gprs[rd];
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else
+		ret = EMULATE_FAIL;
+
+	if (ret == EMULATE_DO_MMIO) {
+		run->mmio.is_write = 1;
+		vcpu->mmio_needed = 1;
+		vcpu->mmio_is_write = 1;
+	} else {
+		/* Roll back PC if emulation was unsuccessful */
+		vcpu->arch.pc = curr_pc;
+		kvm_err("Write not supported inst=0x%08x @%lx BadVaddr:%#lx\n",
+			inst.word, vcpu->arch.pc, vcpu->arch.badv);
+		kvm_arch_vcpu_dump_regs(vcpu);
+	}
+
+	return ret;
+}
+
+int _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst)
+{
+	unsigned int op8, opcode, rd;
+	struct kvm_run *run = vcpu->run;
+	int ret;
+
+	run->mmio.phys_addr = vcpu->arch.badv;
+	vcpu->mmio_needed = 2;	/* signed */
+	op8 = (inst.word >> 24) & 0xff;
+	ret = EMULATE_DO_MMIO;
+
+	if (op8 < 0x28) {
+		/* ldptr.w/d process */
+		rd = inst.reg2i14_format.rd;
+		opcode = inst.reg2i14_format.opcode;
+
+		switch (opcode) {
+		case ldptrd_op:
+			run->mmio.len = 8;
+			break;
+		case ldptrw_op:
+			run->mmio.len = 4;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 < 0x2f) {
+		/* ld.b/h/w/d, ld.bu/hu/wu process */
+		rd = inst.reg2i12_format.rd;
+		opcode = inst.reg2i12_format.opcode;
+
+		switch (opcode) {
+		case ldd_op:
+			run->mmio.len = 8;
+			break;
+		case ldwu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 4;
+			break;
+		case ldw_op:
+			run->mmio.len = 4;
+			break;
+		case ldhu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 2;
+			break;
+		case ldh_op:
+			run->mmio.len = 2;
+			break;
+		case ldbu_op:
+			vcpu->mmio_needed = 1;	/* unsigned */
+			run->mmio.len = 1;
+			break;
+		case ldb_op:
+			run->mmio.len = 1;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else if (op8 == 0x38) {
+		/* ldxb/h/w/d, ldxb/h/wu, ldgtb/h/w/d, ldleb/h/w/d process */
+		rd = inst.reg3_format.rd;
+		opcode = inst.reg3_format.opcode;
+
+		switch (opcode) {
+		case ldxb_op:
+			run->mmio.len = 1;
+			break;
+		case ldxbu_op:
+			run->mmio.len = 1;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxh_op:
+			run->mmio.len = 2;
+			break;
+		case ldxhu_op:
+			run->mmio.len = 2;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxw_op:
+			run->mmio.len = 4;
+			break;
+		case ldxwu_op:
+			run->mmio.len = 4;
+			vcpu->mmio_needed = 1;	/* unsigned */
+			break;
+		case ldxd_op:
+			run->mmio.len = 8;
+			break;
+		default:
+			ret = EMULATE_FAIL;
+			break;
+		}
+	} else
+		ret = EMULATE_FAIL;
+
+	if (ret == EMULATE_DO_MMIO) {
+		/* Set for _kvm_complete_mmio_read use */
+		vcpu->arch.io_gpr = rd;
+		run->mmio.is_write = 0;
+		vcpu->mmio_is_write = 0;
+	} else {
+		kvm_err("Load not supported inst=0x%08x @%lx BadVaddr:%#lx\n",
+			inst.word, vcpu->arch.pc, vcpu->arch.badv);
+		kvm_arch_vcpu_dump_regs(vcpu);
+		vcpu->mmio_needed = 0;
+	}
+	return ret;
+}
+
+int _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+	enum emulation_result er = EMULATE_DONE;
+
+	/* update with new PC */
+	update_pc(&vcpu->arch);
+	switch (run->mmio.len) {
+	case 8:
+		*gpr = *(s64 *)run->mmio.data;
+		break;
+	case 4:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(int *)run->mmio.data;
+		else
+			*gpr = *(unsigned int *)run->mmio.data;
+		break;
+	case 2:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(short *) run->mmio.data;
+		else
+			*gpr = *(unsigned short *)run->mmio.data;
+
+		break;
+	case 1:
+		if (vcpu->mmio_needed == 2)
+			*gpr = *(char *) run->mmio.data;
+		else
+			*gpr = *(unsigned char *) run->mmio.data;
+		break;
+	default:
+		kvm_err("Bad MMIO length: %d, addr is 0x%lx\n",
+				run->mmio.len, vcpu->arch.badv);
+		er = EMULATE_FAIL;
+		break;
+	}
+
+	return er;
+}
+
 static int _kvm_trap_handle_gspr(struct kvm_vcpu *vcpu)
 {
 	enum emulation_result er = EMULATE_DONE;
@@ -320,3 +579,52 @@ static int _kvm_handle_gspr(struct kvm_vcpu *vcpu)
 	}
 	return ret;
 }
+
+static int _kvm_handle_mmu_fault(struct kvm_vcpu *vcpu, bool write)
+{
+	struct kvm_run *run = vcpu->run;
+	unsigned long badv = vcpu->arch.badv;
+	larch_inst inst;
+	enum emulation_result er = EMULATE_DONE;
+	int ret;
+
+	ret = kvm_handle_mm_fault(vcpu, badv, write);
+	if (ret) {
+		/* Treat as MMIO */
+		inst.word = vcpu->arch.badi;
+		if (write) {
+			er = _kvm_emu_mmio_write(vcpu, inst);
+		} else {
+			/* A code fetch fault doesn't count as an MMIO */
+			if (kvm_is_ifetch_fault(&vcpu->arch)) {
+				kvm_err("%s ifetch error addr:%lx\n", __func__, badv);
+				run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+				return RESUME_HOST;
+			}
+
+			er = _kvm_emu_mmio_read(vcpu, inst);
+		}
+	}
+
+	if (er == EMULATE_DONE) {
+		ret = RESUME_GUEST;
+	} else if (er == EMULATE_DO_MMIO) {
+		run->exit_reason = KVM_EXIT_MMIO;
+		ret = RESUME_HOST;
+	} else {
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		ret = RESUME_HOST;
+	}
+
+	return ret;
+}
+
+static int _kvm_handle_write_fault(struct kvm_vcpu *vcpu)
+{
+	return _kvm_handle_mmu_fault(vcpu, true);
+}
+
+static int _kvm_handle_read_fault(struct kvm_vcpu *vcpu)
+{
+	return _kvm_handle_mmu_fault(vcpu, false);
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 25/30] LoongArch: KVM: Implement handle fpu exception
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (23 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 24/30] LoongArch: KVM: Implement handle mmio exception Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 26/30] LoongArch: KVM: Implement kvm exception vector Tianrui Zhao
                   ` (5 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement handling of the FPU-disabled exception, using kvm_own_fpu() to
enable the FPU for the guest.
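
The handler below follows the usual lazy-ownership pattern: enable the unit on
first guest use, and treat a fault while the unit is already owned as an
internal error. A stripped-down host-side sketch (the context struct and
helper names are illustrative; only the KVM_LARCH_FPU flag mirrors the patch):

```c
#include <stdbool.h>

#define KVM_LARCH_FPU	(0x1 << 0)	/* mirrors the patch's aux_inuse bit */

struct vcpu_fpu_state {
	unsigned int aux_inuse;
	int own_calls;			/* counts own_fpu() invocations */
};

/* Stand-in for kvm_own_fpu(): enable the FPU and mark it in use */
static void own_fpu(struct vcpu_fpu_state *s)
{
	s->aux_inuse |= KVM_LARCH_FPU;
	s->own_calls++;
}

/* Returns true to resume the guest, false on internal error */
static bool handle_fpu_disabled(struct vcpu_fpu_state *s)
{
	if (s->aux_inuse & KVM_LARCH_FPU)
		return false;	/* fault while FPU already owned */
	own_fpu(s);
	return true;
}
```

A second FPU-disabled fault with the flag set maps to the WARN_ON path in the
real handler, which exits to user space with KVM_EXIT_INTERNAL_ERROR.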

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index 491d1c39a9..f9c7951261 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -628,3 +628,29 @@ static int _kvm_handle_read_fault(struct kvm_vcpu *vcpu)
 {
 	return _kvm_handle_mmu_fault(vcpu, false);
 }
+
+/**
+ * _kvm_handle_fpu_disabled() - Guest used fpu however it is disabled at host
+ * @vcpu:	Virtual CPU context.
+ *
+ * Handle when the guest attempts to use fpu which hasn't been allowed
+ * by the root context.
+ */
+static int _kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu)
+{
+	struct kvm_run *run = vcpu->run;
+
+	/*
+	 * If guest FPU not present, the FPU operation should have been
+	 * treated as a reserved instruction!
+	 * If FPU already in use, we shouldn't get this at all.
+	 */
+	if (WARN_ON(vcpu->arch.aux_inuse & KVM_LARCH_FPU)) {
+		kvm_err("%s internal error\n", __func__);
+		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		return RESUME_HOST;
+	}
+
+	kvm_own_fpu(vcpu);
+	return RESUME_GUEST;
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 26/30] LoongArch: KVM: Implement kvm exception vector
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (24 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 25/30] LoongArch: KVM: Implement handle fpu exception Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 27/30] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
                   ` (4 subsequent siblings)
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the kvm exception vector, using the _kvm_fault_tables array to store
the handler function pointers, which are used when the vcpu handles an exit.
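
The table-plus-default-handler scheme can be sketched in isolation as follows
(all names and the table size are illustrative stand-ins for the patch's
_kvm_fault_tables, _kvm_init_fault and _kvm_handle_fault):

```c
#include <stddef.h>

#define NR_FAULTS 64		/* stands in for EXCCODE_INT_START */

struct vcpu;			/* opaque here */
typedef int (*exit_handle_fn)(struct vcpu *);

static int fault_ni(struct vcpu *vcpu)   { (void)vcpu; return -1; }
static int fault_read(struct vcpu *vcpu) { (void)vcpu; return 1; }

/* Designated initializers fill only the implemented exccodes */
static exit_handle_fn fault_tables[NR_FAULTS] = {
	[1] = fault_read,	/* e.g. EXCCODE_TLBL */
};

/* Mirror of _kvm_init_fault(): point every empty slot at the NI handler */
static void init_fault(void)
{
	for (size_t i = 0; i < NR_FAULTS; i++)
		if (!fault_tables[i])
			fault_tables[i] = fault_ni;
}

/* Mirror of _kvm_handle_fault(): unconditional indexed dispatch */
static int handle_fault(struct vcpu *vcpu, int fault)
{
	return fault_tables[fault](vcpu);
}
```

Filling the empty slots up front lets the dispatch itself stay branch-free:
no NULL check is needed on the hot path.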

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kvm/exit.c | 46 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
index f9c7951261..a6df2b56ed 100644
--- a/arch/loongarch/kvm/exit.c
+++ b/arch/loongarch/kvm/exit.c
@@ -654,3 +654,49 @@ static int _kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu)
 	kvm_own_fpu(vcpu);
 	return RESUME_GUEST;
 }
+
+/*
+ * LoongArch KVM callback handling for unimplemented guest exit reasons
+ */
+static int _kvm_fault_ni(struct kvm_vcpu *vcpu)
+{
+	unsigned long estat, badv;
+	unsigned int exccode, inst;
+
+	/*
+	 *  Fetch the instruction.
+	 */
+	badv = vcpu->arch.badv;
+	estat = vcpu->arch.host_estat;
+	exccode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+	inst = vcpu->arch.badi;
+	kvm_err("Exccode: %d PC=%#lx inst=0x%08x BadVaddr=%#lx estat=%#lx\n",
+			exccode, vcpu->arch.pc, inst, badv, read_gcsr_estat());
+	kvm_arch_vcpu_dump_regs(vcpu);
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+
+	return RESUME_HOST;
+}
+
+static exit_handle_fn _kvm_fault_tables[EXCCODE_INT_START] = {
+	[EXCCODE_TLBL]			= _kvm_handle_read_fault,
+	[EXCCODE_TLBI]			= _kvm_handle_read_fault,
+	[EXCCODE_TLBS]			= _kvm_handle_write_fault,
+	[EXCCODE_TLBM]			= _kvm_handle_write_fault,
+	[EXCCODE_FPDIS]			= _kvm_handle_fpu_disabled,
+	[EXCCODE_GSPR]			= _kvm_handle_gspr,
+};
+
+void _kvm_init_fault(void)
+{
+	int i;
+
+	for (i = 0; i < EXCCODE_INT_START; i++)
+		if (!_kvm_fault_tables[i])
+			_kvm_fault_tables[i] = _kvm_fault_ni;
+}
+
+int _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault)
+{
+	return _kvm_fault_tables[fault](vcpu);
+}
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 27/30] LoongArch: KVM: Implement vcpu world switch
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (25 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 26/30] LoongArch: KVM: Implement kvm exception vector Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-09-07 20:04   ` WANG Xuerui
  2023-08-31  8:30 ` [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
                   ` (3 subsequent siblings)
  30 siblings, 1 reply; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui

Implement the LoongArch vcpu world switch, including the vcpu entering the
guest and the vcpu exiting from the guest; both operations need to save and
restore the host and guest registers.
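
The asm-offsets entries added below exist so the assembly in switch.S can
address C struct fields by constant byte offsets; OFFSET() boils down to
emitting offsetof() values at build time. A host-side sketch of the idea (the
struct layout here is illustrative, not the real kvm_vcpu_arch):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for a few kvm_vcpu_arch fields */
struct vcpu_arch_demo {
	uint64_t pc;		/* guest PC (cf. KVM_ARCH_GPC) */
	uint64_t gprs[32];	/* guest GPRs (cf. KVM_ARCH_GGPR) */
	uint64_t host_sp;	/* host stack pointer (cf. KVM_ARCH_HSP) */
};

/*
 * asm-offsets.c prints offsetof() results as assembler #defines, so the
 * assembly can do e.g. "ld.d t0, a2, KVM_ARCH_GPC".
 */
enum {
	DEMO_ARCH_GPC  = offsetof(struct vcpu_arch_demo, pc),
	DEMO_ARCH_GGPR = offsetof(struct vcpu_arch_demo, gprs),
	DEMO_ARCH_HSP  = offsetof(struct vcpu_arch_demo, host_sp),
};

/* Cf. GGPR_OFFSET(n) in switch.S: each GPR is 8 bytes past the array base */
#define DEMO_GGPR_OFFSET(n)	(DEMO_ARCH_GGPR + 8 * (n))
```

This is why asm-offsets.c must include linux/kvm_host.h in the patch: the
offsets are computed from the real struct definitions, so assembly and C can
never drift apart silently.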

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/kernel/asm-offsets.c |  32 ++++
 arch/loongarch/kvm/switch.S         | 255 ++++++++++++++++++++++++++++
 2 files changed, 287 insertions(+)
 create mode 100644 arch/loongarch/kvm/switch.S

diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
index 505e4bf596..d4bbaa74c1 100644
--- a/arch/loongarch/kernel/asm-offsets.c
+++ b/arch/loongarch/kernel/asm-offsets.c
@@ -9,6 +9,7 @@
 #include <linux/mm.h>
 #include <linux/kbuild.h>
 #include <linux/suspend.h>
+#include <linux/kvm_host.h>
 #include <asm/cpu-info.h>
 #include <asm/ptrace.h>
 #include <asm/processor.h>
@@ -285,3 +286,34 @@ void output_fgraph_ret_regs_defines(void)
 	BLANK();
 }
 #endif
+
+static void __used output_kvm_defines(void)
+{
+	COMMENT(" KVM/LOONGARCH Specific offsets. ");
+
+	OFFSET(VCPU_FCSR0, kvm_vcpu_arch, fpu.fcsr);
+	OFFSET(VCPU_FCC, kvm_vcpu_arch, fpu.fcc);
+	BLANK();
+
+	OFFSET(KVM_VCPU_ARCH, kvm_vcpu, arch);
+	OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm);
+	OFFSET(KVM_VCPU_RUN, kvm_vcpu, run);
+	BLANK();
+
+	OFFSET(KVM_ARCH_HSP, kvm_vcpu_arch, host_sp);
+	OFFSET(KVM_ARCH_HTP, kvm_vcpu_arch, host_tp);
+	OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit);
+	OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd);
+	OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry);
+	OFFSET(KVM_ARCH_GPC, kvm_vcpu_arch, pc);
+	OFFSET(KVM_ARCH_GGPR, kvm_vcpu_arch, gprs);
+	OFFSET(KVM_ARCH_HESTAT, kvm_vcpu_arch, host_estat);
+	OFFSET(KVM_ARCH_HBADV, kvm_vcpu_arch, badv);
+	OFFSET(KVM_ARCH_HBADI, kvm_vcpu_arch, badi);
+	OFFSET(KVM_ARCH_HECFG, kvm_vcpu_arch, host_ecfg);
+	OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry);
+	OFFSET(KVM_ARCH_HPERCPU, kvm_vcpu_arch, host_percpu);
+
+	OFFSET(KVM_GPGD, kvm, arch.pgd);
+	BLANK();
+}
diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S
new file mode 100644
index 0000000000..f637fcd56c
--- /dev/null
+++ b/arch/loongarch/kvm/switch.S
@@ -0,0 +1,255 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
+ */
+
+#include <linux/linkage.h>
+#include <asm/stackframe.h>
+#include <asm/asm.h>
+#include <asm/asmmacro.h>
+#include <asm/regdef.h>
+#include <asm/loongarch.h>
+
+#define PT_GPR_OFFSET(x)	(PT_R0 + 8*x)
+#define GGPR_OFFSET(x)		(KVM_ARCH_GGPR + 8*x)
+
+.macro kvm_save_host_gpr base
+	.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
+	st.d	$r\n, \base, PT_GPR_OFFSET(\n)
+	.endr
+.endm
+
+.macro kvm_restore_host_gpr base
+	.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
+	ld.d	$r\n, \base, PT_GPR_OFFSET(\n)
+	.endr
+.endm
+
+/*
+ * Save and restore all GPRs except the base register,
+ * whose default value is a2.
+ */
+.macro kvm_save_guest_gprs base
+	.irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
+	st.d	$r\n, \base, GGPR_OFFSET(\n)
+	.endr
+.endm
+
+.macro kvm_restore_guest_gprs base
+	.irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
+	ld.d	$r\n, \base, GGPR_OFFSET(\n)
+	.endr
+.endm
+
+/*
+ * prepare switch to guest, save host reg and restore guest reg.
+ * a2: kvm_vcpu_arch, don't touch it until 'ertn'
+ * t0, t1: temp register
+ */
+.macro kvm_switch_to_guest
+	/* set host excfg.VS=0, all exceptions share one exception entry */
+	csrrd		t0, LOONGARCH_CSR_ECFG
+	bstrins.w	t0, zero, CSR_ECFG_VS_SHIFT_END, CSR_ECFG_VS_SHIFT
+	csrwr		t0, LOONGARCH_CSR_ECFG
+
+	/* Load up the new EENTRY */
+	ld.d	t0, a2, KVM_ARCH_GEENTRY
+	csrwr	t0, LOONGARCH_CSR_EENTRY
+
+	/* Set Guest ERA */
+	ld.d	t0, a2, KVM_ARCH_GPC
+	csrwr	t0, LOONGARCH_CSR_ERA
+
+	/* Save host PGDL */
+	csrrd	t0, LOONGARCH_CSR_PGDL
+	st.d	t0, a2, KVM_ARCH_HPGD
+
+	/* Switch to kvm */
+	ld.d	t1, a2, KVM_VCPU_KVM - KVM_VCPU_ARCH
+
+	/* Load guest PGDL */
+	li.w    t0, KVM_GPGD
+	ldx.d   t0, t1, t0
+	csrwr	t0, LOONGARCH_CSR_PGDL
+
+	/* Mix GID and RID */
+	csrrd		t1, LOONGARCH_CSR_GSTAT
+	bstrpick.w	t1, t1, CSR_GSTAT_GID_SHIFT_END, CSR_GSTAT_GID_SHIFT
+	csrrd		t0, LOONGARCH_CSR_GTLBC
+	bstrins.w	t0, t1, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
+	csrwr		t0, LOONGARCH_CSR_GTLBC
+
+	/*
+	 * Switch to guest:
+	 *  GSTAT.PGM = 1, ERRCTL.ISERR = 0, TLBRPRMD.ISTLBR = 0
+	 *  ertn
+	 */
+
+	/*
+	 * Enable interrupts in root mode with the coming ertn so that host
+	 * interrupts can be serviced while the VM runs; the guest crmd
+	 * comes from the separate gcsr_CRMD register
+	 */
+	ori	t0, zero, CSR_PRMD_PIE
+	csrxchg	t0, t0,   LOONGARCH_CSR_PRMD
+
+	/* Set PVM bit to setup ertn to guest context */
+	ori	t0, zero, CSR_GSTAT_PVM
+	csrxchg	t0, t0,   LOONGARCH_CSR_GSTAT
+
+	/* Load Guest gprs */
+	kvm_restore_guest_gprs a2
+	/* Load KVM_ARCH register */
+	ld.d	a2, a2,	(KVM_ARCH_GGPR + 8 * REG_A2)
+
+	ertn
+.endm
+
+	/*
+	 * exception entry for general exception from guest mode
+	 *  - IRQ is disabled
+	 *  - kernel privilege in root mode
+	 *  - page mode kept unchanged from previous prmd in root mode
+	 *  - FIXME: tlb exceptions cannot happen, since the TLB-related
+	 *    registers, such as the pgd table/vmid registers etc., are
+	 *    still in guest mode; this will be fixed once hw page walk is
+	 *    enabled in future
+	 * load kvm_vcpu from reserved CSR KVM_VCPU_KS, and save a2 to KVM_TEMP_KS
+	 */
+	.text
+	.cfi_sections	.debug_frame
+SYM_CODE_START(kvm_vector_entry)
+	csrwr	a2,   KVM_TEMP_KS
+	csrrd	a2,   KVM_VCPU_KS
+	addi.d	a2,   a2, KVM_VCPU_ARCH
+
+	/* After save gprs, free to use any gpr */
+	kvm_save_guest_gprs a2
+	/* Save guest a2 */
+	csrrd	t0,	KVM_TEMP_KS
+	st.d	t0,	a2,	(KVM_ARCH_GGPR + 8 * REG_A2)
+
+	/* a2: kvm_vcpu_arch, a1 is free to use */
+	csrrd	s1,   KVM_VCPU_KS
+	ld.d	s0,   s1, KVM_VCPU_RUN
+
+	csrrd	t0,   LOONGARCH_CSR_ESTAT
+	st.d	t0,   a2, KVM_ARCH_HESTAT
+	csrrd	t0,   LOONGARCH_CSR_ERA
+	st.d	t0,   a2, KVM_ARCH_GPC
+	csrrd	t0,   LOONGARCH_CSR_BADV
+	st.d	t0,   a2, KVM_ARCH_HBADV
+	csrrd	t0,   LOONGARCH_CSR_BADI
+	st.d	t0,   a2, KVM_ARCH_HBADI
+
+	/* Restore host excfg.VS */
+	csrrd	t0, LOONGARCH_CSR_ECFG
+	ld.d	t1, a2, KVM_ARCH_HECFG
+	or	t0, t0, t1
+	csrwr	t0, LOONGARCH_CSR_ECFG
+
+	/* Restore host eentry */
+	ld.d	t0, a2, KVM_ARCH_HEENTRY
+	csrwr	t0, LOONGARCH_CSR_EENTRY
+
+	/* restore host pgd table */
+	ld.d    t0, a2, KVM_ARCH_HPGD
+	csrwr   t0, LOONGARCH_CSR_PGDL
+
+	/*
+	 * Disable PGM bit to enter root mode by default with next ertn
+	 */
+	ori	t0, zero, CSR_GSTAT_PVM
+	csrxchg	zero, t0, LOONGARCH_CSR_GSTAT
+	/*
+	 * Clear GTLBC.TGID field
+	 *       0: for root  tlb update in future tlb instr
+	 *  others: for guest tlb update like gpa to hpa in future tlb instr
+	 */
+	csrrd	t0, LOONGARCH_CSR_GTLBC
+	bstrins.w	t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
+	csrwr	t0, LOONGARCH_CSR_GTLBC
+	ld.d	tp, a2, KVM_ARCH_HTP
+	ld.d	sp, a2, KVM_ARCH_HSP
+	/* restore per cpu register */
+	ld.d	u0, a2, KVM_ARCH_HPERCPU
+	addi.d	sp, sp, -PT_SIZE
+
+	/* Prepare handle exception */
+	or	a0, s0, zero
+	or	a1, s1, zero
+	ld.d	t8, a2, KVM_ARCH_HANDLE_EXIT
+	jirl	ra, t8, 0
+
+	or	a2, s1, zero
+	addi.d	a2, a2, KVM_VCPU_ARCH
+
+	/* resume host when ret <= 0 */
+	bge	zero, a0, ret_to_host
+
+	/*
+	 * Return to guest:
+	 * save the per cpu register again, as we may have switched to
+	 * another cpu
+	 */
+	st.d	u0, a2, KVM_ARCH_HPERCPU
+
+	/* Save kvm_vcpu to kscratch */
+	csrwr	s1, KVM_VCPU_KS
+	kvm_switch_to_guest
+
+ret_to_host:
+	ld.d    a2, a2, KVM_ARCH_HSP
+	addi.d  a2, a2, -PT_SIZE
+	kvm_restore_host_gpr    a2
+	jr      ra
+
+SYM_INNER_LABEL(kvm_vector_entry_end, SYM_L_LOCAL)
+SYM_CODE_END(kvm_vector_entry)
+
+/*
+ * int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu)
+ *
+ * @register_param:
+ *  a0: kvm_run* run
+ *  a1: kvm_vcpu* vcpu
+ */
+SYM_FUNC_START(kvm_enter_guest)
+	/* allocate space in stack bottom */
+	addi.d	a2, sp, -PT_SIZE
+	/* save host gprs */
+	kvm_save_host_gpr a2
+
+	/* save host crmd,prmd csr to stack */
+	csrrd	a3, LOONGARCH_CSR_CRMD
+	st.d	a3, a2, PT_CRMD
+	csrrd	a3, LOONGARCH_CSR_PRMD
+	st.d	a3, a2, PT_PRMD
+
+	addi.d	a2, a1, KVM_VCPU_ARCH
+	st.d	sp, a2, KVM_ARCH_HSP
+	st.d	tp, a2, KVM_ARCH_HTP
+	/* Save per cpu register */
+	st.d	u0, a2, KVM_ARCH_HPERCPU
+
+	/* Save kvm_vcpu to kscratch */
+	csrwr	a1, KVM_VCPU_KS
+	kvm_switch_to_guest
+SYM_INNER_LABEL(kvm_enter_guest_end, SYM_L_LOCAL)
+SYM_FUNC_END(kvm_enter_guest)
+
+SYM_FUNC_START(kvm_save_fpu)
+	fpu_save_csr	a0 t1
+	fpu_save_double a0 t1
+	fpu_save_cc	a0 t1 t2
+	jr              ra
+SYM_FUNC_END(kvm_save_fpu)
+
+SYM_FUNC_START(kvm_restore_fpu)
+	fpu_restore_double a0 t1
+	fpu_restore_csr    a0 t1
+	fpu_restore_cc	   a0 t1 t2
+	jr                 ra
+SYM_FUNC_END(kvm_restore_fpu)
+
+	.section ".rodata"
+SYM_DATA(kvm_vector_size, .quad kvm_vector_entry_end - kvm_vector_entry)
+SYM_DATA(kvm_enter_guest_size, .quad kvm_enter_guest_end - kvm_enter_guest)
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (26 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 27/30] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-09-07 20:10   ` WANG Xuerui
  2023-09-11  7:30   ` WANG Xuerui
  2023-08-31  8:30 ` [PATCH v20 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part Tianrui Zhao
                   ` (2 subsequent siblings)
  30 siblings, 2 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, kernel test robot

Enable the LoongArch KVM config and add the Makefile to support building the
kvm module.

Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 arch/loongarch/Kbuild                      |  1 +
 arch/loongarch/Kconfig                     |  3 ++
 arch/loongarch/configs/loongson3_defconfig |  2 +
 arch/loongarch/kvm/Kconfig                 | 45 ++++++++++++++++++++++
 arch/loongarch/kvm/Makefile                | 22 +++++++++++
 5 files changed, 73 insertions(+)
 create mode 100644 arch/loongarch/kvm/Kconfig
 create mode 100644 arch/loongarch/kvm/Makefile

diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild
index b01f5cdb27..40be8a1696 100644
--- a/arch/loongarch/Kbuild
+++ b/arch/loongarch/Kbuild
@@ -2,6 +2,7 @@ obj-y += kernel/
 obj-y += mm/
 obj-y += net/
 obj-y += vdso/
+obj-y += kvm/
 
 # for cleaning
 subdir- += boot
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index ecf282dee5..7f2f7ccc76 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -123,6 +123,7 @@ config LOONGARCH
 	select HAVE_KPROBES
 	select HAVE_KPROBES_ON_FTRACE
 	select HAVE_KRETPROBES
+	select HAVE_KVM
 	select HAVE_MOD_ARCH_SPECIFIC
 	select HAVE_NMI
 	select HAVE_PCI
@@ -650,3 +651,5 @@ source "kernel/power/Kconfig"
 source "drivers/acpi/Kconfig"
 
 endmenu
+
+source "arch/loongarch/kvm/Kconfig"
diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
index d64849b4cb..7acb4ae7af 100644
--- a/arch/loongarch/configs/loongson3_defconfig
+++ b/arch/loongarch/configs/loongson3_defconfig
@@ -63,6 +63,8 @@ CONFIG_EFI_ZBOOT=y
 CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
 CONFIG_EFI_CAPSULE_LOADER=m
 CONFIG_EFI_TEST=m
+CONFIG_VIRTUALIZATION=y
+CONFIG_KVM=m
 CONFIG_MODULES=y
 CONFIG_MODULE_FORCE_LOAD=y
 CONFIG_MODULE_UNLOAD=y
diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig
new file mode 100644
index 0000000000..bf7d6e7cde
--- /dev/null
+++ b/arch/loongarch/kvm/Kconfig
@@ -0,0 +1,45 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# KVM configuration
+#
+
+source "virt/kvm/Kconfig"
+
+menuconfig VIRTUALIZATION
+	bool "Virtualization"
+	help
+	  Say Y here to get to see options for using your Linux host to run
+	  other operating systems inside virtual machines (guests).
+	  This option alone does not add any kernel code.
+
+	  If you say N, all options in this submenu will be skipped and
+	  disabled.
+
+if VIRTUALIZATION
+
+config AS_HAS_LVZ_EXTENSION
+	def_bool $(as-instr,hvcl 0)
+
+config KVM
+	tristate "Kernel-based Virtual Machine (KVM) support"
+	depends on HAVE_KVM
+	depends on AS_HAS_LVZ_EXTENSION
+	select MMU_NOTIFIER
+	select ANON_INODES
+	select PREEMPT_NOTIFIERS
+	select KVM_MMIO
+	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
+	select KVM_GENERIC_HARDWARE_ENABLING
+	select KVM_XFER_TO_GUEST_WORK
+	select HAVE_KVM_DIRTY_RING_ACQ_REL
+	select HAVE_KVM_VCPU_ASYNC_IOCTL
+	select HAVE_KVM_EVENTFD
+	select SRCU
+	help
+	  Support hosting virtualized guest machines using hardware
+	  virtualization extensions. You will need a fairly recent
+	  processor equipped with virtualization extensions.
+
+	  If unsure, say N.
+
+endif # VIRTUALIZATION
diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
new file mode 100644
index 0000000000..2335e873a6
--- /dev/null
+++ b/arch/loongarch/kvm/Makefile
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for LOONGARCH KVM support
+#
+
+ccflags-y += -I $(srctree)/$(src)
+
+include $(srctree)/virt/kvm/Makefile.kvm
+
+obj-$(CONFIG_KVM) += kvm.o
+
+kvm-y += main.o
+kvm-y += vm.o
+kvm-y += vmid.o
+kvm-y += tlb.o
+kvm-y += mmu.o
+kvm-y += vcpu.o
+kvm-y += exit.o
+kvm-y += interrupt.o
+kvm-y += timer.o
+kvm-y += switch.o
+kvm-y += csr_ops.o
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (27 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-08-31  8:30 ` [PATCH v20 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM Tianrui Zhao
  2023-09-11  4:02 ` [PATCH v20 00/30] Add KVM LoongArch support Huacai Chen
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, Huacai Chen

Supplement the kvm document with the LoongArch-specific parts, such as adding
API introductions for GET/SET_ONE_REG, GET/SET_FPU, GET/SET_MP_STATE, etc.
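
Given the CSR id bit pattern documented in this patch (0x9030 0000 0001 00
<reg:5> <sel:3>), a user-space caller could compose a register id like this
(a sketch based solely on that pattern; the constant and function names are
illustrative, not from the kernel UAPI headers):

```c
#include <stdint.h>

/* Base of the LoongArch CSR register group, per the documented pattern */
#define DEMO_REG_LOONGARCH_CSR	0x9030000000010000ULL

/* Pack <reg:5> and <sel:3> into the low byte of the 64-bit id */
static uint64_t loongarch_csr_reg_id(unsigned int reg, unsigned int sel)
{
	return DEMO_REG_LOONGARCH_CSR |
	       (uint64_t)((reg & 0x1f) << 3) | (sel & 0x7);
}
```

The resulting id would then go into the `id` field of struct kvm_one_reg for
the KVM_GET_ONE_REG/KVM_SET_ONE_REG vcpu ioctls.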

Reviewed-by: Huacai Chen <chenhuacai@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 Documentation/virt/kvm/api.rst | 70 +++++++++++++++++++++++++++++-----
 1 file changed, 61 insertions(+), 9 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index c0ddd30354..8ad10ec17a 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -416,6 +416,13 @@ Reads the general purpose registers from the vcpu.
 	__u64 pc;
   };
 
+  /* LoongArch */
+  struct kvm_regs {
+	/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
+        unsigned long gpr[32];
+        unsigned long pc;
+  };
+
 
 4.12 KVM_SET_REGS
 -----------------
@@ -506,7 +513,7 @@ translation mode.
 ------------------
 
 :Capability: basic
-:Architectures: x86, ppc, mips, riscv
+:Architectures: x86, ppc, mips, riscv, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_interrupt (in)
 :Returns: 0 on success, negative on failure.
@@ -592,6 +599,14 @@ b) KVM_INTERRUPT_UNSET
 
 This is an asynchronous vcpu ioctl and can be invoked from any thread.
 
+LOONGARCH:
+^^^^^^^^^^
+
+Queues an external interrupt to be injected into the virtual CPU. A negative
+interrupt number dequeues the interrupt.
+
+This is an asynchronous vcpu ioctl and can be invoked from any thread.
+
 
 4.17 KVM_DEBUG_GUEST
 --------------------
@@ -737,7 +752,7 @@ signal mask.
 ----------------
 
 :Capability: basic
-:Architectures: x86
+:Architectures: x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_fpu (out)
 :Returns: 0 on success, -1 on error
@@ -746,7 +761,7 @@ Reads the floating point state from the vcpu.
 
 ::
 
-  /* for KVM_GET_FPU and KVM_SET_FPU */
+  /* x86: for KVM_GET_FPU and KVM_SET_FPU */
   struct kvm_fpu {
 	__u8  fpr[8][16];
 	__u16 fcw;
@@ -761,12 +776,21 @@ Reads the floating point state from the vcpu.
 	__u32 pad2;
   };
 
+  /* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */
+  struct kvm_fpu {
+        __u32 fcsr;
+        __u64 fcc;
+        struct kvm_fpureg {
+                __u64 val64[4];
+        } fpr[32];
+  };
+
 
 4.23 KVM_SET_FPU
 ----------------
 
 :Capability: basic
-:Architectures: x86
+:Architectures: x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_fpu (in)
 :Returns: 0 on success, -1 on error
@@ -775,7 +799,7 @@ Writes the floating point state to the vcpu.
 
 ::
 
-  /* for KVM_GET_FPU and KVM_SET_FPU */
+  /* x86: for KVM_GET_FPU and KVM_SET_FPU */
   struct kvm_fpu {
 	__u8  fpr[8][16];
 	__u16 fcw;
@@ -790,6 +814,15 @@ Writes the floating point state to the vcpu.
 	__u32 pad2;
   };
 
+  /* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */
+  struct kvm_fpu {
+        __u32 fcsr;
+        __u64 fcc;
+        struct kvm_fpureg {
+                __u64 val64[4];
+        } fpr[32];
+  };
+
 
 4.24 KVM_CREATE_IRQCHIP
 -----------------------
@@ -1387,7 +1420,7 @@ documentation when it pops into existence).
 -------------------
 
 :Capability: KVM_CAP_ENABLE_CAP
-:Architectures: mips, ppc, s390, x86
+:Architectures: mips, ppc, s390, x86, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_enable_cap (in)
 :Returns: 0 on success; -1 on error
@@ -1442,7 +1475,7 @@ for vm-wide capabilities.
 ---------------------
 
 :Capability: KVM_CAP_MP_STATE
-:Architectures: x86, s390, arm64, riscv
+:Architectures: x86, s390, arm64, riscv, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_mp_state (out)
 :Returns: 0 on success; -1 on error
@@ -1460,7 +1493,7 @@ Possible values are:
 
    ==========================    ===============================================
    KVM_MP_STATE_RUNNABLE         the vcpu is currently running
-                                 [x86,arm64,riscv]
+                                 [x86,arm64,riscv,loongarch]
    KVM_MP_STATE_UNINITIALIZED    the vcpu is an application processor (AP)
                                  which has not yet received an INIT signal [x86]
    KVM_MP_STATE_INIT_RECEIVED    the vcpu has received an INIT signal, and is
@@ -1516,11 +1549,14 @@ For riscv:
 The only states that are valid are KVM_MP_STATE_STOPPED and
 KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not.
 
+On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect
+whether the vcpu is runnable.
+
 4.39 KVM_SET_MP_STATE
 ---------------------
 
 :Capability: KVM_CAP_MP_STATE
-:Architectures: x86, s390, arm64, riscv
+:Architectures: x86, s390, arm64, riscv, loongarch
 :Type: vcpu ioctl
 :Parameters: struct kvm_mp_state (in)
 :Returns: 0 on success; -1 on error
@@ -1538,6 +1574,9 @@ For arm64/riscv:
 The only states that are valid are KVM_MP_STATE_STOPPED and
 KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not.
 
+On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect
+whether the vcpu is runnable.
+
 4.40 KVM_SET_IDENTITY_MAP_ADDR
 ------------------------------
 
@@ -2839,6 +2878,19 @@ Following are the RISC-V D-extension registers:
   0x8020 0000 0600 0020 fcsr      Floating point control and status register
 ======================= ========= =============================================
 
+LoongArch registers are mapped using the lower 32 bits, the upper 16 bits of
+which encode the register group type.
+
+LoongArch csr registers are used to control the guest cpu or get the status of
+the guest cpu, and they have the following id bit patterns::
+
+  0x9030 0000 0001 00 <reg:5> <sel:3>   (64-bit)
+
+LoongArch KVM control registers are used to implement some newly defined
+functions such as setting the vcpu counter or resetting the vcpu, and they
+have the following id bit patterns::
+
+  0x9030 0000 0002 <reg:16>
+
 
 4.69 KVM_GET_ONE_REG
 --------------------
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v20 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (28 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part Tianrui Zhao
@ 2023-08-31  8:30 ` Tianrui Zhao
  2023-09-11  4:02 ` [PATCH v20 00/30] Add KVM LoongArch support Huacai Chen
  30 siblings, 0 replies; 56+ messages in thread
From: Tianrui Zhao @ 2023-08-31  8:30 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, WANG Xuerui, Greg Kroah-Hartman,
	loongarch, Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton,
	maobibo, Xi Ruoyao, zhaotianrui, Huacai Chen

Add maintainers for LoongArch KVM.

Acked-by: Huacai Chen <chenhuacai@loongson.cn>
Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
---
 MAINTAINERS | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 242178802c..11eb27dd66 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11472,6 +11472,18 @@ F:	include/kvm/arm_*
 F:	tools/testing/selftests/kvm/*/aarch64/
 F:	tools/testing/selftests/kvm/aarch64/
 
+KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch)
+M:	Tianrui Zhao <zhaotianrui@loongson.cn>
+M:	Bibo Mao <maobibo@loongson.cn>
+M:	Huacai Chen <chenhuacai@kernel.org>
+L:	kvm@vger.kernel.org
+L:	loongarch@lists.linux.dev
+S:	Maintained
+T:	git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
+F:	arch/loongarch/include/asm/kvm*
+F:	arch/loongarch/include/uapi/asm/kvm*
+F:	arch/loongarch/kvm/
+
 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
 M:	Huacai Chen <chenhuacai@kernel.org>
 L:	linux-mips@vger.kernel.org
-- 
2.27.0


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 19/30] LoongArch: KVM: Implement kvm mmu operations
  2023-08-31  8:30 ` [PATCH v20 19/30] LoongArch: KVM: Implement kvm mmu operations Tianrui Zhao
@ 2023-09-07 19:57   ` WANG Xuerui
  2023-09-12  9:42     ` zhaotianrui
  0 siblings, 1 reply; 56+ messages in thread
From: WANG Xuerui @ 2023-09-07 19:57 UTC (permalink / raw)
  To: Tianrui Zhao, linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, Greg Kroah-Hartman, loongarch,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo,
	Xi Ruoyao

On 8/31/23 16:30, Tianrui Zhao wrote:
> Implement the LoongArch KVM MMU, which is used to translate GPA to HPA when
> the guest exits because of an address translation exception. This patch
> implements GPA page table allocation, GPA lookup, and flushing of guest GPAs
> from the table.
>
> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   arch/loongarch/kvm/mmu.c | 678 +++++++++++++++++++++++++++++++++++++++
>   1 file changed, 678 insertions(+)
>   create mode 100644 arch/loongarch/kvm/mmu.c
>
> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> new file mode 100644
> index 0000000000..4bb20393f4
> --- /dev/null
> +++ b/arch/loongarch/kvm/mmu.c
> @@ -0,0 +1,678 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#include <linux/highmem.h>
> +#include <linux/page-flags.h>
> +#include <linux/kvm_host.h>
> +#include <linux/uaccess.h>
> +#include <asm/mmu_context.h>
> +#include <asm/pgalloc.h>
> +#include <asm/tlb.h>
> +
> +/*
> + * KVM_MMU_CACHE_MIN_PAGES is the number of GPA page table translation levels
> + * for which pages need to be cached.
> + */
> +#define KVM_MMU_CACHE_MIN_PAGES (CONFIG_PGTABLE_LEVELS - 1)
> +
> +static inline void kvm_set_pte(pte_t *ptep, pte_t pteval)
> +{
> +	*ptep = pteval;
> +}
> +
> +/**
> + * kvm_pgd_alloc() - Allocate and initialise a KVM GPA page directory.
> + *
> + * Allocate a blank KVM GPA page directory (PGD) for representing guest physical
> + * to host physical page mappings.
> + *
> + * Returns:	Pointer to new KVM GPA page directory.
> + *		NULL on allocation failure.
> + */
> +pgd_t *kvm_pgd_alloc(void)
> +{
> +	pgd_t *pgd;
> +
> +	pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 0);
> +	if (pgd)
> +		pgd_init((void *)pgd);
> +
> +	return pgd;
> +}
> +
> +/*
> + * Caller must hold kvm->mmu_lock
> + *
> + * Walk the page tables of kvm to find the PTE corresponding to the
> + * address @addr. If page tables don't exist for @addr, they will be created
> + * from the MMU cache if @cache is not NULL.
> + */
> +static pte_t *kvm_populate_gpa(struct kvm *kvm,
> +				struct kvm_mmu_memory_cache *cache,
> +				unsigned long addr)
> +{
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	pgd = kvm->arch.pgd + pgd_index(addr);
> +	p4d = p4d_offset(pgd, addr);
> +	if (p4d_none(*p4d)) {
> +		if (!cache)
> +			return NULL;
> +
> +		pud = kvm_mmu_memory_cache_alloc(cache);
> +		pud_init(pud);
> +		p4d_populate(NULL, p4d, pud);
> +	}
> +
> +	pud = pud_offset(p4d, addr);
> +	if (pud_none(*pud)) {
> +		if (!cache)
> +			return NULL;
> +		pmd = kvm_mmu_memory_cache_alloc(cache);
> +		pmd_init(pmd);
> +		pud_populate(NULL, pud, pmd);
> +	}
> +
> +	pmd = pmd_offset(pud, addr);
> +	if (pmd_none(*pmd)) {
> +		pte_t *pte;
> +
> +		if (!cache)
> +			return NULL;
> +		pte = kvm_mmu_memory_cache_alloc(cache);
> +		clear_page(pte);
> +		pmd_populate_kernel(NULL, pmd, pte);
> +	}
> +
> +	return pte_offset_kernel(pmd, addr);
> +}
> +
> +typedef int (*kvm_pte_ops)(pte_t *pte);
> +
> +struct kvm_ptw_ctx {
> +	kvm_pte_ops	ops;
> +	int		need_flush;
> +};
> +
> +static int kvm_ptw_pte(pmd_t *pmd, unsigned long addr, unsigned long end,
> +			struct kvm_ptw_ctx *context)
> +{
> +	pte_t *pte;
> +	unsigned long next, start;
> +	int ret;
> +
> +	ret = 0;
> +	start = addr;
> +	pte = pte_offset_kernel(pmd, addr);
> +	do {
> +		next = addr + PAGE_SIZE;
> +		if (!pte_present(*pte))
> +			continue;
> +
> +		ret |= context->ops(pte);
> +	} while (pte++, addr = next, addr != end);
> +
> +	if (context->need_flush && (start + PMD_SIZE == end)) {
> +		pte = pte_offset_kernel(pmd, 0);
> +		pmd_clear(pmd);
> +		free_page((unsigned long)pte);
> +	}
> +
> +	return ret;
> +}
> +
> +static int kvm_ptw_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> +			struct kvm_ptw_ctx *context)
> +{
> +	pmd_t *pmd;
> +	unsigned long next, start;
> +	int ret;
> +
> +	ret = 0;
> +	start = addr;
> +	pmd = pmd_offset(pud, addr);
> +	do {
> +		next = pmd_addr_end(addr, end);
> +		if (!pmd_present(*pmd))
> +			continue;
> +
> +		ret |= kvm_ptw_pte(pmd, addr, next, context);
> +	} while (pmd++, addr = next, addr != end);
> +
> +#ifndef __PAGETABLE_PMD_FOLDED
> +	if (context->need_flush && (start + PUD_SIZE == end)) {
> +		pmd = pmd_offset(pud, 0);
> +		pud_clear(pud);
> +		free_page((unsigned long)pmd);
> +	}
> +#endif
> +
> +	return ret;
> +}
> +
> +static int kvm_ptw_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
> +			struct kvm_ptw_ctx *context)
> +{
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	int ret = 0;
> +	unsigned long next;
> +#ifndef __PAGETABLE_PUD_FOLDED
> +	unsigned long start = addr;
> +#endif
> +
> +	p4d = p4d_offset(pgd, addr);
> +	pud = pud_offset(p4d, addr);
> +	do {
> +		next = pud_addr_end(addr, end);
> +		if (!pud_present(*pud))
> +			continue;
> +
> +		ret |= kvm_ptw_pmd(pud, addr, next, context);
> +	} while (pud++, addr = next, addr != end);
> +
> +#ifndef __PAGETABLE_PUD_FOLDED
> +	if (context->need_flush && (start + PGDIR_SIZE == end)) {
> +		pud = pud_offset(p4d, 0);
> +		p4d_clear(p4d);
> +		free_page((unsigned long)pud);
> +	}
> +#endif
> +
> +	return ret;
> +}
> +
> +static int kvm_ptw_pgd(pgd_t *pgd, unsigned long addr, unsigned long end,
> +			struct kvm_ptw_ctx *context)
> +{
> +	unsigned long next;
> +	int ret;
> +
> +	ret = 0;
> +	if (addr > end - 1)
> +		return ret;
> +	pgd = pgd + pgd_index(addr);
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		if (!pgd_present(*pgd))
> +			continue;
> +
> +		ret |= kvm_ptw_pud(pgd, addr, next, context);
> +	} while (pgd++, addr = next, addr != end);
> +
> +	return ret;
> +}
> +
> +/*
> + * clear pte entry
> + */
> +static int kvm_flush_pte(pte_t *pte)
> +{
> +	kvm_set_pte(pte, __pte(0));
> +	return 1;
> +}
> +
> +/**
> + * kvm_flush_range() - Flush a range of guest physical addresses.
> + * @kvm:	KVM pointer.
> + * @start_gfn:	Guest frame number of first page in GPA range to flush.
> + * @end_gfn:	Guest frame number of last page in GPA range to flush.
> + *
> + * Flushes a range of GPA mappings from the GPA page tables.
> + *
> + * The caller must hold the @kvm->mmu_lock spinlock.
> + *
> + * Returns:	Whether it's safe to remove the top-level page directory because
> + *		all lower levels have been removed.
> + */
> +static bool kvm_flush_range(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
> +{
> +	struct kvm_ptw_ctx ctx;
> +
> +	ctx.ops = kvm_flush_pte;
> +	ctx.need_flush = 1;
> +
> +	return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT,
> +				end_gfn << PAGE_SHIFT, &ctx);
> +}
> +
> +/*
> + * kvm_mkclean_pte
> + * Mark a range of guest physical address space clean (writes fault) in the VM's
> + * GPA page table to allow dirty page tracking.
> + */
> +static int kvm_mkclean_pte(pte_t *pte)
> +{
> +	pte_t val;
> +
> +	val = *pte;
> +	if (pte_dirty(val)) {
> +		*pte = pte_mkclean(val);
> +		return 1;
> +	}
> +	return 0;
> +}
> +
> +/*
> + * kvm_mkclean_gpa_pt() - Make a range of guest physical addresses clean.
> + * @kvm:	KVM pointer.
> + * @start_gfn:	Guest frame number of first page in GPA range to flush.
> + * @end_gfn:	Guest frame number of last page in GPA range to flush.
> + *
> + * Make a range of GPA mappings clean so that guest writes will fault and
> + * trigger dirty page logging.
> + *
> + * The caller must hold the @kvm->mmu_lock spinlock.
> + *
> + * Returns:	Whether any GPA mappings were modified, which would require
> + *		derived mappings (GVA page tables & TLB entries) to be
> + *		invalidated.
> + */
> +static int kvm_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
> +{
> +	struct kvm_ptw_ctx ctx;
> +
> +	ctx.ops = kvm_mkclean_pte;
> +	ctx.need_flush = 0;
> +	return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT,
> +				end_gfn << PAGE_SHIFT, &ctx);
> +}
> +
> +/*
> + * kvm_arch_mmu_enable_log_dirty_pt_masked() - write protect dirty pages
> + * @kvm:	The KVM pointer
> + * @slot:	The memory slot associated with mask
> + * @gfn_offset:	The gfn offset in memory slot
> + * @mask:	The mask of dirty pages at offset 'gfn_offset' in this memory
> + *		slot to be write protected
> + *
> + * Walks the bits set in @mask and write-protects the associated PTEs. The
> + * caller must acquire @kvm->mmu_lock.
> + */
> +void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
> +		struct kvm_memory_slot *slot,
> +		gfn_t gfn_offset, unsigned long mask)
> +{
> +	gfn_t base_gfn = slot->base_gfn + gfn_offset;
> +	gfn_t start = base_gfn +  __ffs(mask);
One extra space after the plus sign?
> +	gfn_t end = base_gfn + __fls(mask) + 1;
> +
> +	kvm_mkclean_gpa_pt(kvm, start, end);
> +}
> +
> +void kvm_arch_commit_memory_region(struct kvm *kvm,
> +				   struct kvm_memory_slot *old,
> +				   const struct kvm_memory_slot *new,
> +				   enum kvm_mr_change change)
> +{
> +	int needs_flush;
> +
> +	/*
> +	 * If dirty page logging is enabled, write protect all pages in the slot
> +	 * ready for dirty logging.
> +	 *
> +	 * There is no need to do this in any of the following cases:
> +	 * CREATE:	No dirty mappings will already exist.
> +	 * MOVE/DELETE:	The old mappings will already have been cleaned up by
> +	 *		kvm_arch_flush_shadow_memslot()
> +	 */
> +	if (change == KVM_MR_FLAGS_ONLY &&
> +	    (!(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
> +	     new->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
> +		spin_lock(&kvm->mmu_lock);
> +		/* Write protect GPA page table entries */
> +		needs_flush = kvm_mkclean_gpa_pt(kvm, new->base_gfn,
> +					new->base_gfn + new->npages);
> +		if (needs_flush)
> +			kvm_flush_remote_tlbs(kvm);
> +		spin_unlock(&kvm->mmu_lock);
> +	}
> +}
> +
> +void kvm_arch_flush_shadow_all(struct kvm *kvm)
> +{
> +	/* Flush whole GPA */
> +	kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT);
> +	/* Flush vpid for each vCPU individually */
> +	kvm_flush_remote_tlbs(kvm);
> +}
> +
> +void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
> +		struct kvm_memory_slot *slot)
> +{
> +	int ret;
> +
> +	/*
> +	 * The slot has been made invalid (ready for moving or deletion), so we
> +	 * need to ensure that it can no longer be accessed by any guest vCPUs.
> +	 */
> +	spin_lock(&kvm->mmu_lock);
> +	/* Flush slot from GPA */
> +	ret = kvm_flush_range(kvm, slot->base_gfn,
> +			slot->base_gfn + slot->npages);
> +	/* Let implementation do the rest */
> +	if (ret)
> +		kvm_flush_remote_tlbs(kvm);
> +	spin_unlock(&kvm->mmu_lock);
> +}
> +
> +void _kvm_destroy_mm(struct kvm *kvm)
> +{
> +	/* It should always be safe to remove after flushing the whole range */
> +	kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT);
> +	pgd_free(NULL, kvm->arch.pgd);
> +	kvm->arch.pgd = NULL;
> +}
> +
> +/*
> + * Mark a range of guest physical address space old (all accesses fault) in the
> + * VM's GPA page table to allow detection of commonly used pages.
> + */
> +static int kvm_mkold_pte(pte_t *pte)
> +{
> +	pte_t val;
> +
> +	val = *pte;
"pte_t val = *pte" would be enough... You may want to check the entire 
patch series for simplifications like this.
> +	if (pte_young(val)) {
> +		*pte = pte_mkold(val);
> +		return 1;
> +	}
> +	return 0;
> +}
> +
> +bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
> +{
> +	return kvm_flush_range(kvm, range->start, range->end);
> +}
> +
> +bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> +{
> +	gpa_t gpa = range->start << PAGE_SHIFT;
> +	pte_t hva_pte = range->pte;
This has become "range->arg.pte" since commit 3e1efe2b67d3 ("KVM: Wrap 
kvm_{gfn,hva}_range.pte in a per-action union") which is already inside 
linux-next.
> +	pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa);
> +	pte_t old_pte;
> +
> +	if (!ptep)
> +		return false;
> +
> +	/* Mapping may need adjusting depending on memslot flags */
> +	old_pte = *ptep;
> +	if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte))
> +		hva_pte = pte_mkclean(hva_pte);
> +	else if (range->slot->flags & KVM_MEM_READONLY)
> +		hva_pte = pte_wrprotect(hva_pte);
> +
> +	kvm_set_pte(ptep, hva_pte);
> +
> +	/* Replacing an absent or old page doesn't need flushes */
> +	if (!pte_present(old_pte) || !pte_young(old_pte))
> +		return false;
> +
> +	/* Pages swapped, aged, moved, or cleaned require flushes */
> +	return !pte_present(hva_pte) ||
> +	       !pte_young(hva_pte) ||
> +	       pte_pfn(old_pte) != pte_pfn(hva_pte) ||
> +	       (pte_dirty(old_pte) && !pte_dirty(hva_pte));
> +}
> +
> +bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> +{
> +	struct kvm_ptw_ctx ctx;
> +
> +	ctx.ops = kvm_mkold_pte;
> +	ctx.need_flush = 0;
> +	return kvm_ptw_pgd(kvm->arch.pgd, range->start << PAGE_SHIFT,
> +				range->end << PAGE_SHIFT, &ctx);
> +}
> +
> +bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> +{
> +	gpa_t gpa = range->start << PAGE_SHIFT;
> +	pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa);
> +
> +	if (ptep && pte_present(*ptep) && pte_young(*ptep))
> +		return true;
> +
> +	return false;
> +}
> +
> +/**
> + * kvm_map_page_fast() - Fast path GPA fault handler.
> + * @vcpu:		vCPU pointer.
> + * @gpa:		Guest physical address of fault.
> + * @write:	Whether the fault was due to a write.
> + *
> + * Perform fast path GPA fault handling, doing all that can be done without
> + * calling into KVM. This handles marking old pages young (for idle page
> + * tracking), and dirtying of clean pages (for dirty page logging).
> + *
> + * Returns:	0 on success, in which case we can update derived mappings and
> + *		resume guest execution.
> + *		-EFAULT on failure due to absent GPA mapping or write to
> + *		read-only page, in which case KVM must be consulted.
> + */
> +static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
> +				   bool write)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +	gfn_t gfn = gpa >> PAGE_SHIFT;
> +	pte_t *ptep;
> +	kvm_pfn_t pfn = 0;
> +	bool pfn_valid = false, pfn_dirty = false;
> +	int ret = 0;
> +
> +	spin_lock(&kvm->mmu_lock);
> +
> +	/* Fast path - just check GPA page table for an existing entry */
> +	ptep = kvm_populate_gpa(kvm, NULL, gpa);
> +	if (!ptep || !pte_present(*ptep)) {
> +		ret = -EFAULT;
> +		goto out;
> +	}
> +
> +	/* Track access to pages marked old */
> +	if (!pte_young(*ptep)) {
> +		kvm_set_pte(ptep, pte_mkyoung(*ptep));
> +		pfn = pte_pfn(*ptep);
> +		pfn_valid = true;
> +		/* call kvm_set_pfn_accessed() after unlock */
> +	}
> +	if (write && !pte_dirty(*ptep)) {
> +		if (!pte_write(*ptep)) {
> +			ret = -EFAULT;
> +			goto out;
> +		}
> +
> +		/* Track dirtying of writeable pages */
> +		kvm_set_pte(ptep, pte_mkdirty(*ptep));
> +		pfn = pte_pfn(*ptep);
> +		pfn_dirty = true;
> +	}
> +
> +out:
> +	spin_unlock(&kvm->mmu_lock);
> +	if (pfn_valid)
> +		kvm_set_pfn_accessed(pfn);
> +	if (pfn_dirty) {
> +		mark_page_dirty(kvm, gfn);
> +		kvm_set_pfn_dirty(pfn);
> +	}
> +	return ret;
> +}
> +
> +/**
> + * kvm_map_page() - Map a guest physical page.
> + * @vcpu:		vCPU pointer.
> + * @gpa:		Guest physical address of fault.
> + * @write:	Whether the fault was due to a write.
> + *
> + * Handle GPA faults by creating a new GPA mapping (or updating an existing
> + * one).
> + *
> + * This takes care of marking pages young or dirty (idle/dirty page tracking),
> + * asking KVM for the corresponding PFN, and creating a mapping in the GPA page
> + * tables. Derived mappings (GVA page tables and TLBs) must be handled by the
> + * caller.
> + *
> + * Returns:	0 on success
> + *		-EFAULT if there is no memory region at @gpa or a write was
> + *		attempted to a read-only memory region. This is usually handled
> + *		as an MMIO access.
> + */
> +static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
> +{
> +	bool writeable;
> +	int srcu_idx, err = 0, retry_no = 0;
> +	unsigned long hva;
> +	unsigned long mmu_seq;
> +	unsigned long prot_bits;
> +	pte_t *ptep, new_pte;
> +	kvm_pfn_t pfn;
> +	gfn_t gfn = gpa >> PAGE_SHIFT;
> +	struct vm_area_struct *vma;
> +	struct kvm *kvm = vcpu->kvm;
> +	struct kvm_memory_slot *memslot;
> +	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> +
> +	/* Try the fast path to handle old / clean pages */
> +	srcu_idx = srcu_read_lock(&kvm->srcu);
> +	err = kvm_map_page_fast(vcpu, gpa, write);
> +	if (!err)
> +		goto out;
> +
> +	memslot = gfn_to_memslot(kvm, gfn);
> +	hva = gfn_to_hva_memslot_prot(memslot, gfn, &writeable);
> +	if (kvm_is_error_hva(hva) || (write && !writeable))
> +		goto out;
> +
> +	mmap_read_lock(current->mm);
> +	vma = find_vma_intersection(current->mm, hva, hva + 1);
> +	if (unlikely(!vma)) {
> +		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
> +		mmap_read_unlock(current->mm);
> +		err = -EFAULT;
> +		goto out;
> +	}
> +	mmap_read_unlock(current->mm);
> +
> +	/* We need a minimum of cached pages ready for page table creation */
> +	err = kvm_mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES);
> +	if (err)
> +		goto out;
> +
> +retry:
> +	/*
> +	 * Used to check for invalidations in progress, of the pfn that is
> +	 * returned by gfn_to_pfn_prot() below.
> +	 */
> +	mmu_seq = kvm->mmu_invalidate_seq;
> +	/*
> +	 * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads in
> +	 * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
> +	 * risk the page we get a reference to getting unmapped before we have a
> +	 * chance to grab the mmu_lock without mmu_invalidate_retry() noticing.
> +	 *
> +	 * This smp_rmb() pairs with the effective smp_wmb() of the combination
> +	 * of the pte_unmap_unlock() after the PTE is zapped, and the
> +	 * spin_lock() in kvm_mmu_invalidate_<page|range_end>() before
> +	 * mmu_invalidate_seq is incremented.
> +	 */
> +	smp_rmb();
> +
> +	/* Slow path - ask KVM core whether we can access this GPA */
> +	pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable);
> +	if (is_error_noslot_pfn(pfn)) {
> +		err = -EFAULT;
> +		goto out;
> +	}
> +
> +	/* Check if an invalidation has taken place since we got pfn */
> +	if (mmu_invalidate_retry(kvm, mmu_seq)) {
> +		/*
Wrong indentation?
> +		 * This can happen when mappings are changed asynchronously, but
> +		 * also synchronously if a COW is triggered by
> +		 * gfn_to_pfn_prot().
> +		 */
> +		kvm_set_pfn_accessed(pfn);
> +		kvm_release_pfn_clean(pfn);
> +		if (retry_no > 100) {
> +			retry_no = 0;
> +			schedule();
> +		}
> +		retry_no++;
> +		goto retry;
> +	}
> +
> +	/*
> +	 * For emulated devices such as virtio devices, the actual cache
> +	 * attribute is determined by the physical machine.
> +	 * For pass-through physical devices, it should be uncacheable.
> +	 */
> +	prot_bits = _PAGE_PRESENT | __READABLE;
> +	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
> +		prot_bits |= _CACHE_SUC;
> +	else
> +		prot_bits |= _CACHE_CC;
> +
> +	if (writeable) {
> +		prot_bits |= _PAGE_WRITE;
> +		if (write)
> +			prot_bits |= __WRITEABLE;
> +	}
> +
> +	/* Ensure page tables are allocated */
> +	spin_lock(&kvm->mmu_lock);
> +	ptep = kvm_populate_gpa(kvm, memcache, gpa);
> +	new_pte = pfn_pte(pfn, __pgprot(prot_bits));
> +	kvm_set_pte(ptep, new_pte);
> +
> +	err = 0;
> +	spin_unlock(&kvm->mmu_lock);
> +
> +	if (prot_bits & _PAGE_DIRTY) {
> +		mark_page_dirty(kvm, gfn);
> +		kvm_set_pfn_dirty(pfn);
> +	}
> +
> +	kvm_set_pfn_accessed(pfn);
> +	kvm_release_pfn_clean(pfn);
> +out:
> +	srcu_read_unlock(&kvm->srcu, srcu_idx);
> +	return err;
> +}
> +
> +int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
> +{
> +	int ret;
> +
> +	ret = kvm_map_page(vcpu, gpa, write);
> +	if (ret)
> +		return ret;
> +
> +	/* Invalidate this entry in the TLB */
> +	return kvm_flush_tlb_gpa(vcpu, gpa);
> +}
> +
> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
> +{
> +
> +}
> +
> +int kvm_arch_prepare_memory_region(struct kvm *kvm,
> +				   const struct kvm_memory_slot *old,
> +				   struct kvm_memory_slot *new,
> +				   enum kvm_mr_change change)
> +{
> +	return 0;
> +}
> +
> +void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
> +					const struct kvm_memory_slot *memslot)
> +{
> +	kvm_flush_remote_tlbs(kvm);
> +}

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 27/30] LoongArch: KVM: Implement vcpu world switch
  2023-08-31  8:30 ` [PATCH v20 27/30] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
@ 2023-09-07 20:04   ` WANG Xuerui
  2023-09-12  9:55     ` zhaotianrui
  0 siblings, 1 reply; 56+ messages in thread
From: WANG Xuerui @ 2023-09-07 20:04 UTC (permalink / raw)
  To: Tianrui Zhao, linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, Greg Kroah-Hartman, loongarch,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo,
	Xi Ruoyao


On 8/31/23 16:30, Tianrui Zhao wrote:
> Implement the LoongArch vcpu world switch, including vcpu entry to the guest
> and vcpu exit from the guest; both operations need to save and restore the
> host and guest registers.
>
> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   arch/loongarch/kernel/asm-offsets.c |  32 ++++
>   arch/loongarch/kvm/switch.S         | 255 ++++++++++++++++++++++++++++
>   2 files changed, 287 insertions(+)
>   create mode 100644 arch/loongarch/kvm/switch.S
>
> diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
> index 505e4bf596..d4bbaa74c1 100644
> --- a/arch/loongarch/kernel/asm-offsets.c
> +++ b/arch/loongarch/kernel/asm-offsets.c
> @@ -9,6 +9,7 @@
>   #include <linux/mm.h>
>   #include <linux/kbuild.h>
>   #include <linux/suspend.h>
> +#include <linux/kvm_host.h>
>   #include <asm/cpu-info.h>
>   #include <asm/ptrace.h>
>   #include <asm/processor.h>
> @@ -285,3 +286,34 @@ void output_fgraph_ret_regs_defines(void)
>   	BLANK();
>   }
>   #endif
> +
> +static void __used output_kvm_defines(void)
> +{
> +	COMMENT(" KVM/LOONGARCH Specific offsets. ");
"LoongArch"?
> +
> +	OFFSET(VCPU_FCSR0, kvm_vcpu_arch, fpu.fcsr);
> +	OFFSET(VCPU_FCC, kvm_vcpu_arch, fpu.fcc);
> +	BLANK();
> +
> +	OFFSET(KVM_VCPU_ARCH, kvm_vcpu, arch);
> +	OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm);
> +	OFFSET(KVM_VCPU_RUN, kvm_vcpu, run);
> +	BLANK();
> +
> +	OFFSET(KVM_ARCH_HSP, kvm_vcpu_arch, host_sp);
> +	OFFSET(KVM_ARCH_HTP, kvm_vcpu_arch, host_tp);
> +	OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit);
> +	OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd);
> +	OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry);
> +	OFFSET(KVM_ARCH_GPC, kvm_vcpu_arch, pc);
> +	OFFSET(KVM_ARCH_GGPR, kvm_vcpu_arch, gprs);
> +	OFFSET(KVM_ARCH_HESTAT, kvm_vcpu_arch, host_estat);
> +	OFFSET(KVM_ARCH_HBADV, kvm_vcpu_arch, badv);
> +	OFFSET(KVM_ARCH_HBADI, kvm_vcpu_arch, badi);
> +	OFFSET(KVM_ARCH_HECFG, kvm_vcpu_arch, host_ecfg);
> +	OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry);
> +	OFFSET(KVM_ARCH_HPERCPU, kvm_vcpu_arch, host_percpu);
> +
> +	OFFSET(KVM_GPGD, kvm, arch.pgd);
> +	BLANK();
> +}
> diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S
> new file mode 100644
> index 0000000000..f637fcd56c
> --- /dev/null
> +++ b/arch/loongarch/kvm/switch.S
> @@ -0,0 +1,255 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#include <linux/linkage.h>
> +#include <asm/stackframe.h>
> +#include <asm/asm.h>
> +#include <asm/asmmacro.h>
> +#include <asm/regdef.h>
> +#include <asm/loongarch.h>
> +
> +#define PT_GPR_OFFSET(x)	(PT_R0 + 8*x)
> +#define GGPR_OFFSET(x)		(KVM_ARCH_GGPR + 8*x)
> +
> +.macro kvm_save_host_gpr base
> +	.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
> +	st.d	$r\n, \base, PT_GPR_OFFSET(\n)
> +	.endr
> +.endm
> +
> +.macro kvm_restore_host_gpr base
> +	.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
> +	ld.d	$r\n, \base, PT_GPR_OFFSET(\n)
> +	.endr
> +.endm
> +
> +/*
> + * save and restore all gprs except base register,
> + * and default value of base register is a2.
> + */
> +.macro kvm_save_guest_gprs base
> +	.irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
> +	st.d	$r\n, \base, GGPR_OFFSET(\n)
> +	.endr
> +.endm
> +
> +.macro kvm_restore_guest_gprs base
> +	.irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
> +	ld.d	$r\n, \base, GGPR_OFFSET(\n)
> +	.endr
> +.endm
> +
> +/*
> + * prepare switch to guest, save host reg and restore guest reg.
> + * a2: kvm_vcpu_arch, don't touch it until 'ertn'
> + * t0, t1: temp register
> + */
> +.macro kvm_switch_to_guest
> +	/* set host excfg.VS=0, all exceptions share one exception entry */
> +	csrrd		t0, LOONGARCH_CSR_ECFG
> +	bstrins.w	t0, zero, CSR_ECFG_VS_SHIFT_END, CSR_ECFG_VS_SHIFT
> +	csrwr		t0, LOONGARCH_CSR_ECFG
> +
> +	/* Load up the new EENTRY */
> +	ld.d	t0, a2, KVM_ARCH_GEENTRY
> +	csrwr	t0, LOONGARCH_CSR_EENTRY
> +
> +	/* Set Guest ERA */
> +	ld.d	t0, a2, KVM_ARCH_GPC
> +	csrwr	t0, LOONGARCH_CSR_ERA
> +
> +	/* Save host PGDL */
> +	csrrd	t0, LOONGARCH_CSR_PGDL
> +	st.d	t0, a2, KVM_ARCH_HPGD
> +
> +	/* Switch to kvm */
> +	ld.d	t1, a2, KVM_VCPU_KVM - KVM_VCPU_ARCH
> +
> +	/* Load guest PGDL */
> +	li.w    t0, KVM_GPGD
> +	ldx.d   t0, t1, t0
> +	csrwr	t0, LOONGARCH_CSR_PGDL
> +
> +	/* Mix GID and RID */
> +	csrrd		t1, LOONGARCH_CSR_GSTAT
> +	bstrpick.w	t1, t1, CSR_GSTAT_GID_SHIFT_END, CSR_GSTAT_GID_SHIFT
> +	csrrd		t0, LOONGARCH_CSR_GTLBC
> +	bstrins.w	t0, t1, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
> +	csrwr		t0, LOONGARCH_CSR_GTLBC
> +
> +	/*
> +	 * Switch to guest:
> +	 *  GSTAT.PGM = 1, ERRCTL.ISERR = 0, TLBRPRMD.ISTLBR = 0
> +	 *  ertn
> +	 */
> +
> +	/*
> +	 * Enable interrupts in root mode with the coming ertn so that host
> +	 * interrupts can be responded to while the VM runs.
> +	 * The guest crmd comes from the separate gcsr_CRMD register.
> +	 */
> +	ori	t0, zero, CSR_PRMD_PIE
Use "li.w" like the place several lines before?
> +	csrxchg	t0, t0,   LOONGARCH_CSR_PRMD
> +
> +	/* Set PVM bit to setup ertn to guest context */
> +	ori	t0, zero, CSR_GSTAT_PVM
Similarly here...
> +	csrxchg	t0, t0,   LOONGARCH_CSR_GSTAT
> +
> +	/* Load Guest gprs */
> +	kvm_restore_guest_gprs a2
> +	/* Load KVM_ARCH register */
> +	ld.d	a2, a2,	(KVM_ARCH_GGPR + 8 * REG_A2)
> +
> +	ertn
> +.endm
> +
> +	/*
> +	 * exception entry for general exception from guest mode
> +	 *  - IRQ is disabled
> +	 *  - kernel privilege in root mode
> +	 *  - page mode keep unchanged from previous prmd in root mode
> +	 *  - FIXME: a tlb exception cannot happen here since the TLB-related
> +	 *  -        registers (pgd table, vmid, etc.) are still in guest mode;
> +	 *  -        this will be fixed once hw page walk is enabled
> +	 * load kvm_vcpu from reserved CSR KVM_VCPU_KS, and save a2 to KVM_TEMP_KS
> +	 */
> +	.text
> +	.cfi_sections	.debug_frame
> +SYM_CODE_START(kvm_vector_entry)
> +	csrwr	a2,   KVM_TEMP_KS
> +	csrrd	a2,   KVM_VCPU_KS
> +	addi.d	a2,   a2, KVM_VCPU_ARCH
> +
> +	/* After save gprs, free to use any gpr */
> +	kvm_save_guest_gprs a2
> +	/* Save guest a2 */
> +	csrrd	t0,	KVM_TEMP_KS
> +	st.d	t0,	a2,	(KVM_ARCH_GGPR + 8 * REG_A2)
> +
> +	/* a2: kvm_vcpu_arch, a1 is free to use */
> +	csrrd	s1,   KVM_VCPU_KS
> +	ld.d	s0,   s1, KVM_VCPU_RUN
> +
> +	csrrd	t0,   LOONGARCH_CSR_ESTAT
> +	st.d	t0,   a2, KVM_ARCH_HESTAT
> +	csrrd	t0,   LOONGARCH_CSR_ERA
> +	st.d	t0,   a2, KVM_ARCH_GPC
> +	csrrd	t0,   LOONGARCH_CSR_BADV
> +	st.d	t0,   a2, KVM_ARCH_HBADV
> +	csrrd	t0,   LOONGARCH_CSR_BADI
> +	st.d	t0,   a2, KVM_ARCH_HBADI
> +
> +	/* Restore host excfg.VS */
> +	csrrd	t0, LOONGARCH_CSR_ECFG
> +	ld.d	t1, a2, KVM_ARCH_HECFG
> +	or	t0, t0, t1
> +	csrwr	t0, LOONGARCH_CSR_ECFG
> +
> +	/* Restore host eentry */
> +	ld.d	t0, a2, KVM_ARCH_HEENTRY
> +	csrwr	t0, LOONGARCH_CSR_EENTRY
> +
> +	/* restore host pgd table */
> +	ld.d    t0, a2, KVM_ARCH_HPGD
> +	csrwr   t0, LOONGARCH_CSR_PGDL
> +
> +	/*
> +	 * Disable PGM bit to enter root mode by default with next ertn
> +	 */
> +	ori	t0, zero, CSR_GSTAT_PVM
And here.
> +	csrxchg	zero, t0, LOONGARCH_CSR_GSTAT
> +	/*
> +	 * Clear GTLBC.TGID field
> +	 *       0: for root  tlb update in future tlb instr
> +	 *  others: for guest tlb update like gpa to hpa in future tlb instr
> +	 */
> +	csrrd	t0, LOONGARCH_CSR_GTLBC
> +	bstrins.w	t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
> +	csrwr	t0, LOONGARCH_CSR_GTLBC
> +	ld.d	tp, a2, KVM_ARCH_HTP
> +	ld.d	sp, a2, KVM_ARCH_HSP
> +	/* restore per cpu register */
> +	ld.d	u0, a2, KVM_ARCH_HPERCPU
> +	addi.d	sp, sp, -PT_SIZE
> +
> +	/* Prepare handle exception */
> +	or	a0, s0, zero
> +	or	a1, s1, zero
Similarly "move X, Y" should be clearer here.
> +	ld.d	t8, a2, KVM_ARCH_HANDLE_EXIT
> +	jirl	ra, t8, 0
> +
> +	or	a2, s1, zero
> +	addi.d	a2, a2, KVM_VCPU_ARCH
> +
> +	/* resume host when ret <= 0 */
> +	bge	zero, a0, ret_to_host
"blez a0, ret_to_host"
> +
> +	/*
> +	 * return to guest
> +	 * save per cpu register again, maybe switched to another cpu
> +	 */
> +	st.d	u0, a2, KVM_ARCH_HPERCPU
> +
> +	/* Save kvm_vcpu to kscratch */
> +	csrwr	s1, KVM_VCPU_KS
> +	kvm_switch_to_guest
> +
> +ret_to_host:
> +	ld.d    a2, a2, KVM_ARCH_HSP
> +	addi.d  a2, a2, -PT_SIZE
> +	kvm_restore_host_gpr    a2
> +	jr      ra
> +
> +SYM_INNER_LABEL(kvm_vector_entry_end, SYM_L_LOCAL)
> +SYM_CODE_END(kvm_vector_entry)
> +
> +/*
> + * int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu)
> + *
> + * @register_param:
> + *  a0: kvm_run* run
> + *  a1: kvm_vcpu* vcpu
> + */
> +SYM_FUNC_START(kvm_enter_guest)
> +	/* allocate space in stack bottom */
> +	addi.d	a2, sp, -PT_SIZE
> +	/* save host gprs */
> +	kvm_save_host_gpr a2
> +
> +	/* save host crmd,prmd csr to stack */
> +	csrrd	a3, LOONGARCH_CSR_CRMD
> +	st.d	a3, a2, PT_CRMD
> +	csrrd	a3, LOONGARCH_CSR_PRMD
> +	st.d	a3, a2, PT_PRMD
> +
> +	addi.d	a2, a1, KVM_VCPU_ARCH
> +	st.d	sp, a2, KVM_ARCH_HSP
> +	st.d	tp, a2, KVM_ARCH_HTP
> +	/* Save per cpu register */
> +	st.d	u0, a2, KVM_ARCH_HPERCPU
> +
> +	/* Save kvm_vcpu to kscratch */
> +	csrwr	a1, KVM_VCPU_KS
> +	kvm_switch_to_guest
> +SYM_INNER_LABEL(kvm_enter_guest_end, SYM_L_LOCAL)
> +SYM_FUNC_END(kvm_enter_guest)
> +
> +SYM_FUNC_START(kvm_save_fpu)
> +	fpu_save_csr	a0 t1
> +	fpu_save_double a0 t1
> +	fpu_save_cc	a0 t1 t2
> +	jr              ra
> +SYM_FUNC_END(kvm_save_fpu)
> +
> +SYM_FUNC_START(kvm_restore_fpu)
> +	fpu_restore_double a0 t1
> +	fpu_restore_csr    a0 t1
This needs to become "fpu_restore_csr a0 t1 t2" after commit 
bd3c5798484a ("LoongArch: Add Loongson Binary Translation (LBT) 
extension support"), which is slated for Linux 6.6 and is already in 
linux-next.
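That is, the restore path would then read (sketch, assuming the macro's three-operand form from that commit):

```
SYM_FUNC_START(kvm_restore_fpu)
	fpu_restore_double a0 t1
	fpu_restore_csr    a0 t1 t2
	fpu_restore_cc	   a0 t1 t2
	jr                 ra
SYM_FUNC_END(kvm_restore_fpu)
```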
> +	fpu_restore_cc	   a0 t1 t2
> +	jr                 ra
> +SYM_FUNC_END(kvm_restore_fpu)
> +
> +	.section ".rodata"
> +SYM_DATA(kvm_vector_size, .quad kvm_vector_entry_end - kvm_vector_entry)
> +SYM_DATA(kvm_enter_guest_size, .quad kvm_enter_guest_end - kvm_enter_guest)

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile
  2023-08-31  8:30 ` [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
@ 2023-09-07 20:10   ` WANG Xuerui
  2023-09-08  1:40     ` Huacai Chen
  2023-09-12  9:47     ` zhaotianrui
  2023-09-11  7:30   ` WANG Xuerui
  1 sibling, 2 replies; 56+ messages in thread
From: WANG Xuerui @ 2023-09-07 20:10 UTC (permalink / raw)
  To: Tianrui Zhao, linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, Greg Kroah-Hartman, loongarch,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo,
	Xi Ruoyao, kernel test robot


On 8/31/23 16:30, Tianrui Zhao wrote:
> Enable LoongArch kvm config and add the makefile to support build kvm
> module.
>
> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> Reported-by: kernel test robot <lkp@intel.com>
> Link: https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>   arch/loongarch/Kbuild                      |  1 +
>   arch/loongarch/Kconfig                     |  3 ++
>   arch/loongarch/configs/loongson3_defconfig |  2 +
>   arch/loongarch/kvm/Kconfig                 | 45 ++++++++++++++++++++++
>   arch/loongarch/kvm/Makefile                | 22 +++++++++++
>   5 files changed, 73 insertions(+)
>   create mode 100644 arch/loongarch/kvm/Kconfig
>   create mode 100644 arch/loongarch/kvm/Makefile
>
> diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild
> index b01f5cdb27..40be8a1696 100644
> --- a/arch/loongarch/Kbuild
> +++ b/arch/loongarch/Kbuild
> @@ -2,6 +2,7 @@ obj-y += kernel/
>   obj-y += mm/
>   obj-y += net/
>   obj-y += vdso/
> +obj-y += kvm/
Do we want to keep the list alphabetically sorted here?
>   
>   # for cleaning
>   subdir- += boot
> diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
> index ecf282dee5..7f2f7ccc76 100644
> --- a/arch/loongarch/Kconfig
> +++ b/arch/loongarch/Kconfig
> @@ -123,6 +123,7 @@ config LOONGARCH
>   	select HAVE_KPROBES
>   	select HAVE_KPROBES_ON_FTRACE
>   	select HAVE_KRETPROBES
> +	select HAVE_KVM
>   	select HAVE_MOD_ARCH_SPECIFIC
>   	select HAVE_NMI
>   	select HAVE_PCI
> @@ -650,3 +651,5 @@ source "kernel/power/Kconfig"
>   source "drivers/acpi/Kconfig"
>   
>   endmenu
> +
> +source "arch/loongarch/kvm/Kconfig"
> diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
> index d64849b4cb..7acb4ae7af 100644
> --- a/arch/loongarch/configs/loongson3_defconfig
> +++ b/arch/loongarch/configs/loongson3_defconfig
> @@ -63,6 +63,8 @@ CONFIG_EFI_ZBOOT=y
>   CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
>   CONFIG_EFI_CAPSULE_LOADER=m
>   CONFIG_EFI_TEST=m
> +CONFIG_VIRTUALIZATION=y
> +CONFIG_KVM=m
>   CONFIG_MODULES=y
>   CONFIG_MODULE_FORCE_LOAD=y
>   CONFIG_MODULE_UNLOAD=y
> diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig
> new file mode 100644
> index 0000000000..bf7d6e7cde
> --- /dev/null
> +++ b/arch/loongarch/kvm/Kconfig
> @@ -0,0 +1,45 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# KVM configuration
> +#
> +
> +source "virt/kvm/Kconfig"
> +
> +menuconfig VIRTUALIZATION
> +	bool "Virtualization"
> +	help
> +	  Say Y here to get to see options for using your Linux host to run
> +	  other operating systems inside virtual machines (guests).
> +	  This option alone does not add any kernel code.
> +
> +	  If you say N, all options in this submenu will be skipped and
> +	  disabled.
> +
> +if VIRTUALIZATION
> +
> +config AS_HAS_LVZ_EXTENSION
> +	def_bool $(as-instr,hvcl 0)
> +
> +config KVM
> +	tristate "Kernel-based Virtual Machine (KVM) support"
> +	depends on HAVE_KVM
> +	depends on AS_HAS_LVZ_EXTENSION
> +	select MMU_NOTIFIER
> +	select ANON_INODES
> +	select PREEMPT_NOTIFIERS
> +	select KVM_MMIO
> +	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
> +	select KVM_GENERIC_HARDWARE_ENABLING
> +	select KVM_XFER_TO_GUEST_WORK
> +	select HAVE_KVM_DIRTY_RING_ACQ_REL
> +	select HAVE_KVM_VCPU_ASYNC_IOCTL
> +	select HAVE_KVM_EVENTFD
> +	select SRCU
Make the list of selects also alphabetically sorted?
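For illustration, an alphabetically sorted version of the block might look like this (sketch; same entries, reordered only):

```kconfig
config KVM
	tristate "Kernel-based Virtual Machine (KVM) support"
	depends on AS_HAS_LVZ_EXTENSION
	depends on HAVE_KVM
	select ANON_INODES
	select HAVE_KVM_DIRTY_RING_ACQ_REL
	select HAVE_KVM_EVENTFD
	select HAVE_KVM_VCPU_ASYNC_IOCTL
	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
	select KVM_GENERIC_HARDWARE_ENABLING
	select KVM_MMIO
	select KVM_XFER_TO_GUEST_WORK
	select MMU_NOTIFIER
	select PREEMPT_NOTIFIERS
	select SRCU
```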
> +	help
> +	  Support hosting virtualized guest machines using hardware
> +	  virtualization extensions. You will need a fairly processor
> +	  equipped with virtualization extensions.

The word "fairly" seems extraneous here, and can be simply dropped.

(I suppose you forgot to delete it after tweaking the original sentence, 
that came from arch/x86/kvm: "You will need a fairly recent processor 
..." -- all LoongArch processors are recent!)

> +
> +	  If unsure, say N.
> +
> +endif # VIRTUALIZATION
> diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
> new file mode 100644
> index 0000000000..2335e873a6
> --- /dev/null
> +++ b/arch/loongarch/kvm/Makefile
> @@ -0,0 +1,22 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Makefile for LOONGARCH KVM support
"LoongArch" -- you may want to check the entire patch series for such 
ALL-CAPS references to LoongArch in natural language paragraphs, they 
all want to be spelled "LoongArch".
> +#
> +
> +ccflags-y += -I $(srctree)/$(src)
> +
> +include $(srctree)/virt/kvm/Makefile.kvm
> +
> +obj-$(CONFIG_KVM) += kvm.o
> +
> +kvm-y += main.o
> +kvm-y += vm.o
> +kvm-y += vmid.o
> +kvm-y += tlb.o
> +kvm-y += mmu.o
> +kvm-y += vcpu.o
> +kvm-y += exit.o
> +kvm-y += interrupt.o
> +kvm-y += timer.o
> +kvm-y += switch.o
> +kvm-y += csr_ops.o
I'd suggest sorting this list too to better avoid editing conflicts in 
the future.
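For example, the sorted list would read (sketch; same objects, reordered only):

```make
kvm-y += csr_ops.o
kvm-y += exit.o
kvm-y += interrupt.o
kvm-y += main.o
kvm-y += mmu.o
kvm-y += switch.o
kvm-y += timer.o
kvm-y += tlb.o
kvm-y += vcpu.o
kvm-y += vm.o
kvm-y += vmid.o
```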

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile
  2023-09-07 20:10   ` WANG Xuerui
@ 2023-09-08  1:40     ` Huacai Chen
  2023-09-08  1:49       ` bibo mao
  2023-09-12  9:47     ` zhaotianrui
  1 sibling, 1 reply; 56+ messages in thread
From: Huacai Chen @ 2023-09-08  1:40 UTC (permalink / raw)
  To: WANG Xuerui
  Cc: Tianrui Zhao, linux-kernel, kvm, Paolo Bonzini,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao,
	kernel test robot

On Fri, Sep 8, 2023 at 4:10 AM WANG Xuerui <kernel@xen0n.name> wrote:
>
>
> On 8/31/23 16:30, Tianrui Zhao wrote:
> > Enable LoongArch kvm config and add the makefile to support build kvm
> > module.
> >
> > Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> > Reported-by: kernel test robot <lkp@intel.com>
> > Link: https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
> > Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> > ---
> >   arch/loongarch/Kbuild                      |  1 +
> >   arch/loongarch/Kconfig                     |  3 ++
> >   arch/loongarch/configs/loongson3_defconfig |  2 +
> >   arch/loongarch/kvm/Kconfig                 | 45 ++++++++++++++++++++++
> >   arch/loongarch/kvm/Makefile                | 22 +++++++++++
> >   5 files changed, 73 insertions(+)
> >   create mode 100644 arch/loongarch/kvm/Kconfig
> >   create mode 100644 arch/loongarch/kvm/Makefile
> >
> > diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild
> > index b01f5cdb27..40be8a1696 100644
> > --- a/arch/loongarch/Kbuild
> > +++ b/arch/loongarch/Kbuild
> > @@ -2,6 +2,7 @@ obj-y += kernel/
> >   obj-y += mm/
> >   obj-y += net/
> >   obj-y += vdso/
> > +obj-y += kvm/
> Do we want to keep the list alphabetically sorted here?
The kvm directory can come last, but I'm afraid it should be

ifdef CONFIG_KVM
obj-y += kvm/
endif

If such a guard is unnecessary, then I agree to use alphabetical order.

Huacai

> >
> >   # for cleaning
> >   subdir- += boot
> > diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
> > index ecf282dee5..7f2f7ccc76 100644
> > --- a/arch/loongarch/Kconfig
> > +++ b/arch/loongarch/Kconfig
> > @@ -123,6 +123,7 @@ config LOONGARCH
> >       select HAVE_KPROBES
> >       select HAVE_KPROBES_ON_FTRACE
> >       select HAVE_KRETPROBES
> > +     select HAVE_KVM
> >       select HAVE_MOD_ARCH_SPECIFIC
> >       select HAVE_NMI
> >       select HAVE_PCI
> > @@ -650,3 +651,5 @@ source "kernel/power/Kconfig"
> >   source "drivers/acpi/Kconfig"
> >
> >   endmenu
> > +
> > +source "arch/loongarch/kvm/Kconfig"
> > diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
> > index d64849b4cb..7acb4ae7af 100644
> > --- a/arch/loongarch/configs/loongson3_defconfig
> > +++ b/arch/loongarch/configs/loongson3_defconfig
> > @@ -63,6 +63,8 @@ CONFIG_EFI_ZBOOT=y
> >   CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
> >   CONFIG_EFI_CAPSULE_LOADER=m
> >   CONFIG_EFI_TEST=m
> > +CONFIG_VIRTUALIZATION=y
> > +CONFIG_KVM=m
> >   CONFIG_MODULES=y
> >   CONFIG_MODULE_FORCE_LOAD=y
> >   CONFIG_MODULE_UNLOAD=y
> > diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig
> > new file mode 100644
> > index 0000000000..bf7d6e7cde
> > --- /dev/null
> > +++ b/arch/loongarch/kvm/Kconfig
> > @@ -0,0 +1,45 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +#
> > +# KVM configuration
> > +#
> > +
> > +source "virt/kvm/Kconfig"
> > +
> > +menuconfig VIRTUALIZATION
> > +     bool "Virtualization"
> > +     help
> > +       Say Y here to get to see options for using your Linux host to run
> > +       other operating systems inside virtual machines (guests).
> > +       This option alone does not add any kernel code.
> > +
> > +       If you say N, all options in this submenu will be skipped and
> > +       disabled.
> > +
> > +if VIRTUALIZATION
> > +
> > +config AS_HAS_LVZ_EXTENSION
> > +     def_bool $(as-instr,hvcl 0)
> > +
> > +config KVM
> > +     tristate "Kernel-based Virtual Machine (KVM) support"
> > +     depends on HAVE_KVM
> > +     depends on AS_HAS_LVZ_EXTENSION
> > +     select MMU_NOTIFIER
> > +     select ANON_INODES
> > +     select PREEMPT_NOTIFIERS
> > +     select KVM_MMIO
> > +     select KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > +     select KVM_GENERIC_HARDWARE_ENABLING
> > +     select KVM_XFER_TO_GUEST_WORK
> > +     select HAVE_KVM_DIRTY_RING_ACQ_REL
> > +     select HAVE_KVM_VCPU_ASYNC_IOCTL
> > +     select HAVE_KVM_EVENTFD
> > +     select SRCU
> Make the list of selects also alphabetically sorted?
> > +     help
> > +       Support hosting virtualized guest machines using hardware
> > +       virtualization extensions. You will need a fairly processor
> > +       equipped with virtualization extensions.
>
> The word "fairly" seems extraneous here, and can be simply dropped.
>
> (I suppose you forgot to delete it after tweaking the original sentence,
> that came from arch/x86/kvm: "You will need a fairly recent processor
> ..." -- all LoongArch processors are recent!)
>
> > +
> > +       If unsure, say N.
> > +
> > +endif # VIRTUALIZATION
> > diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
> > new file mode 100644
> > index 0000000000..2335e873a6
> > --- /dev/null
> > +++ b/arch/loongarch/kvm/Makefile
> > @@ -0,0 +1,22 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +#
> > +# Makefile for LOONGARCH KVM support
> "LoongArch" -- you may want to check the entire patch series for such
> ALL-CAPS references to LoongArch in natural language paragraphs, they
> all want to be spelled "LoongArch".
> > +#
> > +
> > +ccflags-y += -I $(srctree)/$(src)
> > +
> > +include $(srctree)/virt/kvm/Makefile.kvm
> > +
> > +obj-$(CONFIG_KVM) += kvm.o
> > +
> > +kvm-y += main.o
> > +kvm-y += vm.o
> > +kvm-y += vmid.o
> > +kvm-y += tlb.o
> > +kvm-y += mmu.o
> > +kvm-y += vcpu.o
> > +kvm-y += exit.o
> > +kvm-y += interrupt.o
> > +kvm-y += timer.o
> > +kvm-y += switch.o
> > +kvm-y += csr_ops.o
> I'd suggest sorting this list too to better avoid editing conflicts in
> the future.
>
> --
> WANG "xen0n" Xuerui
>
> Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/
>
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile
  2023-09-08  1:40     ` Huacai Chen
@ 2023-09-08  1:49       ` bibo mao
  2023-09-08  1:54         ` Huacai Chen
  0 siblings, 1 reply; 56+ messages in thread
From: bibo mao @ 2023-09-08  1:49 UTC (permalink / raw)
  To: Huacai Chen, WANG Xuerui
  Cc: Tianrui Zhao, linux-kernel, kvm, Paolo Bonzini,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, Xi Ruoyao, kernel test robot



在 2023/9/8 09:40, Huacai Chen 写道:
> On Fri, Sep 8, 2023 at 4:10 AM WANG Xuerui <kernel@xen0n.name> wrote:
>>
>>
>> On 8/31/23 16:30, Tianrui Zhao wrote:
>>> Enable LoongArch kvm config and add the makefile to support build kvm
>>> module.
>>>
>>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>>> Reported-by: kernel test robot <lkp@intel.com>
>>> Link: https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
>>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>>> ---
>>>   arch/loongarch/Kbuild                      |  1 +
>>>   arch/loongarch/Kconfig                     |  3 ++
>>>   arch/loongarch/configs/loongson3_defconfig |  2 +
>>>   arch/loongarch/kvm/Kconfig                 | 45 ++++++++++++++++++++++
>>>   arch/loongarch/kvm/Makefile                | 22 +++++++++++
>>>   5 files changed, 73 insertions(+)
>>>   create mode 100644 arch/loongarch/kvm/Kconfig
>>>   create mode 100644 arch/loongarch/kvm/Makefile
>>>
>>> diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild
>>> index b01f5cdb27..40be8a1696 100644
>>> --- a/arch/loongarch/Kbuild
>>> +++ b/arch/loongarch/Kbuild
>>> @@ -2,6 +2,7 @@ obj-y += kernel/
>>>   obj-y += mm/
>>>   obj-y += net/
>>>   obj-y += vdso/
>>> +obj-y += kvm/
>> Do we want to keep the list alphabetically sorted here?
> kvm directory can be at last, but I'm afraid that it should be
> 
> ifdef CONFIG_KVM
> obj-y += kvm/
> endif
Agreed; how about doing it like the other architectures:
obj-$(CONFIG_KVM) += kvm/
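With that, the tail of arch/loongarch/Kbuild would become something like (sketch):

```make
obj-y += kernel/
obj-y += mm/
obj-y += net/
obj-y += vdso/
obj-$(CONFIG_KVM) += kvm/

# for cleaning
subdir- += boot
```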
> 
> If such a guard is unnecessary, then I agree to use alphabetical order.
Is there any document about "alphabetical order"? I checked the Kbuild
files in other directories, and they are not sorted alphabetically.

$ cat  arch/riscv/Kbuild 
obj-y += kernel/ mm/ net/
obj-$(CONFIG_BUILTIN_DTB) += boot/dts/
obj-y += errata/
obj-$(CONFIG_KVM) += kvm/
obj-$(CONFIG_ARCH_HAS_KEXEC_PURGATORY) += purgatory/
# for cleaning
subdir- += boot

$ cat arch/arm64/Kbuild 
obj-y                   += kernel/ mm/ net/
obj-$(CONFIG_KVM)       += kvm/
obj-$(CONFIG_XEN)       += xen/
obj-$(subst m,y,$(CONFIG_HYPERV))       += hyperv/
obj-$(CONFIG_CRYPTO)    += crypto/

# for cleaning
subdir- += boot


Regards
Bibo Mao
> 
> Huacai
> 
>>>
>>>   # for cleaning
>>>   subdir- += boot
>>> diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
>>> index ecf282dee5..7f2f7ccc76 100644
>>> --- a/arch/loongarch/Kconfig
>>> +++ b/arch/loongarch/Kconfig
>>> @@ -123,6 +123,7 @@ config LOONGARCH
>>>       select HAVE_KPROBES
>>>       select HAVE_KPROBES_ON_FTRACE
>>>       select HAVE_KRETPROBES
>>> +     select HAVE_KVM
>>>       select HAVE_MOD_ARCH_SPECIFIC
>>>       select HAVE_NMI
>>>       select HAVE_PCI
>>> @@ -650,3 +651,5 @@ source "kernel/power/Kconfig"
>>>   source "drivers/acpi/Kconfig"
>>>
>>>   endmenu
>>> +
>>> +source "arch/loongarch/kvm/Kconfig"
>>> diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
>>> index d64849b4cb..7acb4ae7af 100644
>>> --- a/arch/loongarch/configs/loongson3_defconfig
>>> +++ b/arch/loongarch/configs/loongson3_defconfig
>>> @@ -63,6 +63,8 @@ CONFIG_EFI_ZBOOT=y
>>>   CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
>>>   CONFIG_EFI_CAPSULE_LOADER=m
>>>   CONFIG_EFI_TEST=m
>>> +CONFIG_VIRTUALIZATION=y
>>> +CONFIG_KVM=m
>>>   CONFIG_MODULES=y
>>>   CONFIG_MODULE_FORCE_LOAD=y
>>>   CONFIG_MODULE_UNLOAD=y
>>> diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig
>>> new file mode 100644
>>> index 0000000000..bf7d6e7cde
>>> --- /dev/null
>>> +++ b/arch/loongarch/kvm/Kconfig
>>> @@ -0,0 +1,45 @@
>>> +# SPDX-License-Identifier: GPL-2.0
>>> +#
>>> +# KVM configuration
>>> +#
>>> +
>>> +source "virt/kvm/Kconfig"
>>> +
>>> +menuconfig VIRTUALIZATION
>>> +     bool "Virtualization"
>>> +     help
>>> +       Say Y here to get to see options for using your Linux host to run
>>> +       other operating systems inside virtual machines (guests).
>>> +       This option alone does not add any kernel code.
>>> +
>>> +       If you say N, all options in this submenu will be skipped and
>>> +       disabled.
>>> +
>>> +if VIRTUALIZATION
>>> +
>>> +config AS_HAS_LVZ_EXTENSION
>>> +     def_bool $(as-instr,hvcl 0)
>>> +
>>> +config KVM
>>> +     tristate "Kernel-based Virtual Machine (KVM) support"
>>> +     depends on HAVE_KVM
>>> +     depends on AS_HAS_LVZ_EXTENSION
>>> +     select MMU_NOTIFIER
>>> +     select ANON_INODES
>>> +     select PREEMPT_NOTIFIERS
>>> +     select KVM_MMIO
>>> +     select KVM_GENERIC_DIRTYLOG_READ_PROTECT
>>> +     select KVM_GENERIC_HARDWARE_ENABLING
>>> +     select KVM_XFER_TO_GUEST_WORK
>>> +     select HAVE_KVM_DIRTY_RING_ACQ_REL
>>> +     select HAVE_KVM_VCPU_ASYNC_IOCTL
>>> +     select HAVE_KVM_EVENTFD
>>> +     select SRCU
>> Make the list of selects also alphabetically sorted?
>>> +     help
>>> +       Support hosting virtualized guest machines using hardware
>>> +       virtualization extensions. You will need a fairly processor
>>> +       equipped with virtualization extensions.
>>
>> The word "fairly" seems extraneous here, and can be simply dropped.
>>
>> (I suppose you forgot to delete it after tweaking the original sentence,
>> that came from arch/x86/kvm: "You will need a fairly recent processor
>> ..." -- all LoongArch processors are recent!)
>>
>>> +
>>> +       If unsure, say N.
>>> +
>>> +endif # VIRTUALIZATION
>>> diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
>>> new file mode 100644
>>> index 0000000000..2335e873a6
>>> --- /dev/null
>>> +++ b/arch/loongarch/kvm/Makefile
>>> @@ -0,0 +1,22 @@
>>> +# SPDX-License-Identifier: GPL-2.0
>>> +#
>>> +# Makefile for LOONGARCH KVM support
>> "LoongArch" -- you may want to check the entire patch series for such
>> ALL-CAPS references to LoongArch in natural language paragraphs, they
>> all want to be spelled "LoongArch".
>>> +#
>>> +
>>> +ccflags-y += -I $(srctree)/$(src)
>>> +
>>> +include $(srctree)/virt/kvm/Makefile.kvm
>>> +
>>> +obj-$(CONFIG_KVM) += kvm.o
>>> +
>>> +kvm-y += main.o
>>> +kvm-y += vm.o
>>> +kvm-y += vmid.o
>>> +kvm-y += tlb.o
>>> +kvm-y += mmu.o
>>> +kvm-y += vcpu.o
>>> +kvm-y += exit.o
>>> +kvm-y += interrupt.o
>>> +kvm-y += timer.o
>>> +kvm-y += switch.o
>>> +kvm-y += csr_ops.o
>> I'd suggest sorting this list too to better avoid editing conflicts in
>> the future.
>>
>> --
>> WANG "xen0n" Xuerui
>>
>> Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/
>>
>>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile
  2023-09-08  1:49       ` bibo mao
@ 2023-09-08  1:54         ` Huacai Chen
  0 siblings, 0 replies; 56+ messages in thread
From: Huacai Chen @ 2023-09-08  1:54 UTC (permalink / raw)
  To: bibo mao
  Cc: WANG Xuerui, Tianrui Zhao, linux-kernel, kvm, Paolo Bonzini,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, Xi Ruoyao, kernel test robot

On Fri, Sep 8, 2023 at 9:49 AM bibo mao <maobibo@loongson.cn> wrote:
>
>
>
> 在 2023/9/8 09:40, Huacai Chen 写道:
> > On Fri, Sep 8, 2023 at 4:10 AM WANG Xuerui <kernel@xen0n.name> wrote:
> >>
> >>
> >> On 8/31/23 16:30, Tianrui Zhao wrote:
> >>> Enable LoongArch kvm config and add the makefile to support build kvm
> >>> module.
> >>>
> >>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> >>> Reported-by: kernel test robot <lkp@intel.com>
> >>> Link: https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
> >>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> >>> ---
> >>>   arch/loongarch/Kbuild                      |  1 +
> >>>   arch/loongarch/Kconfig                     |  3 ++
> >>>   arch/loongarch/configs/loongson3_defconfig |  2 +
> >>>   arch/loongarch/kvm/Kconfig                 | 45 ++++++++++++++++++++++
> >>>   arch/loongarch/kvm/Makefile                | 22 +++++++++++
> >>>   5 files changed, 73 insertions(+)
> >>>   create mode 100644 arch/loongarch/kvm/Kconfig
> >>>   create mode 100644 arch/loongarch/kvm/Makefile
> >>>
> >>> diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild
> >>> index b01f5cdb27..40be8a1696 100644
> >>> --- a/arch/loongarch/Kbuild
> >>> +++ b/arch/loongarch/Kbuild
> >>> @@ -2,6 +2,7 @@ obj-y += kernel/
> >>>   obj-y += mm/
> >>>   obj-y += net/
> >>>   obj-y += vdso/
> >>> +obj-y += kvm/
> >> Do we want to keep the list alphabetically sorted here?
> > kvm directory can be at last, but I'm afraid that it should be
> >
> > ifdef CONFIG_KVM
> > obj-y += kvm/
> > endif
> Agree, how about this like other architectures.
> obj-$(CONFIG_KVM) += kvm/
This is better; I agree, and it can come last because it is in a
different format from the others.

> >
> > If such a guard is unnecessary, then I agree to use alphabetical order.
> Is there any document about "alphabetical order"? I check Kbuild in other
> directories, it is not sorted by alphabetical order.
Yes, there is no hard rule, but alphabetical order is better in some
cases, e.g., it helps avoid duplicate lines.

Huacai
>
> $ cat  arch/riscv/Kbuild
> obj-y += kernel/ mm/ net/
> obj-$(CONFIG_BUILTIN_DTB) += boot/dts/
> obj-y += errata/
> obj-$(CONFIG_KVM) += kvm/
> obj-$(CONFIG_ARCH_HAS_KEXEC_PURGATORY) += purgatory/
> # for cleaning
> subdir- += boot
>
> $ cat arch/arm64/Kbuild
> obj-y                   += kernel/ mm/ net/
> obj-$(CONFIG_KVM)       += kvm/
> obj-$(CONFIG_XEN)       += xen/
> obj-$(subst m,y,$(CONFIG_HYPERV))       += hyperv/
> obj-$(CONFIG_CRYPTO)    += crypto/
>
> # for cleaning
> subdir- += boot
>
>
> Regards
> Bibo Mao
> >
> > Huacai
> >
> >>>
> >>>   # for cleaning
> >>>   subdir- += boot
> >>> diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
> >>> index ecf282dee5..7f2f7ccc76 100644
> >>> --- a/arch/loongarch/Kconfig
> >>> +++ b/arch/loongarch/Kconfig
> >>> @@ -123,6 +123,7 @@ config LOONGARCH
> >>>       select HAVE_KPROBES
> >>>       select HAVE_KPROBES_ON_FTRACE
> >>>       select HAVE_KRETPROBES
> >>> +     select HAVE_KVM
> >>>       select HAVE_MOD_ARCH_SPECIFIC
> >>>       select HAVE_NMI
> >>>       select HAVE_PCI
> >>> @@ -650,3 +651,5 @@ source "kernel/power/Kconfig"
> >>>   source "drivers/acpi/Kconfig"
> >>>
> >>>   endmenu
> >>> +
> >>> +source "arch/loongarch/kvm/Kconfig"
> >>> diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
> >>> index d64849b4cb..7acb4ae7af 100644
> >>> --- a/arch/loongarch/configs/loongson3_defconfig
> >>> +++ b/arch/loongarch/configs/loongson3_defconfig
> >>> @@ -63,6 +63,8 @@ CONFIG_EFI_ZBOOT=y
> >>>   CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
> >>>   CONFIG_EFI_CAPSULE_LOADER=m
> >>>   CONFIG_EFI_TEST=m
> >>> +CONFIG_VIRTUALIZATION=y
> >>> +CONFIG_KVM=m
> >>>   CONFIG_MODULES=y
> >>>   CONFIG_MODULE_FORCE_LOAD=y
> >>>   CONFIG_MODULE_UNLOAD=y
> >>> diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig
> >>> new file mode 100644
> >>> index 0000000000..bf7d6e7cde
> >>> --- /dev/null
> >>> +++ b/arch/loongarch/kvm/Kconfig
> >>> @@ -0,0 +1,45 @@
> >>> +# SPDX-License-Identifier: GPL-2.0
> >>> +#
> >>> +# KVM configuration
> >>> +#
> >>> +
> >>> +source "virt/kvm/Kconfig"
> >>> +
> >>> +menuconfig VIRTUALIZATION
> >>> +     bool "Virtualization"
> >>> +     help
> >>> +       Say Y here to get to see options for using your Linux host to run
> >>> +       other operating systems inside virtual machines (guests).
> >>> +       This option alone does not add any kernel code.
> >>> +
> >>> +       If you say N, all options in this submenu will be skipped and
> >>> +       disabled.
> >>> +
> >>> +if VIRTUALIZATION
> >>> +
> >>> +config AS_HAS_LVZ_EXTENSION
> >>> +     def_bool $(as-instr,hvcl 0)
> >>> +
> >>> +config KVM
> >>> +     tristate "Kernel-based Virtual Machine (KVM) support"
> >>> +     depends on HAVE_KVM
> >>> +     depends on AS_HAS_LVZ_EXTENSION
> >>> +     select MMU_NOTIFIER
> >>> +     select ANON_INODES
> >>> +     select PREEMPT_NOTIFIERS
> >>> +     select KVM_MMIO
> >>> +     select KVM_GENERIC_DIRTYLOG_READ_PROTECT
> >>> +     select KVM_GENERIC_HARDWARE_ENABLING
> >>> +     select KVM_XFER_TO_GUEST_WORK
> >>> +     select HAVE_KVM_DIRTY_RING_ACQ_REL
> >>> +     select HAVE_KVM_VCPU_ASYNC_IOCTL
> >>> +     select HAVE_KVM_EVENTFD
> >>> +     select SRCU
> >> Make the list of selects also alphabetically sorted?
> >>> +     help
> >>> +       Support hosting virtualized guest machines using hardware
> >>> +       virtualization extensions. You will need a fairly processor
> >>> +       equipped with virtualization extensions.
> >>
> >> The word "fairly" seems extraneous here, and can be simply dropped.
> >>
> >> (I suppose you forgot to delete it after tweaking the original sentence,
> >> that came from arch/x86/kvm: "You will need a fairly recent processor
> >> ..." -- all LoongArch processors are recent!)
> >>
> >>> +
> >>> +       If unsure, say N.
> >>> +
> >>> +endif # VIRTUALIZATION
> >>> diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
> >>> new file mode 100644
> >>> index 0000000000..2335e873a6
> >>> --- /dev/null
> >>> +++ b/arch/loongarch/kvm/Makefile
> >>> @@ -0,0 +1,22 @@
> >>> +# SPDX-License-Identifier: GPL-2.0
> >>> +#
> >>> +# Makefile for LOONGARCH KVM support
> >> "LoongArch" -- you may want to check the entire patch series for such
> >> ALL-CAPS references to LoongArch in natural language paragraphs, they
> >> all want to be spelled "LoongArch".
> >>> +#
> >>> +
> >>> +ccflags-y += -I $(srctree)/$(src)
> >>> +
> >>> +include $(srctree)/virt/kvm/Makefile.kvm
> >>> +
> >>> +obj-$(CONFIG_KVM) += kvm.o
> >>> +
> >>> +kvm-y += main.o
> >>> +kvm-y += vm.o
> >>> +kvm-y += vmid.o
> >>> +kvm-y += tlb.o
> >>> +kvm-y += mmu.o
> >>> +kvm-y += vcpu.o
> >>> +kvm-y += exit.o
> >>> +kvm-y += interrupt.o
> >>> +kvm-y += timer.o
> >>> +kvm-y += switch.o
> >>> +kvm-y += csr_ops.o
> >> I'd suggest sorting this list too to better avoid editing conflicts in
> >> the future.
> >>
> >> --
> >> WANG "xen0n" Xuerui
> >>
> >> Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/
> >>
> >>
>
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 00/30] Add KVM LoongArch support
  2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
                   ` (29 preceding siblings ...)
  2023-08-31  8:30 ` [PATCH v20 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM Tianrui Zhao
@ 2023-09-11  4:02 ` Huacai Chen
  2023-09-11  9:34   ` zhaotianrui
  30 siblings, 1 reply; 56+ messages in thread
From: Huacai Chen @ 2023-09-11  4:02 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao

Hi, Tianrui,

I hope this can be the last review and the next version can get upstreamed. :)


On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>
> From: zhaotianrui <zhaotianrui@loongson.cn>
>
> This series adds KVM LoongArch support. Loongson 3A5000 supports hardware
> assisted virtualization. With cpu virtualization, there are separate
> hw-supported user mode and kernel mode in guest mode. With memory
> virtualization, there are two-level hw mmu table for guest mode and host
> mode. Also there is a separate hw cpu timer with constant frequency in
> guest mode, so that vm can migrate between hosts with different freq.
> Currently, we are able to boot LoongArch Linux Guests.
>
> Few key aspects of KVM LoongArch added by this series are:
> 1. Enable kvm hardware function when kvm module is loaded.
> 2. Implement VM and vcpu related ioctl interface such as vcpu create,
>    vcpu run etc. GET_ONE_REG/SET_ONE_REG ioctl commands are used to
>    get general registers one by one.
> 3. Hardware access about MMU, timer and csr are emulated in kernel.
> 4. Hardware such as mmio and iocsr devices are emulated in user space,
>    such as APIC, IPI, pci devices etc.
>
> The running environment of LoongArch virt machine:
> 1. Cross tools to build kernel and uefi:
>    $ wget https://github.com/loongson/build-tools/releases/download/2022.09.06/loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz
The cross tools should be updated to the latest version, because we need
binutils 2.41 now.

>    tar -vxf loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz  -C /opt
>    export PATH=/opt/cross-tools/bin:$PATH
>    export LD_LIBRARY_PATH=/opt/cross-tools/lib:$LD_LIBRARY_PATH
>    export LD_LIBRARY_PATH=/opt/cross-tools/loongarch64-unknown-linux-gnu/lib/:$LD_LIBRARY_PATH
> 2. This series is based on the linux source code:
>    https://github.com/loongson/linux-loongarch-kvm
Please update the base to at least v6.6-rc1.

>    Build command:
>    git checkout kvm-loongarch
>    make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu- loongson3_defconfig
>    make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu-
> 3. QEMU hypervisor with LoongArch supported:
>    https://github.com/loongson/qemu
QEMU base should also be updated.

>    Build command:
>    git checkout kvm-loongarch
>    ./configure --target-list="loongarch64-softmmu"  --enable-kvm
>    make
> 4. Uefi bios of LoongArch virt machine:
>    Link: https://github.com/tianocore/edk2-platforms/tree/master/Platform/Loongson/LoongArchQemuPkg#readme
> 5. you can also access the binary files we have already build:
>    https://github.com/yangxiaojuan-loongson/qemu-binary
Update any binaries if needed, too.

I will do a full test after v21 of this series, and I hope this can
move things forward.


Huacai

> The command to boot loongarch virt machine:
>    $ qemu-system-loongarch64 -machine virt -m 4G -cpu la464 \
>    -smp 1 -bios QEMU_EFI.fd -kernel vmlinuz.efi -initrd ramdisk \
>    -serial stdio   -monitor telnet:localhost:4495,server,nowait \
>    -append "root=/dev/ram rdinit=/sbin/init console=ttyS0,115200" \
>    --nographic
>
> changes for v20:
> 1. Remove the binary encodings of virtualization instructions in
> insn_def.h and csr_ops.S and directly use the default csrrd,
> csrwr, csrxchg instructions. Also make CONFIG_KVM depend on
> AS_HAS_LVZ_EXTENSION, so KVM must be built with a binutils that
> already supports these instructions. This makes the LoongArch
> KVM code more maintainable and easier to read.
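>
> The Kconfig side of that dependency would look roughly like this (a
> sketch pieced together from this changelog and the v14 notes below,
> not the patch text itself):

```kconfig
config KVM
	tristate "Kernel-based Virtual Machine (KVM) support"
	# Hide KVM entirely when the assembler cannot encode LVZ
	# instructions, so C code may use gcsrrd/gcsrwr/gcsrxchg
	# mnemonics unconditionally.
	depends on AS_HAS_LVZ_EXTENSION
	select KVM_GENERIC_HARDWARE_ENABLING
```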
>
> changes for v19:
> 1. Use the common interface xfer_to_guest_mode_handle_work to
> check conditions before entering the guest.
> 2. Add vcpu dirty ring support.
>
> changes for v18:
> 1. Code cleanup for vcpu timer: remove unnecessary timer_period_ns,
> timer_bias, timer_dyn_bias variables in kvm_vcpu_arch and rename
> the stable_ktime_saved variable to expire.
> 2. Change the value of KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE to 40.
>
> changes for v17:
> 1. Add CONFIG_AS_HAS_LVZ_EXTENSION config option which depends on
> binutils that support LVZ assemble instruction.
> 2. Change kvm mmu related functions, such as renaming level2_ptw_pgd
> to kvm_ptw_pgd, replacing kvm_flush_range with the kvm_ptw_pgd
> pagewalk framework, replacing kvm_arch.gpa_mm with kvm_arch.pgd, and
> moving mark_page_dirty/kvm_set_pfn_dirty out of mmu_lock in kvm page
> fault handling.
> 3. Replace kvm_loongarch_interrupt with standard kvm_interrupt
> when injecting IRQ.
> 4. Replace vcpu_arch.last_exec_cpu with the existing vcpu.cpu, and
> remove kvm_arch.online_vcpus and kvm_arch.is_migrating.
> 5. Remove EXCCODE_TLBNR and EXCCODE_TLBNX in kvm exception table,
> since NR/NX bit is not set in kvm page fault handling.
>
> Changes for v16:
> 1. Free the allocated memory of vmcs and kvm_loongarch_ops in kvm
> module init/exit to avoid a memory leak.
> 2. Simplify the assembly code in switch.S that needs to be written
> with pseudo-instructions; the remaining instructions no longer need
> to be replaced.
> 3. Add kvm_{save,restore}_guest_gprs macros to replace the ld.d/st.d
> guest register instructions during vcpu world switch.
> 4. It is safer to disable irqs when flushing the guest tlb by gpa, so
> replace preempt_disable with local_irq_save in kvm_flush_tlb_gpa.
>
> Changes for v15:
> 1. Re-order some macros and variables in the LoongArch kvm headers,
> grouping those with related meanings together.
> 2. Put some function definitions on one line, as there is no need to
> split them.
> 3. Re-name some macros such as KVM_REG_LOONGARCH_GPR.
>
> Changes for v14:
> 1. Remove the macro CONFIG_KVM_GENERIC_HARDWARE_ENABLING in
> loongarch/kvm/main.c, as it is not useful.
> 2. Add select KVM_GENERIC_HARDWARE_ENABLING in loongarch/kvm/Kconfig,
> as it is used by virt/kvm.
> 3. Fix the LoongArch KVM source link in MAINTAINERS.
> 4. Improve LoongArch KVM documentation, such as add comment for
> LoongArch kvm_regs.
>
> Changes for v13:
> 1. Remove patch-28 "Implement probe virtualization when cpu init": the
> virtualization information about FPU, PMP, LSX in
> guest.options/options_dyn is not used and the gcfg reg value can be
> read in kvm_hardware_enable, so the previous cpu_probe_lvz function
> is removed.
> 2. Fix the vcpu_enable_cap interface: it should return -EINVAL directly,
> as the FPU cap is enabled by default and no other caps are supported now.
> 3. Replace the jirl instruction with jr when no return address is
> needed, and simplify the case HW0 ... HW7 statement in interrupt.c.
> 4. Rename host_stack,host_gp in kvm_vcpu_arch to host_sp,host_tp.
> 5. Remove 'cpu' parameter in _kvm_check_requests, as 'cpu' is not used,
> and remove 'cpu' parameter in kvm_check_vmid function, as it can get
> cpu number by itself.
>
> Changes for v12:
> 1. Improve the gcsr write/read/xchg interfaces to avoid the previous
> parse_r-style instruction construction and make the code easier to
> understand; they are implemented in asm/insn-def.h and the
> instructions consist of "opcode" "rj" "rd" "simm14" arguments.
> 2. Fix the maintainers list of LoongArch KVM.
>
> Changes for v11:
> 1. Add maintainers for LoongArch KVM.
>
> Changes for v10:
> 1. Fix grammatical problems in LoongArch documentation.
> 2. It is not necessary to save or restore the LOONGARCH_CSR_PGD when
> vcpu put and vcpu load, so we remove it.
>
> Changes for v9:
> 1. Apply the new defined interrupt number macros in loongarch.h to kvm,
> such as INT_SWI0, INT_HWI0, INT_TI, INT_IPI, etc. And remove the
> previous unused macros.
> 2. Remove unused variables in kvm_vcpu_arch, and reorder the variables
> to make them more standard.
>
> Changes for v8:
> 1. Adjust the cpu_data.guest.options structure, add the ases flag into
> it, and remove the previous guest.ases. We do this to keep consistent
> with host cpu_data.options structure.
> 2. Remove the "#include <asm/kvm_host.h>" in some files which also
> include the "<linux/kvm_host.h>". As linux/kvm_host.h already include
> the asm/kvm_host.h.
> 3. Fix some nonstandard spellings and grammar errors in comments, and
> slightly improve the code formatting.
>
> Changes for v7:
> 1. Fix the kvm_save/restore_hw_gcsr compiling warnings reported by
> kernel test robot. The report link is:
> https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
> 2. Fix loongarch kvm trace related compiling problems.
>
> Changes for v6:
> 1. Fix the Documentation/virt/kvm/api.rst compile warning about
> loongarch parts.
>
> Changes for v5:
> 1. Implement the get/set mp_state ioctl interface; only the
> KVM_MP_STATE_RUNNABLE state is supported for now, and other states
> will be added in the future. The state is also used when the vcpu
> runs the idle instruction: once the state changes to RUNNABLE, the
> vcpu can be woken up.
> 2. Supplement kvm document about loongarch-specific part, such as add
> api introduction for GET/SET_ONE_REG, GET/SET_FPU, GET/SET_MP_STATE,
> etc.
> 3. Improve the kvm_switch_to_guest function in switch.S: remove the
> previous tmp, tmp1 arguments and replace them with the t0, t1 registers.
>
> Changes for v4:
> 1. Add a csr_need_update flag in _vcpu_put: most csr registers stay
> unchanged across a process context switch, so there is no need to
> update them every time; updating is only required when the soft csr
> differs from hardware. That is to say, all csrs should be updated after
> the vcpu enters the guest; as for set_csr_ioctl, the soft csr has
> already been written to stay consistent with hardware.
> 2. Improve the get/set_csr_ioctl interface: we set a SW, HW or INVALID
> flag for every csr according to its features at kvm init. In
> get/set_csr_ioctl, if a csr is HW we use the gcsrrd/gcsrwr instructions
> to access it; if it is SW we emulate it in software; anything else
> returns false.
> 3. Add a set_hw_gcsr function in csr_ops.S, used in set_csr_ioctl.
> We have split the hw gcsr into three parts, so we can calculate the code
> offset from the gcsrid and jump there to run the gcsrwr instruction.
> This function makes the code simpler and avoids the previous
> SET_HW_GCSR(XXX) interface.
> 4. Improve the kvm mmu functions, such as the flush page table and
> make-clean page table interfaces.
>
> Changes for v3:
> 1. Remove the vpid array list in kvm_vcpu_arch and use a vpid variable here,
> because a vpid will never be recycled if a vCPU migrates from physical CPU A
> to B and back to A.
> 2. Make some constant variables in kvm_context to global such as vpid_mask,
> guest_eentry, enter_guest, etc.
> 3. Add some new tracepoints, such as kvm_trace_idle, kvm_trace_cache,
> kvm_trace_gspr, etc.
> 4. There is some duplicated code in kvm_handle_exit and kvm_vcpu_run,
> so move it into a new function kvm_pre_enter_guest.
> 5. Change the RESUME_HOST, RESUME_GUEST value, return 1 for resume guest
> and "<= 0" for resume host.
> 6. Fcsr and fpu registers are saved/restored together.
>
> Changes for v2:
> 1. Separate the original patch-01 and patch-03 into smaller patches;
> the patches mainly cover kvm module init, module exit, vcpu create,
> vcpu run, etc.
> 2. Remove the original KVM_{GET,SET}_CSRS ioctl from the kvm uapi header,
> and use the common KVM_{GET,SET}_ONE_REG to access registers.
> 3. Use BIT(x) to replace the "1 << n_bits" statement.
>
> Tianrui Zhao (30):
>   LoongArch: KVM: Add kvm related header files
>   LoongArch: KVM: Implement kvm module related interface
>   LoongArch: KVM: Implement kvm hardware enable, disable interface
>   LoongArch: KVM: Implement VM related functions
>   LoongArch: KVM: Add vcpu related header files
>   LoongArch: KVM: Implement vcpu create and destroy interface
>   LoongArch: KVM: Implement vcpu run interface
>   LoongArch: KVM: Implement vcpu handle exit interface
>   LoongArch: KVM: Implement vcpu get, vcpu set registers
>   LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
>   LoongArch: KVM: Implement fpu related operations for vcpu
>   LoongArch: KVM: Implement vcpu interrupt operations
>   LoongArch: KVM: Implement misc vcpu related interfaces
>   LoongArch: KVM: Implement vcpu load and vcpu put operations
>   LoongArch: KVM: Implement vcpu status description
>   LoongArch: KVM: Implement update VM id function
>   LoongArch: KVM: Implement virtual machine tlb operations
>   LoongArch: KVM: Implement vcpu timer operations
>   LoongArch: KVM: Implement kvm mmu operations
>   LoongArch: KVM: Implement handle csr exception
>   LoongArch: KVM: Implement handle iocsr exception
>   LoongArch: KVM: Implement handle idle exception
>   LoongArch: KVM: Implement handle gspr exception
>   LoongArch: KVM: Implement handle mmio exception
>   LoongArch: KVM: Implement handle fpu exception
>   LoongArch: KVM: Implement kvm exception vector
>   LoongArch: KVM: Implement vcpu world switch
>   LoongArch: KVM: Enable kvm config and add the makefile
>   LoongArch: KVM: Supplement kvm document about LoongArch-specific part
>   LoongArch: KVM: Add maintainers for LoongArch KVM
>
>  Documentation/virt/kvm/api.rst             |  70 +-
>  MAINTAINERS                                |  12 +
>  arch/loongarch/Kbuild                      |   1 +
>  arch/loongarch/Kconfig                     |   3 +
>  arch/loongarch/configs/loongson3_defconfig |   2 +
>  arch/loongarch/include/asm/inst.h          |  16 +
>  arch/loongarch/include/asm/kvm_csr.h       | 222 +++++
>  arch/loongarch/include/asm/kvm_host.h      | 238 ++++++
>  arch/loongarch/include/asm/kvm_types.h     |  11 +
>  arch/loongarch/include/asm/kvm_vcpu.h      |  95 +++
>  arch/loongarch/include/asm/loongarch.h     |  19 +-
>  arch/loongarch/include/uapi/asm/kvm.h      | 101 +++
>  arch/loongarch/kernel/asm-offsets.c        |  32 +
>  arch/loongarch/kvm/Kconfig                 |  45 ++
>  arch/loongarch/kvm/Makefile                |  22 +
>  arch/loongarch/kvm/csr_ops.S               |  67 ++
>  arch/loongarch/kvm/exit.c                  | 702 ++++++++++++++++
>  arch/loongarch/kvm/interrupt.c             | 113 +++
>  arch/loongarch/kvm/main.c                  | 361 +++++++++
>  arch/loongarch/kvm/mmu.c                   | 678 ++++++++++++++++
>  arch/loongarch/kvm/switch.S                | 255 ++++++
>  arch/loongarch/kvm/timer.c                 | 200 +++++
>  arch/loongarch/kvm/tlb.c                   |  34 +
>  arch/loongarch/kvm/trace.h                 | 168 ++++
>  arch/loongarch/kvm/vcpu.c                  | 898 +++++++++++++++++++++
>  arch/loongarch/kvm/vm.c                    |  76 ++
>  arch/loongarch/kvm/vmid.c                  |  66 ++
>  include/uapi/linux/kvm.h                   |   9 +
>  28 files changed, 4502 insertions(+), 14 deletions(-)
>  create mode 100644 arch/loongarch/include/asm/kvm_csr.h
>  create mode 100644 arch/loongarch/include/asm/kvm_host.h
>  create mode 100644 arch/loongarch/include/asm/kvm_types.h
>  create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>  create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
>  create mode 100644 arch/loongarch/kvm/Kconfig
>  create mode 100644 arch/loongarch/kvm/Makefile
>  create mode 100644 arch/loongarch/kvm/csr_ops.S
>  create mode 100644 arch/loongarch/kvm/exit.c
>  create mode 100644 arch/loongarch/kvm/interrupt.c
>  create mode 100644 arch/loongarch/kvm/main.c
>  create mode 100644 arch/loongarch/kvm/mmu.c
>  create mode 100644 arch/loongarch/kvm/switch.S
>  create mode 100644 arch/loongarch/kvm/timer.c
>  create mode 100644 arch/loongarch/kvm/tlb.c
>  create mode 100644 arch/loongarch/kvm/trace.h
>  create mode 100644 arch/loongarch/kvm/vcpu.c
>  create mode 100644 arch/loongarch/kvm/vm.c
>  create mode 100644 arch/loongarch/kvm/vmid.c
>
> --
> 2.27.0
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files
  2023-08-31  8:29 ` [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files Tianrui Zhao
@ 2023-09-11  4:59   ` Huacai Chen
  2023-09-11  9:41     ` zhaotianrui
  0 siblings, 1 reply; 56+ messages in thread
From: Huacai Chen @ 2023-09-11  4:59 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao

Hi, Tianrui,

On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>
> Add LoongArch KVM related header files, including kvm.h,
> kvm_host.h, kvm_types.h. All of those are about LoongArch
> virtualization features and kvm interfaces.
>
> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>  arch/loongarch/include/asm/kvm_host.h  | 238 +++++++++++++++++++++++++
>  arch/loongarch/include/asm/kvm_types.h |  11 ++
>  arch/loongarch/include/uapi/asm/kvm.h  | 101 +++++++++++
>  include/uapi/linux/kvm.h               |   9 +
>  4 files changed, 359 insertions(+)
>  create mode 100644 arch/loongarch/include/asm/kvm_host.h
>  create mode 100644 arch/loongarch/include/asm/kvm_types.h
>  create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
>
> diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
> new file mode 100644
> index 0000000000..9f23ddaaae
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_host.h
> @@ -0,0 +1,238 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __ASM_LOONGARCH_KVM_HOST_H__
> +#define __ASM_LOONGARCH_KVM_HOST_H__
> +
> +#include <linux/cpumask.h>
> +#include <linux/mutex.h>
> +#include <linux/hrtimer.h>
> +#include <linux/interrupt.h>
> +#include <linux/types.h>
> +#include <linux/kvm.h>
> +#include <linux/kvm_types.h>
> +#include <linux/threads.h>
> +#include <linux/spinlock.h>
> +
> +#include <asm/inst.h>
> +#include <asm/loongarch.h>
> +
> +/* Loongarch KVM register ids */
> +#define LOONGARCH_CSR_32(_R, _S)       \
> +       (KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U32 | (8 * (_R) + (_S)))
> +
> +#define LOONGARCH_CSR_64(_R, _S)       \
> +       (KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * (_R) + (_S)))
> +
> +#define KVM_IOC_CSRID(id)              LOONGARCH_CSR_64(id, 0)
> +#define KVM_GET_IOC_CSRIDX(id)         ((id & KVM_CSR_IDX_MASK) >> 3)
> +
> +#define KVM_MAX_VCPUS                  256
> +/* memory slots that are not exposed to userspace */
> +#define KVM_PRIVATE_MEM_SLOTS          0
> +
> +#define KVM_HALT_POLL_NS_DEFAULT       500000
> +
> +struct kvm_vm_stat {
> +       struct kvm_vm_stat_generic generic;
> +};
> +
> +struct kvm_vcpu_stat {
> +       struct kvm_vcpu_stat_generic generic;
> +       u64 idle_exits;
> +       u64 signal_exits;
> +       u64 int_exits;
> +       u64 cpucfg_exits;
> +};
> +
> +struct kvm_arch_memory_slot {
> +};
> +
> +struct kvm_context {
> +       unsigned long vpid_cache;
> +       struct kvm_vcpu *last_vcpu;
> +};
> +
> +struct kvm_world_switch {
> +       int (*guest_eentry)(void);
> +       int (*enter_guest)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +       unsigned long page_order;
> +};
> +
> +struct kvm_arch {
> +       /* Guest physical mm */
> +       pgd_t *pgd;
> +       unsigned long gpa_size;
> +
> +       s64 time_offset;
> +       struct kvm_context __percpu *vmcs;
> +};
> +
> +#define CSR_MAX_NUMS           0x800
> +
> +struct loongarch_csrs {
> +       unsigned long csrs[CSR_MAX_NUMS];
> +};
> +
> +/* Resume Flags */
> +#define RESUME_HOST            0
> +#define RESUME_GUEST           1
> +
> +enum emulation_result {
> +       EMULATE_DONE,           /* no further processing */
> +       EMULATE_DO_MMIO,        /* kvm_run filled with MMIO request */
> +       EMULATE_FAIL,           /* can't emulate this instruction */
> +       EMULATE_EXCEPT,         /* A guest exception has been generated */
> +       EMULATE_DO_IOCSR,       /* handle IOCSR request */
> +};
> +
> +#define KVM_LARCH_CSR          (0x1 << 1)
> +#define KVM_LARCH_FPU          (0x1 << 0)
> +
> +struct kvm_vcpu_arch {
> +       /*
> +        * Switch pointer-to-function type to unsigned long
> +        * for loading the value into register directly.
> +        */
> +       unsigned long host_eentry;
> +       unsigned long guest_eentry;
> +
> +       /* Pointers stored here for easy accessing from assembly code */
> +       int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +
> +       /* Host registers preserved across guest mode execution */
> +       unsigned long host_sp;
> +       unsigned long host_tp;
> +       unsigned long host_pgd;
> +
> +       /* Host CSRs are used when handling exits from guest */
> +       unsigned long badi;
> +       unsigned long badv;
> +       unsigned long host_ecfg;
> +       unsigned long host_estat;
> +       unsigned long host_percpu;
> +
> +       /* GPRs */
> +       unsigned long gprs[32];
> +       unsigned long pc;
> +
> +       /* Which auxiliary state is loaded (KVM_LOONGARCH_AUX_*) */
> +       unsigned int aux_inuse;
> +       /* FPU state */
> +       struct loongarch_fpu fpu FPU_ALIGN;
> +
> +       /* CSR state */
> +       struct loongarch_csrs *csr;
> +
> +       /* GPR used as IO source/target */
> +       u32 io_gpr;
> +
> +       struct hrtimer swtimer;
> +       /* KVM register to control count timer */
> +       u32 count_ctl;
> +
> +       /* Bitmask of exceptions that are pending */
> +       unsigned long irq_pending;
> +       /* Bitmask of pending exceptions to be cleared */
> +       unsigned long irq_clear;
> +
> +       /* Cache for pages needed inside spinlock regions */
> +       struct kvm_mmu_memory_cache mmu_page_cache;
> +
> +       /* vcpu's vpid */
> +       u64 vpid;
> +
> +       /* Frequency of stable timer in Hz */
> +       u64 timer_mhz;
> +       ktime_t expire;
> +
> +       u64 core_ext_ioisr[4];
> +
> +       /* Last CPU the vCPU state was loaded on */
> +       int last_sched_cpu;
> +       /* mp state */
> +       struct kvm_mp_state mp_state;
> +};
> +
> +static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg)
> +{
> +       return csr->csrs[reg];
> +}
> +
> +static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg, unsigned long val)
> +{
> +       csr->csrs[reg] = val;
> +}
> +
> +/* Helpers */
> +static inline bool _kvm_guest_has_fpu(struct kvm_vcpu_arch *arch)
> +{
> +       return cpu_has_fpu;
> +}
> +
> +void _kvm_init_fault(void);
Can we use kvm_guest_has_fpu and kvm_init_fault? Don't prefix with _
unless you have a special reason. For example, static internal
functions can be prefixed.

> +
> +/* Debug: dump vcpu state */
> +int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
> +
> +/* MMU handling */
> +int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write);
> +void kvm_flush_tlb_all(void);
> +void _kvm_destroy_mm(struct kvm *kvm);
The same as before, and maybe you can check other patches for the same issue.


Huacai

> +pgd_t *kvm_pgd_alloc(void);
> +
> +#define KVM_ARCH_WANT_MMU_NOTIFIER
> +int kvm_unmap_hva_range(struct kvm *kvm,
> +                       unsigned long start, unsigned long end, bool blockable);
> +void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
> +int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
> +int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
> +
> +static inline void update_pc(struct kvm_vcpu_arch *arch)
> +{
> +       arch->pc += 4;
> +}
> +
> +/**
> + * kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault.
> + * @vcpu:      Virtual CPU.
> + *
> + * Returns:    Whether the TLBL exception was likely due to an instruction
> + *             fetch fault rather than a data load fault.
> + */
> +static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
> +{
> +       return arch->pc == arch->badv;
> +}
> +
> +/* Misc */
> +static inline void kvm_arch_hardware_unsetup(void) {}
> +static inline void kvm_arch_sync_events(struct kvm *kvm) {}
> +static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
> +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
> +static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
> +static inline void kvm_arch_free_memslot(struct kvm *kvm,
> +                                  struct kvm_memory_slot *slot) {}
> +void _kvm_check_vmid(struct kvm_vcpu *vcpu);
> +enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer);
> +int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa);
> +void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
> +                                       const struct kvm_memory_slot *memslot);
> +void kvm_init_vmcs(struct kvm *kvm);
> +void kvm_vector_entry(void);
> +int  kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu);
> +extern const unsigned long kvm_vector_size;
> +extern const unsigned long kvm_enter_guest_size;
> +extern unsigned long vpid_mask;
> +extern struct kvm_world_switch *kvm_loongarch_ops;
> +
> +#define SW_GCSR                (1 << 0)
> +#define HW_GCSR                (1 << 1)
> +#define INVALID_GCSR   (1 << 2)
> +int get_gcsr_flag(int csr);
> +extern void set_hw_gcsr(int csr_id, unsigned long val);
> +#endif /* __ASM_LOONGARCH_KVM_HOST_H__ */
> diff --git a/arch/loongarch/include/asm/kvm_types.h b/arch/loongarch/include/asm/kvm_types.h
> new file mode 100644
> index 0000000000..2fe1d4bdff
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_types.h
> @@ -0,0 +1,11 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef _ASM_LOONGARCH_KVM_TYPES_H
> +#define _ASM_LOONGARCH_KVM_TYPES_H
> +
> +#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE      40
> +
> +#endif /* _ASM_LOONGARCH_KVM_TYPES_H */
> diff --git a/arch/loongarch/include/uapi/asm/kvm.h b/arch/loongarch/include/uapi/asm/kvm.h
> new file mode 100644
> index 0000000000..7ec2f34018
> --- /dev/null
> +++ b/arch/loongarch/include/uapi/asm/kvm.h
> @@ -0,0 +1,101 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __UAPI_ASM_LOONGARCH_KVM_H
> +#define __UAPI_ASM_LOONGARCH_KVM_H
> +
> +#include <linux/types.h>
> +
> +/*
> + * KVM Loongarch specific structures and definitions.
> + *
> + * Some parts derived from the x86 version of this file.
> + */
> +
> +#define __KVM_HAVE_READONLY_MEM
> +
> +#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
> +#define KVM_DIRTY_LOG_PAGE_OFFSET      64
> +
> +/*
> + * for KVM_GET_REGS and KVM_SET_REGS
> + */
> +struct kvm_regs {
> +       /* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
> +       __u64 gpr[32];
> +       __u64 pc;
> +};
> +
> +/*
> + * for KVM_GET_FPU and KVM_SET_FPU
> + */
> +struct kvm_fpu {
> +       __u32 fcsr;
> +       __u64 fcc;    /* 8x8 */
> +       struct kvm_fpureg {
> +               __u64 val64[4];
> +       } fpr[32];
> +};
> +
> +/*
> + * For LoongArch, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various
> + * registers.  The id field is broken down as follows:
> + *
> + *  bits[63..52] - As per linux/kvm.h
> + *  bits[51..32] - Must be zero.
> + *  bits[31..16] - Register set.
> + *
> + * Register set = 0: GP registers from kvm_regs (see definitions below).
> + *
> + * Register set = 1: CSR registers.
> + *
> + * Register set = 2: KVM specific registers (see definitions below).
> + *
> + * Register set = 3: FPU / SIMD registers (see definitions below).
> + *
> + * Other sets registers may be added in the future.  Each set would
> + * have its own identifier in bits[31..16].
> + */
> +
> +#define KVM_REG_LOONGARCH_GPR          (KVM_REG_LOONGARCH | 0x00000ULL)
> +#define KVM_REG_LOONGARCH_CSR          (KVM_REG_LOONGARCH | 0x10000ULL)
> +#define KVM_REG_LOONGARCH_KVM          (KVM_REG_LOONGARCH | 0x20000ULL)
> +#define KVM_REG_LOONGARCH_FPU          (KVM_REG_LOONGARCH | 0x30000ULL)
> +#define KVM_REG_LOONGARCH_MASK         (KVM_REG_LOONGARCH | 0x30000ULL)
> +#define KVM_CSR_IDX_MASK               (0x10000 - 1)
> +
> +/*
> + * KVM_REG_LOONGARCH_KVM - KVM specific control registers.
> + */
> +
> +#define KVM_REG_LOONGARCH_COUNTER      (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 3)
> +#define KVM_REG_LOONGARCH_VCPU_RESET   (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 4)
> +
> +struct kvm_debug_exit_arch {
> +};
> +
> +/* for KVM_SET_GUEST_DEBUG */
> +struct kvm_guest_debug_arch {
> +};
> +
> +/* definition of registers in kvm_run */
> +struct kvm_sync_regs {
> +};
> +
> +/* dummy definition */
> +struct kvm_sregs {
> +};
> +
> +struct kvm_iocsr_entry {
> +       __u32 addr;
> +       __u32 pad;
> +       __u64 data;
> +};
> +
> +#define KVM_NR_IRQCHIPS                1
> +#define KVM_IRQCHIP_NUM_PINS   64
> +#define KVM_MAX_CORES          256
> +
> +#endif /* __UAPI_ASM_LOONGARCH_KVM_H */
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index f089ab2909..1184171224 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -264,6 +264,7 @@ struct kvm_xen_exit {
>  #define KVM_EXIT_RISCV_SBI        35
>  #define KVM_EXIT_RISCV_CSR        36
>  #define KVM_EXIT_NOTIFY           37
> +#define KVM_EXIT_LOONGARCH_IOCSR  38
>
>  /* For KVM_EXIT_INTERNAL_ERROR */
>  /* Emulate instruction failed. */
> @@ -336,6 +337,13 @@ struct kvm_run {
>                         __u32 len;
>                         __u8  is_write;
>                 } mmio;
> +               /* KVM_EXIT_LOONGARCH_IOCSR */
> +               struct {
> +                       __u64 phys_addr;
> +                       __u8  data[8];
> +                       __u32 len;
> +                       __u8  is_write;
> +               } iocsr_io;
>                 /* KVM_EXIT_HYPERCALL */
>                 struct {
>                         __u64 nr;
> @@ -1362,6 +1370,7 @@ struct kvm_dirty_tlb {
>  #define KVM_REG_ARM64          0x6000000000000000ULL
>  #define KVM_REG_MIPS           0x7000000000000000ULL
>  #define KVM_REG_RISCV          0x8000000000000000ULL
> +#define KVM_REG_LOONGARCH      0x9000000000000000ULL
>
>  #define KVM_REG_SIZE_SHIFT     52
>  #define KVM_REG_SIZE_MASK      0x00f0000000000000ULL
> --
> 2.27.0
>


* Re: [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile
  2023-08-31  8:30 ` [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
  2023-09-07 20:10   ` WANG Xuerui
@ 2023-09-11  7:30   ` WANG Xuerui
  2023-09-12  1:57     ` zhaotianrui
  1 sibling, 1 reply; 56+ messages in thread
From: WANG Xuerui @ 2023-09-11  7:30 UTC (permalink / raw)
  To: Tianrui Zhao, linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, Greg Kroah-Hartman, loongarch,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo,
	Xi Ruoyao, kernel test robot

On 8/31/23 16:30, Tianrui Zhao wrote:
> [snip]
> +
> +config AS_HAS_LVZ_EXTENSION
> +	def_bool $(as-instr,hvcl 0)

Upon closer look, this piece could use some improvement as well: while
the presence of "hvcl" indeed always implies full support for the LVZ
instructions, "hvcl" isn't used anywhere in the series, so the test
isn't self-evidently meaningful. It may be better to probe for an
instruction that is actually used, such as "gcsrrd". What do you think?

-- 
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/



* Re: [PATCH v20 05/30] LoongArch: KVM: Add vcpu related header files
  2023-08-31  8:29 ` [PATCH v20 05/30] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
@ 2023-09-11  8:07   ` Huacai Chen
  2023-09-12  8:26     ` zhaotianrui
  0 siblings, 1 reply; 56+ messages in thread
From: Huacai Chen @ 2023-09-11  8:07 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao

Hi, Tianrui,

On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>
> Add LoongArch vcpu related header files, including vcpu csr
> information, irq number defines, and some vcpu interfaces.
>
> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>  arch/loongarch/include/asm/kvm_csr.h   | 222 +++++++++++++++++++++++++
>  arch/loongarch/include/asm/kvm_vcpu.h  |  95 +++++++++++
>  arch/loongarch/include/asm/loongarch.h |  19 ++-
>  arch/loongarch/kvm/trace.h             | 168 +++++++++++++++++++
>  4 files changed, 499 insertions(+), 5 deletions(-)
>  create mode 100644 arch/loongarch/include/asm/kvm_csr.h
>  create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>  create mode 100644 arch/loongarch/kvm/trace.h
>
> diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
> new file mode 100644
> index 0000000000..e27dcacd00
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_csr.h
> @@ -0,0 +1,222 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __ASM_LOONGARCH_KVM_CSR_H__
> +#define __ASM_LOONGARCH_KVM_CSR_H__
> +#include <asm/loongarch.h>
> +#include <asm/kvm_vcpu.h>
> +#include <linux/uaccess.h>
> +#include <linux/kvm_host.h>
> +
> +/* binutils support virtualization instructions */
> +#define gcsr_read(csr)                                         \
> +({                                                             \
> +       register unsigned long __v;                             \
> +       __asm__ __volatile__(                                   \
> +               " gcsrrd %[val], %[reg]\n\t"                    \
> +               : [val] "=r" (__v)                              \
> +               : [reg] "i" (csr)                               \
> +               : "memory");                                    \
> +       __v;                                                    \
> +})
> +
> +#define gcsr_write(v, csr)                                     \
> +({                                                             \
> +       register unsigned long __v = v;                         \
> +       __asm__ __volatile__ (                                  \
> +               " gcsrwr %[val], %[reg]\n\t"                    \
> +               : [val] "+r" (__v)                              \
> +               : [reg] "i" (csr)                               \
> +               : "memory");                                    \
> +})
> +
> +#define gcsr_xchg(v, m, csr)                                   \
> +({                                                             \
> +       register unsigned long __v = v;                         \
> +       __asm__ __volatile__(                                   \
> +               " gcsrxchg %[val], %[mask], %[reg]\n\t"         \
> +               : [val] "+r" (__v)                              \
> +               : [mask] "r" (m), [reg] "i" (csr)               \
> +               : "memory");                                    \
> +       __v;                                                    \
> +})
> +
> +/* Guest CSRS read and write */
> +#define read_gcsr_crmd()               gcsr_read(LOONGARCH_CSR_CRMD)
> +#define write_gcsr_crmd(val)           gcsr_write(val, LOONGARCH_CSR_CRMD)
> +#define read_gcsr_prmd()               gcsr_read(LOONGARCH_CSR_PRMD)
> +#define write_gcsr_prmd(val)           gcsr_write(val, LOONGARCH_CSR_PRMD)
> +#define read_gcsr_euen()               gcsr_read(LOONGARCH_CSR_EUEN)
> +#define write_gcsr_euen(val)           gcsr_write(val, LOONGARCH_CSR_EUEN)
> +#define read_gcsr_misc()               gcsr_read(LOONGARCH_CSR_MISC)
> +#define write_gcsr_misc(val)           gcsr_write(val, LOONGARCH_CSR_MISC)
> +#define read_gcsr_ecfg()               gcsr_read(LOONGARCH_CSR_ECFG)
> +#define write_gcsr_ecfg(val)           gcsr_write(val, LOONGARCH_CSR_ECFG)
> +#define read_gcsr_estat()              gcsr_read(LOONGARCH_CSR_ESTAT)
> +#define write_gcsr_estat(val)          gcsr_write(val, LOONGARCH_CSR_ESTAT)
> +#define read_gcsr_era()                        gcsr_read(LOONGARCH_CSR_ERA)
> +#define write_gcsr_era(val)            gcsr_write(val, LOONGARCH_CSR_ERA)
> +#define read_gcsr_badv()               gcsr_read(LOONGARCH_CSR_BADV)
> +#define write_gcsr_badv(val)           gcsr_write(val, LOONGARCH_CSR_BADV)
> +#define read_gcsr_badi()               gcsr_read(LOONGARCH_CSR_BADI)
> +#define write_gcsr_badi(val)           gcsr_write(val, LOONGARCH_CSR_BADI)
> +#define read_gcsr_eentry()             gcsr_read(LOONGARCH_CSR_EENTRY)
> +#define write_gcsr_eentry(val)         gcsr_write(val, LOONGARCH_CSR_EENTRY)
> +
> +#define read_gcsr_tlbidx()             gcsr_read(LOONGARCH_CSR_TLBIDX)
> +#define write_gcsr_tlbidx(val)         gcsr_write(val, LOONGARCH_CSR_TLBIDX)
> +#define read_gcsr_tlbhi()              gcsr_read(LOONGARCH_CSR_TLBEHI)
> +#define write_gcsr_tlbhi(val)          gcsr_write(val, LOONGARCH_CSR_TLBEHI)
> +#define read_gcsr_tlblo0()             gcsr_read(LOONGARCH_CSR_TLBELO0)
> +#define write_gcsr_tlblo0(val)         gcsr_write(val, LOONGARCH_CSR_TLBELO0)
> +#define read_gcsr_tlblo1()             gcsr_read(LOONGARCH_CSR_TLBELO1)
> +#define write_gcsr_tlblo1(val)         gcsr_write(val, LOONGARCH_CSR_TLBELO1)
> +
> +#define read_gcsr_asid()               gcsr_read(LOONGARCH_CSR_ASID)
> +#define write_gcsr_asid(val)           gcsr_write(val, LOONGARCH_CSR_ASID)
> +#define read_gcsr_pgdl()               gcsr_read(LOONGARCH_CSR_PGDL)
> +#define write_gcsr_pgdl(val)           gcsr_write(val, LOONGARCH_CSR_PGDL)
> +#define read_gcsr_pgdh()               gcsr_read(LOONGARCH_CSR_PGDH)
> +#define write_gcsr_pgdh(val)           gcsr_write(val, LOONGARCH_CSR_PGDH)
> +#define write_gcsr_pgd(val)            gcsr_write(val, LOONGARCH_CSR_PGD)
> +#define read_gcsr_pgd()                        gcsr_read(LOONGARCH_CSR_PGD)
> +#define read_gcsr_pwctl0()             gcsr_read(LOONGARCH_CSR_PWCTL0)
> +#define write_gcsr_pwctl0(val)         gcsr_write(val, LOONGARCH_CSR_PWCTL0)
> +#define read_gcsr_pwctl1()             gcsr_read(LOONGARCH_CSR_PWCTL1)
> +#define write_gcsr_pwctl1(val)         gcsr_write(val, LOONGARCH_CSR_PWCTL1)
> +#define read_gcsr_stlbpgsize()         gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
> +#define write_gcsr_stlbpgsize(val)     gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
> +#define read_gcsr_rvacfg()             gcsr_read(LOONGARCH_CSR_RVACFG)
> +#define write_gcsr_rvacfg(val)         gcsr_write(val, LOONGARCH_CSR_RVACFG)
> +
> +#define read_gcsr_cpuid()              gcsr_read(LOONGARCH_CSR_CPUID)
> +#define write_gcsr_cpuid(val)          gcsr_write(val, LOONGARCH_CSR_CPUID)
> +#define read_gcsr_prcfg1()             gcsr_read(LOONGARCH_CSR_PRCFG1)
> +#define write_gcsr_prcfg1(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG1)
> +#define read_gcsr_prcfg2()             gcsr_read(LOONGARCH_CSR_PRCFG2)
> +#define write_gcsr_prcfg2(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG2)
> +#define read_gcsr_prcfg3()             gcsr_read(LOONGARCH_CSR_PRCFG3)
> +#define write_gcsr_prcfg3(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG3)
> +
> +#define read_gcsr_kscratch0()          gcsr_read(LOONGARCH_CSR_KS0)
> +#define write_gcsr_kscratch0(val)      gcsr_write(val, LOONGARCH_CSR_KS0)
> +#define read_gcsr_kscratch1()          gcsr_read(LOONGARCH_CSR_KS1)
> +#define write_gcsr_kscratch1(val)      gcsr_write(val, LOONGARCH_CSR_KS1)
> +#define read_gcsr_kscratch2()          gcsr_read(LOONGARCH_CSR_KS2)
> +#define write_gcsr_kscratch2(val)      gcsr_write(val, LOONGARCH_CSR_KS2)
> +#define read_gcsr_kscratch3()          gcsr_read(LOONGARCH_CSR_KS3)
> +#define write_gcsr_kscratch3(val)      gcsr_write(val, LOONGARCH_CSR_KS3)
> +#define read_gcsr_kscratch4()          gcsr_read(LOONGARCH_CSR_KS4)
> +#define write_gcsr_kscratch4(val)      gcsr_write(val, LOONGARCH_CSR_KS4)
> +#define read_gcsr_kscratch5()          gcsr_read(LOONGARCH_CSR_KS5)
> +#define write_gcsr_kscratch5(val)      gcsr_write(val, LOONGARCH_CSR_KS5)
> +#define read_gcsr_kscratch6()          gcsr_read(LOONGARCH_CSR_KS6)
> +#define write_gcsr_kscratch6(val)      gcsr_write(val, LOONGARCH_CSR_KS6)
> +#define read_gcsr_kscratch7()          gcsr_read(LOONGARCH_CSR_KS7)
> +#define write_gcsr_kscratch7(val)      gcsr_write(val, LOONGARCH_CSR_KS7)
> +
> +#define read_gcsr_timerid()            gcsr_read(LOONGARCH_CSR_TMID)
> +#define write_gcsr_timerid(val)                gcsr_write(val, LOONGARCH_CSR_TMID)
> +#define read_gcsr_timercfg()           gcsr_read(LOONGARCH_CSR_TCFG)
> +#define write_gcsr_timercfg(val)       gcsr_write(val, LOONGARCH_CSR_TCFG)
> +#define read_gcsr_timertick()          gcsr_read(LOONGARCH_CSR_TVAL)
> +#define write_gcsr_timertick(val)      gcsr_write(val, LOONGARCH_CSR_TVAL)
> +#define read_gcsr_timeroffset()                gcsr_read(LOONGARCH_CSR_CNTC)
> +#define write_gcsr_timeroffset(val)    gcsr_write(val, LOONGARCH_CSR_CNTC)
> +
> +#define read_gcsr_llbctl()             gcsr_read(LOONGARCH_CSR_LLBCTL)
> +#define write_gcsr_llbctl(val)         gcsr_write(val, LOONGARCH_CSR_LLBCTL)
> +
> +#define read_gcsr_tlbrentry()          gcsr_read(LOONGARCH_CSR_TLBRENTRY)
> +#define write_gcsr_tlbrentry(val)      gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
> +#define read_gcsr_tlbrbadv()           gcsr_read(LOONGARCH_CSR_TLBRBADV)
> +#define write_gcsr_tlbrbadv(val)       gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
> +#define read_gcsr_tlbrera()            gcsr_read(LOONGARCH_CSR_TLBRERA)
> +#define write_gcsr_tlbrera(val)                gcsr_write(val, LOONGARCH_CSR_TLBRERA)
> +#define read_gcsr_tlbrsave()           gcsr_read(LOONGARCH_CSR_TLBRSAVE)
> +#define write_gcsr_tlbrsave(val)       gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
> +#define read_gcsr_tlbrelo0()           gcsr_read(LOONGARCH_CSR_TLBRELO0)
> +#define write_gcsr_tlbrelo0(val)       gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
> +#define read_gcsr_tlbrelo1()           gcsr_read(LOONGARCH_CSR_TLBRELO1)
> +#define write_gcsr_tlbrelo1(val)       gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
> +#define read_gcsr_tlbrehi()            gcsr_read(LOONGARCH_CSR_TLBREHI)
> +#define write_gcsr_tlbrehi(val)                gcsr_write(val, LOONGARCH_CSR_TLBREHI)
> +#define read_gcsr_tlbrprmd()           gcsr_read(LOONGARCH_CSR_TLBRPRMD)
> +#define write_gcsr_tlbrprmd(val)       gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
> +
> +#define read_gcsr_directwin0()         gcsr_read(LOONGARCH_CSR_DMWIN0)
> +#define write_gcsr_directwin0(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN0)
> +#define read_gcsr_directwin1()         gcsr_read(LOONGARCH_CSR_DMWIN1)
> +#define write_gcsr_directwin1(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN1)
> +#define read_gcsr_directwin2()         gcsr_read(LOONGARCH_CSR_DMWIN2)
> +#define write_gcsr_directwin2(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN2)
> +#define read_gcsr_directwin3()         gcsr_read(LOONGARCH_CSR_DMWIN3)
> +#define write_gcsr_directwin3(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN3)
> +
> +/* Guest related CSRs */
> +#define read_csr_gtlbc()               csr_read64(LOONGARCH_CSR_GTLBC)
> +#define write_csr_gtlbc(val)           csr_write64(val, LOONGARCH_CSR_GTLBC)
> +#define read_csr_trgp()                        csr_read64(LOONGARCH_CSR_TRGP)
> +#define read_csr_gcfg()                        csr_read64(LOONGARCH_CSR_GCFG)
> +#define write_csr_gcfg(val)            csr_write64(val, LOONGARCH_CSR_GCFG)
> +#define read_csr_gstat()               csr_read64(LOONGARCH_CSR_GSTAT)
> +#define write_csr_gstat(val)           csr_write64(val, LOONGARCH_CSR_GSTAT)
> +#define read_csr_gintc()               csr_read64(LOONGARCH_CSR_GINTC)
> +#define write_csr_gintc(val)           csr_write64(val, LOONGARCH_CSR_GINTC)
> +#define read_csr_gcntc()               csr_read64(LOONGARCH_CSR_GCNTC)
> +#define write_csr_gcntc(val)           csr_write64(val, LOONGARCH_CSR_GCNTC)
> +
> +#define __BUILD_GCSR_OP(name)          __BUILD_CSR_COMMON(gcsr_##name)
> +
> +__BUILD_GCSR_OP(llbctl)
> +__BUILD_GCSR_OP(tlbidx)
> +__BUILD_CSR_OP(gcfg)
> +__BUILD_CSR_OP(gstat)
> +__BUILD_CSR_OP(gtlbc)
> +__BUILD_CSR_OP(gintc)
> +
> +#define set_gcsr_estat(val)    \
> +       gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
> +#define clear_gcsr_estat(val)  \
> +       gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
> +
> +#define kvm_read_hw_gcsr(id)           gcsr_read(id)
> +#define kvm_write_hw_gcsr(csr, id, val)        gcsr_write(val, id)
> +
> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v);
> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v);
> +
> +int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
> +
> +#define kvm_save_hw_gcsr(csr, gid)     (csr->csrs[gid] = gcsr_read(gid))
> +#define kvm_restore_hw_gcsr(csr, gid)  (gcsr_write(csr->csrs[gid], gid))
> +
> +static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
> +{
> +       return csr->csrs[gid];
> +}
> +
> +static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr,
> +                                             int gid, unsigned long val)
> +{
> +       csr->csrs[gid] = val;
> +}
> +
> +static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr,
> +                                           int gid, unsigned long val)
> +{
> +       csr->csrs[gid] |= val;
> +}
> +
> +static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr,
> +                                              int gid, unsigned long mask,
> +                                              unsigned long val)
> +{
> +       unsigned long _mask = mask;
> +
> +       csr->csrs[gid] &= ~_mask;
> +       csr->csrs[gid] |= val & _mask;
> +}
> +#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
> diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
> new file mode 100644
> index 0000000000..3d23a656fe
> --- /dev/null
> +++ b/arch/loongarch/include/asm/kvm_vcpu.h
> @@ -0,0 +1,95 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
> +#define __ASM_LOONGARCH_KVM_VCPU_H__
> +
> +#include <linux/kvm_host.h>
> +#include <asm/loongarch.h>
> +
> +/* Controlled by 0x5 guest ESTAT (exception status) */
> +#define CPU_SIP0                       (_ULCAST_(1))
> +#define CPU_SIP1                       (_ULCAST_(1) << 1)
> +#define CPU_PMU                                (_ULCAST_(1) << 10)
> +#define CPU_TIMER                      (_ULCAST_(1) << 11)
> +#define CPU_IPI                                (_ULCAST_(1) << 12)
> +
> +/*
> + * Controlled by 0x52 guest exception VIP,
> + * aligned to ESTAT bits 5~12
> + */
> +#define CPU_IP0                                (_ULCAST_(1))
> +#define CPU_IP1                                (_ULCAST_(1) << 1)
> +#define CPU_IP2                                (_ULCAST_(1) << 2)
> +#define CPU_IP3                                (_ULCAST_(1) << 3)
> +#define CPU_IP4                                (_ULCAST_(1) << 4)
> +#define CPU_IP5                                (_ULCAST_(1) << 5)
> +#define CPU_IP6                                (_ULCAST_(1) << 6)
> +#define CPU_IP7                                (_ULCAST_(1) << 7)
> +
> +#define MNSEC_PER_SEC                  (NSEC_PER_SEC >> 20)
> +
> +/* KVM_IRQ_LINE irq field index values */
> +#define KVM_LOONGSON_IRQ_TYPE_SHIFT    24
> +#define KVM_LOONGSON_IRQ_TYPE_MASK     0xff
> +#define KVM_LOONGSON_IRQ_VCPU_SHIFT    16
> +#define KVM_LOONGSON_IRQ_VCPU_MASK     0xff
> +#define KVM_LOONGSON_IRQ_NUM_SHIFT     0
> +#define KVM_LOONGSON_IRQ_NUM_MASK      0xffff
> +
> +/* Irq_type field */
> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IP   0
> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IO   1
> +#define KVM_LOONGSON_IRQ_TYPE_HT       2
> +#define KVM_LOONGSON_IRQ_TYPE_MSI      3
> +#define KVM_LOONGSON_IRQ_TYPE_IOAPIC   4
> +#define KVM_LOONGSON_IRQ_TYPE_ROUTE    5
> +
> +/* Out-of-kernel GIC cpu interrupt injection irq_number field */
> +#define KVM_LOONGSON_IRQ_CPU_IRQ       0
> +#define KVM_LOONGSON_IRQ_CPU_FIQ       1
> +#define KVM_LOONGSON_CPU_IP_NUM                8
> +
> +typedef union loongarch_instruction  larch_inst;
> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
> +
> +int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
> +int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
> +int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
> +int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
> +int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
> +int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
> +void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
> +
> +void kvm_own_fpu(struct kvm_vcpu *vcpu);
> +void kvm_lose_fpu(struct kvm_vcpu *vcpu);
> +void kvm_save_fpu(struct loongarch_fpu *fpu);
> +void kvm_restore_fpu(struct loongarch_fpu *fpu);
> +void kvm_restore_fcsr(struct loongarch_fpu *fpu);
> +
> +void kvm_acquire_timer(struct kvm_vcpu *vcpu);
> +void kvm_reset_timer(struct kvm_vcpu *vcpu);
> +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
> +void kvm_restore_timer(struct kvm_vcpu *vcpu);
> +void kvm_save_timer(struct kvm_vcpu *vcpu);
> +
> +int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq);
> +/*
> + * LoongArch KVM guest interrupt handling
> + */
> +static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
> +{
> +       set_bit(irq, &vcpu->arch.irq_pending);
> +       clear_bit(irq, &vcpu->arch.irq_clear);
> +}
> +
> +static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
> +{
> +       clear_bit(irq, &vcpu->arch.irq_pending);
> +       set_bit(irq, &vcpu->arch.irq_clear);
> +}
> +
> +#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
> diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
> index 10748a20a2..b9044c8dfa 100644
> --- a/arch/loongarch/include/asm/loongarch.h
> +++ b/arch/loongarch/include/asm/loongarch.h
> @@ -269,6 +269,7 @@ __asm__(".macro     parse_r var r\n\t"
>  #define LOONGARCH_CSR_ECFG             0x4     /* Exception config */
>  #define  CSR_ECFG_VS_SHIFT             16
>  #define  CSR_ECFG_VS_WIDTH             3
> +#define  CSR_ECFG_VS_SHIFT_END         (CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
>  #define  CSR_ECFG_VS                   (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
>  #define  CSR_ECFG_IM_SHIFT             0
>  #define  CSR_ECFG_IM_WIDTH             14
> @@ -357,13 +358,14 @@ __asm__(".macro   parse_r var r\n\t"
>  #define  CSR_TLBLO1_V                  (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
>
>  #define LOONGARCH_CSR_GTLBC            0x15    /* Guest TLB control */
> -#define  CSR_GTLBC_RID_SHIFT           16
> -#define  CSR_GTLBC_RID_WIDTH           8
> -#define  CSR_GTLBC_RID                 (_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
> +#define  CSR_GTLBC_TGID_SHIFT          16
> +#define  CSR_GTLBC_TGID_WIDTH          8
> +#define  CSR_GTLBC_TGID_SHIFT_END      (CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
> +#define  CSR_GTLBC_TGID                        (_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
>  #define  CSR_GTLBC_TOTI_SHIFT          13
>  #define  CSR_GTLBC_TOTI                        (_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
> -#define  CSR_GTLBC_USERID_SHIFT                12
> -#define  CSR_GTLBC_USERID              (_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
> +#define  CSR_GTLBC_USETGID_SHIFT       12
> +#define  CSR_GTLBC_USETGID             (_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
>  #define  CSR_GTLBC_GMTLBSZ_SHIFT       0
>  #define  CSR_GTLBC_GMTLBSZ_WIDTH       6
>  #define  CSR_GTLBC_GMTLBSZ             (_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
> @@ -518,6 +520,7 @@ __asm__(".macro     parse_r var r\n\t"
>  #define LOONGARCH_CSR_GSTAT            0x50    /* Guest status */
>  #define  CSR_GSTAT_GID_SHIFT           16
>  #define  CSR_GSTAT_GID_WIDTH           8
> +#define  CSR_GSTAT_GID_SHIFT_END       (CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
>  #define  CSR_GSTAT_GID                 (_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
>  #define  CSR_GSTAT_GIDBIT_SHIFT                4
>  #define  CSR_GSTAT_GIDBIT_WIDTH                6
> @@ -568,6 +571,12 @@ __asm__(".macro    parse_r var r\n\t"
>  #define  CSR_GCFG_MATC_GUEST           (_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
>  #define  CSR_GCFG_MATC_ROOT            (_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
>  #define  CSR_GCFG_MATC_NEST            (_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
> +#define  CSR_GCFG_MATP_NEST_SHIFT      2
> +#define  CSR_GCFG_MATP_NEST            (_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT)
> +#define  CSR_GCFG_MATP_ROOT_SHIFT      1
> +#define  CSR_GCFG_MATP_ROOT            (_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT)
> +#define  CSR_GCFG_MATP_GUEST_SHIFT     0
> +#define  CSR_GCFG_MATP_GUEST           (_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT)
>
>  #define LOONGARCH_CSR_GINTC            0x52    /* Guest interrupt control */
>  #define  CSR_GINTC_HC_SHIFT            16
> diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
> new file mode 100644
> index 0000000000..17b28d94d5
> --- /dev/null
> +++ b/arch/loongarch/kvm/trace.h
> @@ -0,0 +1,168 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_KVM_H
> +
> +#include <linux/tracepoint.h>
> +#include <asm/kvm_csr.h>
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM   kvm
> +
> +/*
> + * Tracepoints for VM enters
> + */
> +DECLARE_EVENT_CLASS(kvm_transition,
> +       TP_PROTO(struct kvm_vcpu *vcpu),
> +       TP_ARGS(vcpu),
> +       TP_STRUCT__entry(
> +               __field(unsigned long, pc)
> +       ),
> +
> +       TP_fast_assign(
> +               __entry->pc = vcpu->arch.pc;
> +       ),
> +
> +       TP_printk("PC: 0x%08lx",
> +                 __entry->pc)
> +);
> +
> +DEFINE_EVENT(kvm_transition, kvm_enter,
> +            TP_PROTO(struct kvm_vcpu *vcpu),
> +            TP_ARGS(vcpu));
> +
> +DEFINE_EVENT(kvm_transition, kvm_reenter,
> +            TP_PROTO(struct kvm_vcpu *vcpu),
> +            TP_ARGS(vcpu));
> +
> +DEFINE_EVENT(kvm_transition, kvm_out,
> +            TP_PROTO(struct kvm_vcpu *vcpu),
> +            TP_ARGS(vcpu));
> +
> +/* Further exit reasons */
> +#define KVM_TRACE_EXIT_IDLE            64
> +#define KVM_TRACE_EXIT_CACHE           65
> +#define KVM_TRACE_EXIT_SIGNAL          66
> +
> +/* Tracepoints for VM exits */
> +#define kvm_trace_symbol_exit_types                    \
> +       { KVM_TRACE_EXIT_IDLE,          "IDLE" },       \
> +       { KVM_TRACE_EXIT_CACHE,         "CACHE" },      \
> +       { KVM_TRACE_EXIT_SIGNAL,        "Signal" }
Consider using "Idle" and "Cache" here, to match the style of "Signal"?

And why are the types here different from those in kvm_vcpu_stat?

struct kvm_vcpu_stat {
 struct kvm_vcpu_stat_generic generic;
 u64 idle_exits;
 u64 signal_exits;
 u64 int_exits;
 u64 cpucfg_exits;
};

> +
> +TRACE_EVENT(kvm_exit_gspr,
> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
> +           TP_ARGS(vcpu, inst_word),
> +           TP_STRUCT__entry(
> +                       __field(unsigned int, inst_word)
> +           ),
> +
> +           TP_fast_assign(
> +                       __entry->inst_word = inst_word;
> +           ),
> +
> +           TP_printk("inst word: 0x%08x",
> +                     __entry->inst_word)
> +);
> +
> +
> +DECLARE_EVENT_CLASS(kvm_exit,
> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +           TP_ARGS(vcpu, reason),
> +           TP_STRUCT__entry(
> +                       __field(unsigned long, pc)
> +                       __field(unsigned int, reason)
> +           ),
> +
> +           TP_fast_assign(
> +                       __entry->pc = vcpu->arch.pc;
> +                       __entry->reason = reason;
> +           ),
> +
> +           TP_printk("[%s]PC: 0x%08lx",
> +                     __print_symbolic(__entry->reason,
> +                                      kvm_trace_symbol_exit_types),
> +                     __entry->pc)
> +);
> +
> +DEFINE_EVENT(kvm_exit, kvm_exit_idle,
> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +            TP_ARGS(vcpu, reason));
> +
> +DEFINE_EVENT(kvm_exit, kvm_exit_cache,
> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +            TP_ARGS(vcpu, reason));
> +
> +DEFINE_EVENT(kvm_exit, kvm_exit,
I'm not sure, but should this be DEFINE_EVENT(kvm_exit, kvm_exit_signal),
corresponding to the exit types above?


Huacai

> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
> +            TP_ARGS(vcpu, reason));
> +
> +#define KVM_TRACE_AUX_RESTORE          0
> +#define KVM_TRACE_AUX_SAVE             1
> +#define KVM_TRACE_AUX_ENABLE           2
> +#define KVM_TRACE_AUX_DISABLE          3
> +#define KVM_TRACE_AUX_DISCARD          4
> +
> +#define KVM_TRACE_AUX_FPU              1
> +
> +#define kvm_trace_symbol_aux_op                                \
> +       { KVM_TRACE_AUX_RESTORE,        "restore" },    \
> +       { KVM_TRACE_AUX_SAVE,           "save" },       \
> +       { KVM_TRACE_AUX_ENABLE,         "enable" },     \
> +       { KVM_TRACE_AUX_DISABLE,        "disable" },    \
> +       { KVM_TRACE_AUX_DISCARD,        "discard" }
> +
> +#define kvm_trace_symbol_aux_state                     \
> +       { KVM_TRACE_AUX_FPU,     "FPU" }
> +
> +TRACE_EVENT(kvm_aux,
> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
> +                    unsigned int state),
> +           TP_ARGS(vcpu, op, state),
> +           TP_STRUCT__entry(
> +                       __field(unsigned long, pc)
> +                       __field(u8, op)
> +                       __field(u8, state)
> +           ),
> +
> +           TP_fast_assign(
> +                       __entry->pc = vcpu->arch.pc;
> +                       __entry->op = op;
> +                       __entry->state = state;
> +           ),
> +
> +           TP_printk("%s %s PC: 0x%08lx",
> +                     __print_symbolic(__entry->op,
> +                                      kvm_trace_symbol_aux_op),
> +                     __print_symbolic(__entry->state,
> +                                      kvm_trace_symbol_aux_state),
> +                     __entry->pc)
> +);
> +
> +TRACE_EVENT(kvm_vpid_change,
> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
> +           TP_ARGS(vcpu, vpid),
> +           TP_STRUCT__entry(
> +                       __field(unsigned long, vpid)
> +           ),
> +
> +           TP_fast_assign(
> +                       __entry->vpid = vpid;
> +           ),
> +
> +           TP_printk("vpid: 0x%08lx",
> +                     __entry->vpid)
> +);
> +
> +#endif /* _TRACE_LOONGARCH64_KVM_H */
> +
> +#undef TRACE_INCLUDE_PATH
> +#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm
> +#undef TRACE_INCLUDE_FILE
> +#define TRACE_INCLUDE_FILE trace
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>
> --
> 2.27.0
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers
  2023-08-31  8:29 ` [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers Tianrui Zhao
@ 2023-09-11  9:03   ` Huacai Chen
  2023-09-11 10:03     ` zhaotianrui
  0 siblings, 1 reply; 56+ messages in thread
From: Huacai Chen @ 2023-09-11  9:03 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao

Hi, Tianrui,

On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>
> Implement the LoongArch vcpu get-registers and set-registers operations,
> which are called when user space uses the ioctl interface to get or set regs.
>
> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>  arch/loongarch/kvm/csr_ops.S |  67 ++++++++++++
>  arch/loongarch/kvm/vcpu.c    | 206 +++++++++++++++++++++++++++++++++++
>  2 files changed, 273 insertions(+)
>  create mode 100644 arch/loongarch/kvm/csr_ops.S
>
> diff --git a/arch/loongarch/kvm/csr_ops.S b/arch/loongarch/kvm/csr_ops.S
> new file mode 100644
> index 0000000000..53e44b23a5
> --- /dev/null
> +++ b/arch/loongarch/kvm/csr_ops.S
> @@ -0,0 +1,67 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#include <asm/regdef.h>
> +#include <linux/linkage.h>
> +       .text
> +       .cfi_sections   .debug_frame
> +/*
> + * We have split the hw GCSRs into three parts, so we can
> + * calculate the code offset from the gcsrid and jump here to
> + * run the gcsrwr instruction.
> + */
> +SYM_FUNC_START(set_hw_gcsr)
> +       addi.d      t0,   a0,   0
> +       addi.w      t1,   zero, 96
> +       bltu        t1,   t0,   1f
> +       la.pcrel    t0,   10f
> +       alsl.d      t0,   a0,   t0, 3
> +       jr          t0
> +1:
> +       addi.w      t1,   a0,   -128
> +       addi.w      t2,   zero, 15
> +       bltu        t2,   t1,   2f
> +       la.pcrel    t0,   11f
> +       alsl.d      t0,   t1,   t0, 3
> +       jr          t0
> +2:
> +       addi.w      t1,   a0,   -384
> +       addi.w      t2,   zero, 3
> +       bltu        t2,   t1,   3f
> +       la.pcrel    t0,   12f
> +       alsl.d      t0,   t1,   t0, 3
> +       jr          t0
> +3:
> +       addi.w      a0,   zero, -1
> +       jr          ra
> +
> +/* range from 0x0(KVM_CSR_CRMD) to 0x60(KVM_CSR_LLBCTL) */
> +10:
> +       csrnum = 0
> +       .rept 0x61
> +       gcsrwr a1, csrnum
> +       jr ra
> +       csrnum = csrnum + 1
> +       .endr
> +
> +/* range from 0x80(KVM_CSR_IMPCTL1) to 0x8f(KVM_CSR_TLBRPRMD) */
> +11:
> +       csrnum = 0x80
> +       .rept 0x10
> +       gcsrwr a1, csrnum
> +       jr ra
> +       csrnum = csrnum + 1
> +       .endr
> +
> +/* range from 0x180(KVM_CSR_DMWIN0) to 0x183(KVM_CSR_DMWIN3) */
> +12:
> +       csrnum = 0x180
> +       .rept 0x4
> +       gcsrwr a1, csrnum
> +       jr ra
> +       csrnum = csrnum + 1
> +       .endr
> +
> +SYM_FUNC_END(set_hw_gcsr)
> diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
> index ca4e8d074e..f17422a942 100644
> --- a/arch/loongarch/kvm/vcpu.c
> +++ b/arch/loongarch/kvm/vcpu.c
> @@ -13,6 +13,212 @@
>  #define CREATE_TRACE_POINTS
>  #include "trace.h"
>
> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v)
> +{
> +       unsigned long val;
> +       struct loongarch_csrs *csr = vcpu->arch.csr;
> +
> +       if (get_gcsr_flag(id) & INVALID_GCSR)
> +               return -EINVAL;
> +
> +       if (id == LOONGARCH_CSR_ESTAT) {
> +               /* interrupt status IP0 -- IP7 from GINTC */
> +               val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 0xff;
> +               *v = kvm_read_sw_gcsr(csr, id) | (val << 2);
> +               return 0;
> +       }
> +
> +       /*
> +        * Get the software CSR state if the csrid is valid, since the
> +        * software CSR state is kept consistent with the hardware.
> +        */
After thinking about this for a long time, I found this is wrong.
_kvm_setcsr() does save a software copy of the hardware registers, but
the hardware state can change afterwards. For example, while a VM is
running, it may change the EUEN register if it uses the FPU.

So we should do what we do in our internal repo: _kvm_getcsr() should
read values from hardware for HW_GCSR registers. And we also need a
get_hw_gcsr assembly function.


Huacai

> +       *v = kvm_read_sw_gcsr(csr, id);
> +
> +       return 0;
> +}
> +
> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 val)
> +{
> +       struct loongarch_csrs *csr = vcpu->arch.csr;
> +       int ret = 0, gintc;
> +
> +       if (get_gcsr_flag(id) & INVALID_GCSR)
> +               return -EINVAL;
> +
> +       if (id == LOONGARCH_CSR_ESTAT) {
> +               /* estat IP0~IP7 inject through guestexcept */
> +               gintc = (val >> 2) & 0xff;
> +               write_csr_gintc(gintc);
> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_GINTC, gintc);
> +
> +               gintc = val & ~(0xffUL << 2);
> +               write_gcsr_estat(gintc);
> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_ESTAT, gintc);
> +
> +               return ret;
> +       }
> +
> +       if (get_gcsr_flag(id) & HW_GCSR) {
> +               set_hw_gcsr(id, val);
> +               /* write sw gcsr to keep consistent with hardware */
> +               kvm_write_sw_gcsr(csr, id, val);
> +       } else
> +               kvm_write_sw_gcsr(csr, id, val);
> +
> +       return ret;
> +}
> +
> +static int _kvm_get_one_reg(struct kvm_vcpu *vcpu,
> +               const struct kvm_one_reg *reg, s64 *v)
> +{
> +       int reg_idx, ret = 0;
> +
> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
> +               ret = _kvm_getcsr(vcpu, reg_idx, v);
> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER)
> +               *v = drdtime() + vcpu->kvm->arch.time_offset;
> +       else
> +               ret = -EINVAL;
> +
> +       return ret;
> +}
> +
> +static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> +{
> +       int ret = -EINVAL;
> +       s64 v;
> +
> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
> +               return ret;
> +
> +       if (_kvm_get_one_reg(vcpu, reg, &v))
> +               return ret;
> +
> +       return put_user(v, (u64 __user *)(long)reg->addr);
> +}
> +
> +static int _kvm_set_one_reg(struct kvm_vcpu *vcpu,
> +                       const struct kvm_one_reg *reg,
> +                       s64 v)
> +{
> +       int ret = 0;
> +       unsigned long flags;
> +       u64 val;
> +       int reg_idx;
> +
> +       val = v;
> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
> +               ret = _kvm_setcsr(vcpu, reg_idx, val);
> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER) {
> +               local_irq_save(flags);
> +               /*
> +                * The counter offset is relative to the board, not the vcpu;
> +                * on SMP systems only vcpu 0 sets it.
> +                */
> +               if (vcpu->vcpu_id == 0)
> +                       vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
> +               write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
> +               local_irq_restore(flags);
> +       } else if (reg->id == KVM_REG_LOONGARCH_VCPU_RESET) {
> +               kvm_reset_timer(vcpu);
> +               memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
> +               memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
> +       } else
> +               ret = -EINVAL;
> +
> +       return ret;
> +}
> +
> +static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> +{
> +       s64 v;
> +       int ret = -EINVAL;
> +
> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
> +               return ret;
> +
> +       if (get_user(v, (u64 __user *)(long)reg->addr))
> +               return ret;
> +
> +       return _kvm_set_one_reg(vcpu, reg, v);
> +}
> +
> +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
> +                                 struct kvm_sregs *sregs)
> +{
> +       return -ENOIOCTLCMD;
> +}
> +
> +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
> +                                 struct kvm_sregs *sregs)
> +{
> +       return -ENOIOCTLCMD;
> +}
> +
> +int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> +{
> +       int i;
> +
> +       vcpu_load(vcpu);
> +
> +       for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
> +               regs->gpr[i] = vcpu->arch.gprs[i];
> +
> +       regs->pc = vcpu->arch.pc;
> +
> +       vcpu_put(vcpu);
> +       return 0;
> +}
> +
> +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> +{
> +       int i;
> +
> +       vcpu_load(vcpu);
> +
> +       for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
> +               vcpu->arch.gprs[i] = regs->gpr[i];
> +       vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
> +       vcpu->arch.pc = regs->pc;
> +
> +       vcpu_put(vcpu);
> +       return 0;
> +}
> +
> +long kvm_arch_vcpu_ioctl(struct file *filp,
> +                        unsigned int ioctl, unsigned long arg)
> +{
> +       struct kvm_vcpu *vcpu = filp->private_data;
> +       void __user *argp = (void __user *)arg;
> +       long r;
> +
> +       vcpu_load(vcpu);
> +
> +       switch (ioctl) {
> +       case KVM_SET_ONE_REG:
> +       case KVM_GET_ONE_REG: {
> +               struct kvm_one_reg reg;
> +
> +               r = -EFAULT;
> +               if (copy_from_user(&reg, argp, sizeof(reg)))
> +                       break;
> +               if (ioctl == KVM_SET_ONE_REG)
> +                       r = _kvm_set_reg(vcpu, &reg);
> +               else
> +                       r = _kvm_get_reg(vcpu, &reg);
> +               break;
> +       }
> +       default:
> +               r = -ENOIOCTLCMD;
> +               break;
> +       }
> +
> +       vcpu_put(vcpu);
> +       return r;
> +}
> +
>  int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
>  {
>         return 0;
> --
> 2.27.0
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 00/30] Add KVM LoongArch support
  2023-09-11  4:02 ` [PATCH v20 00/30] Add KVM LoongArch support Huacai Chen
@ 2023-09-11  9:34   ` zhaotianrui
  0 siblings, 0 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-11  9:34 UTC (permalink / raw)
  To: Huacai Chen
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao


On 2023/9/11 at 12:02 PM, Huacai Chen wrote:
> Hi, Tianrui,
>
> I hope this can be the last review and the next version can get upstreamed. :)
Thanks, I am very grateful for your careful review and all the useful 
advice that has made the LoongArch KVM code better.
>
>
> On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>> From: zhaotianrui <zhaotianrui@loongson.cn>
>>
>> This series adds KVM LoongArch support. The Loongson 3A5000 supports
>> hardware-assisted virtualization. With CPU virtualization, there are
>> separate hw-supported user and kernel modes in guest mode. With memory
>> virtualization, there is a two-level hw MMU table for guest mode and
>> host mode. There is also a separate hw CPU timer with constant frequency
>> in guest mode, so that a VM can migrate between hosts with different
>> frequencies. Currently, we are able to boot LoongArch Linux guests.
>>
>> A few key aspects of KVM LoongArch added by this series are:
>> 1. Enable the kvm hardware function when the kvm module is loaded.
>> 2. Implement VM and vcpu related ioctl interfaces such as vcpu create,
>>     vcpu run etc. The GET_ONE_REG/SET_ONE_REG ioctl commands are used
>>     to get and set general registers one by one.
>> 3. Hardware accesses to the MMU, timer and CSRs are emulated in the
>>     kernel.
>> 4. Devices accessed through MMIO and IOCSR, such as the APIC, IPI and
>>     PCI devices, are emulated in user space.
>>
>> The running environment of LoongArch virt machine:
>> 1. Cross tools to build kernel and uefi:
>>     $ wget https://github.com/loongson/build-tools/releases/download/2022.09.06/loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz
> The cross tools should be updated to the latest one, because we need
> binutils 2.41 now.
Thanks, I will update the binutils to the latest version.
>
>>     tar -vxf loongarch64-clfs-6.3-cross-tools-gcc-glibc.tar.xz  -C /opt
>>     export PATH=/opt/cross-tools/bin:$PATH
>>     export LD_LIBRARY_PATH=/opt/cross-tools/lib:$LD_LIBRARY_PATH
>>     export LD_LIBRARY_PATH=/opt/cross-tools/loongarch64-unknown-linux-gnu/lib/:$LD_LIBRARY_PATH
>> 2. This series is based on the linux source code:
>>     https://github.com/loongson/linux-loongarch-kvm
> Please update the base to at least v6.6-rc1.
Thanks, I will update the Linux kernel to the latest version.
>
>>     Build command:
>>     git checkout kvm-loongarch
>>     make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu- loongson3_defconfig
>>     make ARCH=loongarch CROSS_COMPILE=loongarch64-unknown-linux-gnu-
>> 3. QEMU hypervisor with LoongArch supported:
>>     https://github.com/loongson/qemu
> QEMU base should also be updated.
Thanks, I will update QEMU to the latest version.
>
>>     Build command:
>>     git checkout kvm-loongarch
>>     ./configure --target-list="loongarch64-softmmu"  --enable-kvm
>>     make
>> 4. Uefi bios of LoongArch virt machine:
>>     Link: https://github.com/tianocore/edk2-platforms/tree/master/Platform/Loongson/LoongArchQemuPkg#readme
>> 5. You can also access the binary files we have already built:
>>     https://github.com/yangxiaojuan-loongson/qemu-binary
> Update any binaries if needed, too.
Thanks, I will update all the binary files used by KVM to the latest versions.

Thanks
Tianrui Zhao
>
> I will do a full test after v21 of this series, and I hope this can
> move things forward.
>
>
> Huacai
>
>> The command to boot loongarch virt machine:
>>     $ qemu-system-loongarch64 -machine virt -m 4G -cpu la464 \
>>     -smp 1 -bios QEMU_EFI.fd -kernel vmlinuz.efi -initrd ramdisk \
>>     -serial stdio   -monitor telnet:localhost:4495,server,nowait \
>>     -append "root=/dev/ram rdinit=/sbin/init console=ttyS0,115200" \
>>     --nographic
>>
>> changes for v20:
>> 1. Remove the hand-encoded binary forms of the virtualization
>> instructions in insn_def.h and csr_ops.S and directly use the default
>> csrrd, csrwr and csrxchg instructions. Also make CONFIG_KVM depend on
>> AS_HAS_LVZ_EXTENSION, so KVM must be compiled with a binutils that
>> already supports these instructions. This makes the LoongArch KVM
>> code more maintainable and easier to read.
>>
>> changes for v19:
>> 1. Use the common interface xfer_to_guest_mode_handle_work to
>> check conditions before entering the guest.
>> 2. Add vcpu dirty ring support.
>>
>> changes for v18:
>> 1. Code cleanup for vcpu timer: remove unnecessary timer_period_ns,
>> timer_bias, timer_dyn_bias variables in kvm_vcpu_arch and rename
>> the stable_ktime_saved variable to expire.
>> 2. Change the value of KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE to 40.
>>
>> changes for v17:
>> 1. Add CONFIG_AS_HAS_LVZ_EXTENSION config option which depends on
>> binutils that support LVZ assemble instruction.
>> 2. Change kvm mmu related functions, such as rename level2_ptw_pgd
>> to kvm_ptw_pgd, replace kvm_flush_range with kvm_ptw_pgd pagewalk
>> framework, replace kvm_arch.gpa_mm with kvm_arch.pgd, set
>> mark_page_dirty/kvm_set_pfn_dirty out of mmu_lock in kvm page fault
>> handling.
>> 3. Replace kvm_loongarch_interrupt with standard kvm_interrupt
>> when injecting IRQ.
>> 4. Replace vcpu_arch.last_exec_cpu with existing vcpu.cpu, remove
>> kvm_arch.online_vcpus and kvm_arch.is_migrating,
>> 5. Remove EXCCODE_TLBNR and EXCCODE_TLBNX in kvm exception table,
>> since NR/NX bit is not set in kvm page fault handling.
>>
>> Changes for v16:
>> 1. Free the memory allocated for vmcs and kvm_loongarch_ops in kvm
>> module init/exit to avoid memory leaks.
>> 2. Simplify some assembly code in switch.S that had to be written
>> with pseudo-instructions; no other instructions need to be replaced
>> anymore.
>> 3. Add kvm_{save,restore}_guest_gprs macros to replace the ld.d/st.d
>> guest register instructions during vcpu world switch.
>> 4. It is safer to disable irqs when flushing the guest tlb by gpa, so
>> replace preempt_disable with local_irq_save in kvm_flush_tlb_gpa.
>>
>> Changes for v15:
>> 1. Re-order some macros and variables in LoongArch kvm headers, put them
>> together which have the same meaning.
>> 2. Make some function definitions in one line, as it is not needed to split
>> them.
>> 3. Re-name some macros such as KVM_REG_LOONGARCH_GPR.
>>
>> Changes for v14:
>> 1. Remove the macro CONFIG_KVM_GENERIC_HARDWARE_ENABLING in
>> loongarch/kvm/main.c, as it is not useful.
>> 2. Add select KVM_GENERIC_HARDWARE_ENABLING in loongarch/kvm/Kconfig,
>> as it is used by virt/kvm.
>> 3. Fix the LoongArch KVM source link in MAINTAINERS.
>> 4. Improve LoongArch KVM documentation, such as add comment for
>> LoongArch kvm_regs.
>>
>> Changes for v13:
>> 1. Remove patch-28 "Implement probe virtualization when cpu init", as the
>> virtualization information about FPU,PMP,LSX in guest.options,options_dyn
>> is not used and the gcfg reg value can be read in kvm_hardware_enable, so
>> remove the previous cpu_probe_lvz function.
>> 2. Fix the vcpu_enable_cap interface: it should return -EINVAL directly,
>> as the FPU cap is enabled by default and no other caps are supported yet.
>> 3. Simplify the jirl instruction to jr when there is no return address,
>> and simplify the case HW0 ... HW7 statement in interrupt.c.
>> 4. Rename host_stack,host_gp in kvm_vcpu_arch to host_sp,host_tp.
>> 5. Remove 'cpu' parameter in _kvm_check_requests, as 'cpu' is not used,
>> and remove 'cpu' parameter in kvm_check_vmid function, as it can get
>> cpu number by itself.
>>
>> Changes for v12:
>> 1. Improve the gcsr write/read/xchg interface to avoid the previous
>> instruction statements like parse_r and make the code easier to
>> understand; they are implemented in asm/insn-def.h and the instructions
>> consist of "opcode" "rj" "rd" "simm14" arguments.
>> 2. Fix the maintainers list of LoongArch KVM.
>>
>> Changes for v11:
>> 1. Add maintainers for LoongArch KVM.
>>
>> Changes for v10:
>> 1. Fix grammatical problems in LoongArch documentation.
>> 2. It is not necessary to save or restore the LOONGARCH_CSR_PGD when
>> vcpu put and vcpu load, so we remove it.
>>
>> Changes for v9:
>> 1. Apply the new defined interrupt number macros in loongarch.h to kvm,
>> such as INT_SWI0, INT_HWI0, INT_TI, INT_IPI, etc. And remove the
>> previous unused macros.
>> 2. Remove unused variables in kvm_vcpu_arch, and reorder the variables
>> to make them more standard.
>>
>> Changes for v8:
>> 1. Adjust the cpu_data.guest.options structure, add the ases flag into
>> it, and remove the previous guest.ases. We do this to keep consistent
>> with host cpu_data.options structure.
>> 2. Remove the "#include <asm/kvm_host.h>" in some files which also
>> include "<linux/kvm_host.h>", as linux/kvm_host.h already includes
>> asm/kvm_host.h.
>> 3. Fix some nonstandard spellings and grammar errors in comments, and
>> improve the code format a little to make it cleaner and more standard.
>>
>> Changes for v7:
>> 1. Fix the kvm_save/restore_hw_gcsr compiling warnings reported by
>> kernel test robot. The report link is:
>> https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
>> 2. Fix loongarch kvm trace related compiling problems.
>>
>> Changes for v6:
>> 1. Fix the Documentation/virt/kvm/api.rst compile warning about
>> loongarch parts.
>>
>> Changes for v5:
>> 1. Implement the get/set mp_state ioctl interface; only the
>> KVM_MP_STATE_RUNNABLE state is supported now, and other states
>> will be completed in the future. The state is also used when the vcpu
>> runs the idle instruction: if the vcpu state is changed to RUNNABLE,
>> the vcpu can be woken up.
>> 2. Supplement kvm document about loongarch-specific part, such as add
>> api introduction for GET/SET_ONE_REG, GET/SET_FPU, GET/SET_MP_STATE,
>> etc.
>> 3. Improve the kvm_switch_to_guest function in switch.S: remove the
>> previous tmp,tmp1 arguments and replace them with the t0,t1 registers.
>>
>> Changes for v4:
>> 1. Add a csr_need_update flag in _vcpu_put, as most csr registers stay
>> unchanged during a process context switch, so we need not update them
>> every time. We do this only if the soft csr differs from the hardware.
>> That is to say, all csrs should be updated after the vcpu enters the
>> guest; as for set_csr_ioctl, we have written the soft csr to keep it
>> consistent with the hardware.
>> 2. Improve get/set_csr_ioctl interface, we set SW or HW or INVALID flag
>> for all csrs according to it's features when kvm init. In get/set_csr_ioctl,
>> if csr is HW, we use gcsrrd/ gcsrwr instruction to access it, else if csr is
>> SW, we use software to emulate it, and others return false.
>> 3. Add set_hw_gcsr function in csr_ops.S, and it is used in set_csr_ioctl.
>> We have split the hw gcsr into three parts, so we can calculate the code
>> offset by gcsrid and jump there to run the gcsrwr instruction. We use this
>> function to make the code simpler and avoid using the previous
>> SET_HW_GCSR(XXX) interface.
>> 4. Improve kvm mmu functions, such as flush page table and make clean page table
>> interface.
>>
>> Changes for v3:
>> 1. Remove the vpid array list in kvm_vcpu_arch and use a vpid variable here,
>> because a vpid will never be recycled if a vCPU migrates from physical CPU A
>> to B and back to A.
>> 2. Make some constant variables in kvm_context to global such as vpid_mask,
>> guest_eentry, enter_guest, etc.
>> 3. Add some new tracepoints, such as kvm_trace_idle, kvm_trace_cache,
>> kvm_trace_gspr, etc.
>> 4. There is some duplicate code in kvm_handle_exit and kvm_vcpu_run,
>> so we move it to a new function kvm_pre_enter_guest.
>> 5. Change the RESUME_HOST, RESUME_GUEST value, return 1 for resume guest
>> and "<= 0" for resume host.
>> 6. Fcsr and fpu registers are saved/restored together.
>>
>> Changes for v2:
>> 1. Separate the original patch-01 and patch-03 into small patches, and the
>> patches mainly contain kvm module init, module exit, vcpu create, vcpu run,
>> etc.
>> 2. Remove the original KVM_{GET,SET}_CSRS ioctl in the kvm uapi header,
>> and we use the common KVM_{GET,SET}_ONE_REG to access registers.
>> 3. Use BIT(x) to replace the "1 << n_bits" statement.
>>
>> Tianrui Zhao (30):
>>    LoongArch: KVM: Add kvm related header files
>>    LoongArch: KVM: Implement kvm module related interface
>>    LoongArch: KVM: Implement kvm hardware enable, disable interface
>>    LoongArch: KVM: Implement VM related functions
>>    LoongArch: KVM: Add vcpu related header files
>>    LoongArch: KVM: Implement vcpu create and destroy interface
>>    LoongArch: KVM: Implement vcpu run interface
>>    LoongArch: KVM: Implement vcpu handle exit interface
>>    LoongArch: KVM: Implement vcpu get, vcpu set registers
>>    LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface
>>    LoongArch: KVM: Implement fpu related operations for vcpu
>>    LoongArch: KVM: Implement vcpu interrupt operations
>>    LoongArch: KVM: Implement misc vcpu related interfaces
>>    LoongArch: KVM: Implement vcpu load and vcpu put operations
>>    LoongArch: KVM: Implement vcpu status description
>>    LoongArch: KVM: Implement update VM id function
>>    LoongArch: KVM: Implement virtual machine tlb operations
>>    LoongArch: KVM: Implement vcpu timer operations
>>    LoongArch: KVM: Implement kvm mmu operations
>>    LoongArch: KVM: Implement handle csr exception
>>    LoongArch: KVM: Implement handle iocsr exception
>>    LoongArch: KVM: Implement handle idle exception
>>    LoongArch: KVM: Implement handle gspr exception
>>    LoongArch: KVM: Implement handle mmio exception
>>    LoongArch: KVM: Implement handle fpu exception
>>    LoongArch: KVM: Implement kvm exception vector
>>    LoongArch: KVM: Implement vcpu world switch
>>    LoongArch: KVM: Enable kvm config and add the makefile
>>    LoongArch: KVM: Supplement kvm document about LoongArch-specific part
>>    LoongArch: KVM: Add maintainers for LoongArch KVM
>>
>>   Documentation/virt/kvm/api.rst             |  70 +-
>>   MAINTAINERS                                |  12 +
>>   arch/loongarch/Kbuild                      |   1 +
>>   arch/loongarch/Kconfig                     |   3 +
>>   arch/loongarch/configs/loongson3_defconfig |   2 +
>>   arch/loongarch/include/asm/inst.h          |  16 +
>>   arch/loongarch/include/asm/kvm_csr.h       | 222 +++++
>>   arch/loongarch/include/asm/kvm_host.h      | 238 ++++++
>>   arch/loongarch/include/asm/kvm_types.h     |  11 +
>>   arch/loongarch/include/asm/kvm_vcpu.h      |  95 +++
>>   arch/loongarch/include/asm/loongarch.h     |  19 +-
>>   arch/loongarch/include/uapi/asm/kvm.h      | 101 +++
>>   arch/loongarch/kernel/asm-offsets.c        |  32 +
>>   arch/loongarch/kvm/Kconfig                 |  45 ++
>>   arch/loongarch/kvm/Makefile                |  22 +
>>   arch/loongarch/kvm/csr_ops.S               |  67 ++
>>   arch/loongarch/kvm/exit.c                  | 702 ++++++++++++++++
>>   arch/loongarch/kvm/interrupt.c             | 113 +++
>>   arch/loongarch/kvm/main.c                  | 361 +++++++++
>>   arch/loongarch/kvm/mmu.c                   | 678 ++++++++++++++++
>>   arch/loongarch/kvm/switch.S                | 255 ++++++
>>   arch/loongarch/kvm/timer.c                 | 200 +++++
>>   arch/loongarch/kvm/tlb.c                   |  34 +
>>   arch/loongarch/kvm/trace.h                 | 168 ++++
>>   arch/loongarch/kvm/vcpu.c                  | 898 +++++++++++++++++++++
>>   arch/loongarch/kvm/vm.c                    |  76 ++
>>   arch/loongarch/kvm/vmid.c                  |  66 ++
>>   include/uapi/linux/kvm.h                   |   9 +
>>   28 files changed, 4502 insertions(+), 14 deletions(-)
>>   create mode 100644 arch/loongarch/include/asm/kvm_csr.h
>>   create mode 100644 arch/loongarch/include/asm/kvm_host.h
>>   create mode 100644 arch/loongarch/include/asm/kvm_types.h
>>   create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>>   create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
>>   create mode 100644 arch/loongarch/kvm/Kconfig
>>   create mode 100644 arch/loongarch/kvm/Makefile
>>   create mode 100644 arch/loongarch/kvm/csr_ops.S
>>   create mode 100644 arch/loongarch/kvm/exit.c
>>   create mode 100644 arch/loongarch/kvm/interrupt.c
>>   create mode 100644 arch/loongarch/kvm/main.c
>>   create mode 100644 arch/loongarch/kvm/mmu.c
>>   create mode 100644 arch/loongarch/kvm/switch.S
>>   create mode 100644 arch/loongarch/kvm/timer.c
>>   create mode 100644 arch/loongarch/kvm/tlb.c
>>   create mode 100644 arch/loongarch/kvm/trace.h
>>   create mode 100644 arch/loongarch/kvm/vcpu.c
>>   create mode 100644 arch/loongarch/kvm/vm.c
>>   create mode 100644 arch/loongarch/kvm/vmid.c
>>
>> --
>> 2.27.0
>>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files
  2023-09-11  4:59   ` Huacai Chen
@ 2023-09-11  9:41     ` zhaotianrui
  0 siblings, 0 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-11  9:41 UTC (permalink / raw)
  To: Huacai Chen
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao


On 2023/9/11 at 12:59 PM, Huacai Chen wrote:
> Hi, Tianrui,
>
> On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>> Add LoongArch KVM related header files, including kvm.h,
>> kvm_host.h, kvm_types.h. All of those are about LoongArch
>> virtualization features and kvm interfaces.
>>
>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   arch/loongarch/include/asm/kvm_host.h  | 238 +++++++++++++++++++++++++
>>   arch/loongarch/include/asm/kvm_types.h |  11 ++
>>   arch/loongarch/include/uapi/asm/kvm.h  | 101 +++++++++++
>>   include/uapi/linux/kvm.h               |   9 +
>>   4 files changed, 359 insertions(+)
>>   create mode 100644 arch/loongarch/include/asm/kvm_host.h
>>   create mode 100644 arch/loongarch/include/asm/kvm_types.h
>>   create mode 100644 arch/loongarch/include/uapi/asm/kvm.h
>>
>> diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
>> new file mode 100644
>> index 0000000000..9f23ddaaae
>> --- /dev/null
>> +++ b/arch/loongarch/include/asm/kvm_host.h
>> @@ -0,0 +1,238 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#ifndef __ASM_LOONGARCH_KVM_HOST_H__
>> +#define __ASM_LOONGARCH_KVM_HOST_H__
>> +
>> +#include <linux/cpumask.h>
>> +#include <linux/mutex.h>
>> +#include <linux/hrtimer.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/types.h>
>> +#include <linux/kvm.h>
>> +#include <linux/kvm_types.h>
>> +#include <linux/threads.h>
>> +#include <linux/spinlock.h>
>> +
>> +#include <asm/inst.h>
>> +#include <asm/loongarch.h>
>> +
>> +/* Loongarch KVM register ids */
>> +#define LOONGARCH_CSR_32(_R, _S)       \
>> +       (KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U32 | (8 * (_R) + (_S)))
>> +
>> +#define LOONGARCH_CSR_64(_R, _S)       \
>> +       (KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | (8 * (_R) + (_S)))
>> +
>> +#define KVM_IOC_CSRID(id)              LOONGARCH_CSR_64(id, 0)
>> +#define KVM_GET_IOC_CSRIDX(id)         ((id & KVM_CSR_IDX_MASK) >> 3)
>> +
>> +#define KVM_MAX_VCPUS                  256
>> +/* memory slots that are not exposed to userspace */
>> +#define KVM_PRIVATE_MEM_SLOTS          0
>> +
>> +#define KVM_HALT_POLL_NS_DEFAULT       500000
>> +
>> +struct kvm_vm_stat {
>> +       struct kvm_vm_stat_generic generic;
>> +};
>> +
>> +struct kvm_vcpu_stat {
>> +       struct kvm_vcpu_stat_generic generic;
>> +       u64 idle_exits;
>> +       u64 signal_exits;
>> +       u64 int_exits;
>> +       u64 cpucfg_exits;
>> +};
>> +
>> +struct kvm_arch_memory_slot {
>> +};
>> +
>> +struct kvm_context {
>> +       unsigned long vpid_cache;
>> +       struct kvm_vcpu *last_vcpu;
>> +};
>> +
>> +struct kvm_world_switch {
>> +       int (*guest_eentry)(void);
>> +       int (*enter_guest)(struct kvm_run *run, struct kvm_vcpu *vcpu);
>> +       unsigned long page_order;
>> +};
>> +
>> +struct kvm_arch {
>> +       /* Guest physical mm */
>> +       pgd_t *pgd;
>> +       unsigned long gpa_size;
>> +
>> +       s64 time_offset;
>> +       struct kvm_context __percpu *vmcs;
>> +};
>> +
>> +#define CSR_MAX_NUMS           0x800
>> +
>> +struct loongarch_csrs {
>> +       unsigned long csrs[CSR_MAX_NUMS];
>> +};
>> +
>> +/* Resume Flags */
>> +#define RESUME_HOST            0
>> +#define RESUME_GUEST           1
>> +
>> +enum emulation_result {
>> +       EMULATE_DONE,           /* no further processing */
>> +       EMULATE_DO_MMIO,        /* kvm_run filled with MMIO request */
>> +       EMULATE_FAIL,           /* can't emulate this instruction */
>> +       EMULATE_EXCEPT,         /* A guest exception has been generated */
>> +       EMULATE_DO_IOCSR,       /* handle IOCSR request */
>> +};
>> +
>> +#define KVM_LARCH_CSR          (0x1 << 1)
>> +#define KVM_LARCH_FPU          (0x1 << 0)
>> +
>> +struct kvm_vcpu_arch {
>> +       /*
>> +        * Switch pointer-to-function type to unsigned long
>> +        * for loading the value into register directly.
>> +        */
>> +       unsigned long host_eentry;
>> +       unsigned long guest_eentry;
>> +
>> +       /* Pointers stored here for easy accessing from assembly code */
>> +       int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
>> +
>> +       /* Host registers preserved across guest mode execution */
>> +       unsigned long host_sp;
>> +       unsigned long host_tp;
>> +       unsigned long host_pgd;
>> +
>> +       /* Host CSRs are used when handling exits from guest */
>> +       unsigned long badi;
>> +       unsigned long badv;
>> +       unsigned long host_ecfg;
>> +       unsigned long host_estat;
>> +       unsigned long host_percpu;
>> +
>> +       /* GPRs */
>> +       unsigned long gprs[32];
>> +       unsigned long pc;
>> +
>> +       /* Which auxiliary state is loaded (KVM_LOONGARCH_AUX_*) */
>> +       unsigned int aux_inuse;
>> +       /* FPU state */
>> +       struct loongarch_fpu fpu FPU_ALIGN;
>> +
>> +       /* CSR state */
>> +       struct loongarch_csrs *csr;
>> +
>> +       /* GPR used as IO source/target */
>> +       u32 io_gpr;
>> +
>> +       struct hrtimer swtimer;
>> +       /* KVM register to control count timer */
>> +       u32 count_ctl;
>> +
>> +       /* Bitmask of exceptions that are pending */
>> +       unsigned long irq_pending;
>> +       /* Bitmask of pending exceptions to be cleared */
>> +       unsigned long irq_clear;
>> +
>> +       /* Cache for pages needed inside spinlock regions */
>> +       struct kvm_mmu_memory_cache mmu_page_cache;
>> +
>> +       /* vcpu's vpid */
>> +       u64 vpid;
>> +
>> +       /* Frequency of stable timer in Hz */
>> +       u64 timer_mhz;
>> +       ktime_t expire;
>> +
>> +       u64 core_ext_ioisr[4];
>> +
>> +       /* Last CPU the vCPU state was loaded on */
>> +       int last_sched_cpu;
>> +       /* mp state */
>> +       struct kvm_mp_state mp_state;
>> +};
>> +
>> +static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg)
>> +{
>> +       return csr->csrs[reg];
>> +}
>> +
>> +static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg, unsigned long val)
>> +{
>> +       csr->csrs[reg] = val;
>> +}
>> +
>> +/* Helpers */
>> +static inline bool _kvm_guest_has_fpu(struct kvm_vcpu_arch *arch)
>> +{
>> +       return cpu_has_fpu;
>> +}
>> +
>> +void _kvm_init_fault(void);
> Can we use kvm_guest_has_fpu and kvm_init_fault? Don't prefix with _
> unless you have a special reason. For example, static internal
> functions can be prefixed.
Thanks, I will remove the '_' prefix.
>
>> +
>> +/* Debug: dump vcpu state */
>> +int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
>> +
>> +/* MMU handling */
>> +int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write);
>> +void kvm_flush_tlb_all(void);
>> +void _kvm_destroy_mm(struct kvm *kvm);
> The same as before, and maybe you can check other patches for the same issue.
>
>
> Huacai
Thanks, I will check the other patches for the same '_'-prefix problems.

Thanks
Tianrui Zhao
>
>> +pgd_t *kvm_pgd_alloc(void);
>> +
>> +#define KVM_ARCH_WANT_MMU_NOTIFIER
>> +int kvm_unmap_hva_range(struct kvm *kvm,
>> +                       unsigned long start, unsigned long end, bool blockable);
>> +void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
>> +int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
>> +int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
>> +
>> +static inline void update_pc(struct kvm_vcpu_arch *arch)
>> +{
>> +       arch->pc += 4;
>> +}
>> +
>> +/**
>> + * kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault.
>> + * @arch:      vCPU architecture state.
>> + *
>> + * Returns:    Whether the TLBL exception was likely due to an instruction
>> + *             fetch fault rather than a data load fault.
>> + */
>> +static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
>> +{
>> +       return arch->pc == arch->badv;
>> +}
>> +
>> +/* Misc */
>> +static inline void kvm_arch_hardware_unsetup(void) {}
>> +static inline void kvm_arch_sync_events(struct kvm *kvm) {}
>> +static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
>> +static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
>> +static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
>> +static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
>> +static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
>> +static inline void kvm_arch_free_memslot(struct kvm *kvm,
>> +                                  struct kvm_memory_slot *slot) {}
>> +void _kvm_check_vmid(struct kvm_vcpu *vcpu);
>> +enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer);
>> +int kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa);
>> +void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
>> +                                       const struct kvm_memory_slot *memslot);
>> +void kvm_init_vmcs(struct kvm *kvm);
>> +void kvm_vector_entry(void);
>> +int  kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu);
>> +extern const unsigned long kvm_vector_size;
>> +extern const unsigned long kvm_enter_guest_size;
>> +extern unsigned long vpid_mask;
>> +extern struct kvm_world_switch *kvm_loongarch_ops;
>> +
>> +#define SW_GCSR                (1 << 0)
>> +#define HW_GCSR                (1 << 1)
>> +#define INVALID_GCSR   (1 << 2)
>> +int get_gcsr_flag(int csr);
>> +extern void set_hw_gcsr(int csr_id, unsigned long val);
>> +#endif /* __ASM_LOONGARCH_KVM_HOST_H__ */
>> diff --git a/arch/loongarch/include/asm/kvm_types.h b/arch/loongarch/include/asm/kvm_types.h
>> new file mode 100644
>> index 0000000000..2fe1d4bdff
>> --- /dev/null
>> +++ b/arch/loongarch/include/asm/kvm_types.h
>> @@ -0,0 +1,11 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#ifndef _ASM_LOONGARCH_KVM_TYPES_H
>> +#define _ASM_LOONGARCH_KVM_TYPES_H
>> +
>> +#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE      40
>> +
>> +#endif /* _ASM_LOONGARCH_KVM_TYPES_H */
>> diff --git a/arch/loongarch/include/uapi/asm/kvm.h b/arch/loongarch/include/uapi/asm/kvm.h
>> new file mode 100644
>> index 0000000000..7ec2f34018
>> --- /dev/null
>> +++ b/arch/loongarch/include/uapi/asm/kvm.h
>> @@ -0,0 +1,101 @@
>> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#ifndef __UAPI_ASM_LOONGARCH_KVM_H
>> +#define __UAPI_ASM_LOONGARCH_KVM_H
>> +
>> +#include <linux/types.h>
>> +
>> +/*
>> + * KVM Loongarch specific structures and definitions.
>> + *
>> + * Some parts derived from the x86 version of this file.
>> + */
>> +
>> +#define __KVM_HAVE_READONLY_MEM
>> +
>> +#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
>> +#define KVM_DIRTY_LOG_PAGE_OFFSET      64
>> +
>> +/*
>> + * for KVM_GET_REGS and KVM_SET_REGS
>> + */
>> +struct kvm_regs {
>> +       /* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
>> +       __u64 gpr[32];
>> +       __u64 pc;
>> +};
>> +
>> +/*
>> + * for KVM_GET_FPU and KVM_SET_FPU
>> + */
>> +struct kvm_fpu {
>> +       __u32 fcsr;
>> +       __u64 fcc;    /* 8x8 */
>> +       struct kvm_fpureg {
>> +               __u64 val64[4];
>> +       } fpr[32];
>> +};
>> +
>> +/*
>> + * For LoongArch, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various
>> + * registers.  The id field is broken down as follows:
>> + *
>> + *  bits[63..52] - As per linux/kvm.h
>> + *  bits[51..32] - Must be zero.
>> + *  bits[31..16] - Register set.
>> + *
>> + * Register set = 0: GP registers from kvm_regs (see definitions below).
>> + *
>> + * Register set = 1: CSR registers.
>> + *
>> + * Register set = 2: KVM specific registers (see definitions below).
>> + *
>> + * Register set = 3: FPU / SIMD registers (see definitions below).
>> + *
>> + * Other register sets may be added in the future.  Each set would
>> + * have its own identifier in bits[31..16].
>> + */
>> +
>> +#define KVM_REG_LOONGARCH_GPR          (KVM_REG_LOONGARCH | 0x00000ULL)
>> +#define KVM_REG_LOONGARCH_CSR          (KVM_REG_LOONGARCH | 0x10000ULL)
>> +#define KVM_REG_LOONGARCH_KVM          (KVM_REG_LOONGARCH | 0x20000ULL)
>> +#define KVM_REG_LOONGARCH_FPU          (KVM_REG_LOONGARCH | 0x30000ULL)
>> +#define KVM_REG_LOONGARCH_MASK         (KVM_REG_LOONGARCH | 0x30000ULL)
>> +#define KVM_CSR_IDX_MASK               (0x10000 - 1)
>> +
>> +/*
>> + * KVM_REG_LOONGARCH_KVM - KVM specific control registers.
>> + */
>> +
>> +#define KVM_REG_LOONGARCH_COUNTER      (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 3)
>> +#define KVM_REG_LOONGARCH_VCPU_RESET   (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 4)
>> +
>> +struct kvm_debug_exit_arch {
>> +};
>> +
>> +/* for KVM_SET_GUEST_DEBUG */
>> +struct kvm_guest_debug_arch {
>> +};
>> +
>> +/* definition of registers in kvm_run */
>> +struct kvm_sync_regs {
>> +};
>> +
>> +/* dummy definition */
>> +struct kvm_sregs {
>> +};
>> +
>> +struct kvm_iocsr_entry {
>> +       __u32 addr;
>> +       __u32 pad;
>> +       __u64 data;
>> +};
>> +
>> +#define KVM_NR_IRQCHIPS                1
>> +#define KVM_IRQCHIP_NUM_PINS   64
>> +#define KVM_MAX_CORES          256
>> +
>> +#endif /* __UAPI_ASM_LOONGARCH_KVM_H */
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index f089ab2909..1184171224 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -264,6 +264,7 @@ struct kvm_xen_exit {
>>   #define KVM_EXIT_RISCV_SBI        35
>>   #define KVM_EXIT_RISCV_CSR        36
>>   #define KVM_EXIT_NOTIFY           37
>> +#define KVM_EXIT_LOONGARCH_IOCSR  38
>>
>>   /* For KVM_EXIT_INTERNAL_ERROR */
>>   /* Emulate instruction failed. */
>> @@ -336,6 +337,13 @@ struct kvm_run {
>>                          __u32 len;
>>                          __u8  is_write;
>>                  } mmio;
>> +               /* KVM_EXIT_LOONGARCH_IOCSR */
>> +               struct {
>> +                       __u64 phys_addr;
>> +                       __u8  data[8];
>> +                       __u32 len;
>> +                       __u8  is_write;
>> +               } iocsr_io;
>>                  /* KVM_EXIT_HYPERCALL */
>>                  struct {
>>                          __u64 nr;
>> @@ -1362,6 +1370,7 @@ struct kvm_dirty_tlb {
>>   #define KVM_REG_ARM64          0x6000000000000000ULL
>>   #define KVM_REG_MIPS           0x7000000000000000ULL
>>   #define KVM_REG_RISCV          0x8000000000000000ULL
>> +#define KVM_REG_LOONGARCH      0x9000000000000000ULL
>>
>>   #define KVM_REG_SIZE_SHIFT     52
>>   #define KVM_REG_SIZE_MASK      0x00f0000000000000ULL
>> --
>> 2.27.0
>>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function
  2023-08-31  8:30 ` [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function Tianrui Zhao
@ 2023-09-11 10:00   ` Huacai Chen
  2023-09-11 10:23     ` bibo mao
  0 siblings, 1 reply; 56+ messages in thread
From: Huacai Chen @ 2023-09-11 10:00 UTC (permalink / raw)
  To: Tianrui Zhao
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao

Hi, Tianrui,

On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>
> Implement KVM vmid checking and updating; the vmid should be checked before
> a vcpu enters guest mode.
>
> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> ---
>  arch/loongarch/kvm/vmid.c | 66 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 arch/loongarch/kvm/vmid.c
>
> diff --git a/arch/loongarch/kvm/vmid.c b/arch/loongarch/kvm/vmid.c
> new file mode 100644
> index 0000000000..fc25ddc3b7
> --- /dev/null
> +++ b/arch/loongarch/kvm/vmid.c
> @@ -0,0 +1,66 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> + */
> +
> +#include <linux/kvm_host.h>
> +#include "trace.h"
> +
> +static void _kvm_update_vpid(struct kvm_vcpu *vcpu, int cpu)
> +{
> +       struct kvm_context *context;
> +       unsigned long vpid;
> +
> +       context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
> +       vpid = context->vpid_cache + 1;
> +       if (!(vpid & vpid_mask)) {
> +               /* finish round of 64 bit loop */
> +               if (unlikely(!vpid))
> +                       vpid = vpid_mask + 1;
> +
> +               /* vpid 0 reserved for root */
> +               ++vpid;
> +
> +               /* start new vpid cycle */
> +               kvm_flush_tlb_all();
> +       }
> +
> +       context->vpid_cache = vpid;
> +       vcpu->arch.vpid = vpid;
> +}
> +
> +void _kvm_check_vmid(struct kvm_vcpu *vcpu)
> +{
> +       struct kvm_context *context;
> +       bool migrated;
> +       unsigned long ver, old, vpid;
> +       int cpu;
> +
> +       cpu = smp_processor_id();
> +       /*
> +        * Are we entering guest context on a different CPU to last time?
> +        * If so, the vCPU's guest TLB state on this CPU may be stale.
> +        */
> +       context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
> +       migrated = (vcpu->cpu != cpu);
> +
> +       /*
> +        * Check if our vpid is of an older version
> +        *
> +        * We also discard the stored vpid if we've executed on
> +        * another CPU, as the guest mappings may have changed without
> +        * hypervisor knowledge.
> +        */
> +       ver = vcpu->arch.vpid & ~vpid_mask;
> +       old = context->vpid_cache  & ~vpid_mask;
> +       if (migrated || (ver != old)) {
> +               _kvm_update_vpid(vcpu, cpu);
> +               trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
> +               vcpu->cpu = cpu;
> +       }
> +
> +       /* Restore GSTAT(0x50).vpid */
> +       vpid = (vcpu->arch.vpid & vpid_mask)
> +               << CSR_GSTAT_GID_SHIFT;
> +       change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
> +}
I believe that vpid and vmid are both GID in the gstat register, so
please unify their names. And I think vpid is better than vmid.

Moreover, there is no need to create a vmid.c file; just putting them in main.c is OK.

Huacai

> --
> 2.27.0
>


* Re: [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers
  2023-09-11  9:03   ` Huacai Chen
@ 2023-09-11 10:03     ` zhaotianrui
  2023-09-11 10:13       ` zhaotianrui
  2023-09-11 11:49       ` Huacai Chen
  0 siblings, 2 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-11 10:03 UTC (permalink / raw)
  To: Huacai Chen
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao


On 2023/9/11 5:03 PM, Huacai Chen wrote:
> Hi, Tianrui,
>
> On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>> Implement the LoongArch vcpu get-registers and set-registers operations, which
>> are called when user space uses the ioctl interface to get or set regs.
>>
>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   arch/loongarch/kvm/csr_ops.S |  67 ++++++++++++
>>   arch/loongarch/kvm/vcpu.c    | 206 +++++++++++++++++++++++++++++++++++
>>   2 files changed, 273 insertions(+)
>>   create mode 100644 arch/loongarch/kvm/csr_ops.S
>>
>> diff --git a/arch/loongarch/kvm/csr_ops.S b/arch/loongarch/kvm/csr_ops.S
>> new file mode 100644
>> index 0000000000..53e44b23a5
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/csr_ops.S
>> @@ -0,0 +1,67 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#include <asm/regdef.h>
>> +#include <linux/linkage.h>
>> +       .text
>> +       .cfi_sections   .debug_frame
>> +/*
>> + * we have split the hw gcsrs into three parts, so we can
>> + * calculate the code offset from the gcsrid and jump here to
>> + * run the gcsrwr instruction.
>> + */
>> +SYM_FUNC_START(set_hw_gcsr)
>> +       addi.d      t0,   a0,   0
>> +       addi.w      t1,   zero, 96
>> +       bltu        t1,   t0,   1f
>> +       la.pcrel    t0,   10f
>> +       alsl.d      t0,   a0,   t0, 3
>> +       jr          t0
>> +1:
>> +       addi.w      t1,   a0,   -128
>> +       addi.w      t2,   zero, 15
>> +       bltu        t2,   t1,   2f
>> +       la.pcrel    t0,   11f
>> +       alsl.d      t0,   t1,   t0, 3
>> +       jr          t0
>> +2:
>> +       addi.w      t1,   a0,   -384
>> +       addi.w      t2,   zero, 3
>> +       bltu        t2,   t1,   3f
>> +       la.pcrel    t0,   12f
>> +       alsl.d      t0,   t1,   t0, 3
>> +       jr          t0
>> +3:
>> +       addi.w      a0,   zero, -1
>> +       jr          ra
>> +
>> +/* range from 0x0(KVM_CSR_CRMD) to 0x60(KVM_CSR_LLBCTL) */
>> +10:
>> +       csrnum = 0
>> +       .rept 0x61
>> +       gcsrwr a1, csrnum
>> +       jr ra
>> +       csrnum = csrnum + 1
>> +       .endr
>> +
>> +/* range from 0x80(KVM_CSR_IMPCTL1) to 0x8f(KVM_CSR_TLBRPRMD) */
>> +11:
>> +       csrnum = 0x80
>> +       .rept 0x10
>> +       gcsrwr a1, csrnum
>> +       jr ra
>> +       csrnum = csrnum + 1
>> +       .endr
>> +
>> +/* range from 0x180(KVM_CSR_DMWIN0) to 0x183(KVM_CSR_DMWIN3) */
>> +12:
>> +       csrnum = 0x180
>> +       .rept 0x4
>> +       gcsrwr a1, csrnum
>> +       jr ra
>> +       csrnum = csrnum + 1
>> +       .endr
>> +
>> +SYM_FUNC_END(set_hw_gcsr)
>> diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
>> index ca4e8d074e..f17422a942 100644
>> --- a/arch/loongarch/kvm/vcpu.c
>> +++ b/arch/loongarch/kvm/vcpu.c
>> @@ -13,6 +13,212 @@
>>   #define CREATE_TRACE_POINTS
>>   #include "trace.h"
>>
>> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v)
>> +{
>> +       unsigned long val;
>> +       struct loongarch_csrs *csr = vcpu->arch.csr;
>> +
>> +       if (get_gcsr_flag(id) & INVALID_GCSR)
>> +               return -EINVAL;
>> +
>> +       if (id == LOONGARCH_CSR_ESTAT) {
>> +               /* interrupt status IP0 -- IP7 from GINTC */
>> +               val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 0xff;
>> +               *v = kvm_read_sw_gcsr(csr, id) | (val << 2);
>> +               return 0;
>> +       }
>> +
>> +       /*
>> +        * get software csr state if csrid is valid, since software
>> +        * csr state is consistent with hardware
>> +        */
> After thinking about this for a long time, I found this is wrong. Of course
> _kvm_setcsr() saves a software copy of the hardware registers, but the
> hardware status will change. For example, while a VM is running, it may
> change the EUEN register if it uses the FPU.
>
> So, we should do things like what we do in our internal repo,
> _kvm_getcsr() should get values from hardware for HW_GCSR registers.
> And we also need a get_hw_gcsr assembly function.
>
>
> Huacai
This is an asynchronous vcpu ioctl action; that is to say, this action
takes place in the vcpu thread after the vcpu gets out of guest mode, and the
guest registers have been saved in software, so we can return the software
register value when getting a guest CSR.

Thanks
Tianrui Zhao
>
>> +       *v = kvm_read_sw_gcsr(csr, id);
>> +
>> +       return 0;
>> +}
>> +
>> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 val)
>> +{
>> +       struct loongarch_csrs *csr = vcpu->arch.csr;
>> +       int ret = 0, gintc;
>> +
>> +       if (get_gcsr_flag(id) & INVALID_GCSR)
>> +               return -EINVAL;
>> +
>> +       if (id == LOONGARCH_CSR_ESTAT) {
>> +               /* estat IP0~IP7 inject through guestexcept */
>> +               gintc = (val >> 2) & 0xff;
>> +               write_csr_gintc(gintc);
>> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_GINTC, gintc);
>> +
>> +               gintc = val & ~(0xffUL << 2);
>> +               write_gcsr_estat(gintc);
>> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_ESTAT, gintc);
>> +
>> +               return ret;
>> +       }
>> +
>> +       if (get_gcsr_flag(id) & HW_GCSR) {
>> +               set_hw_gcsr(id, val);
>> +               /* write sw gcsr to keep consistent with hardware */
>> +               kvm_write_sw_gcsr(csr, id, val);
>> +       } else
>> +               kvm_write_sw_gcsr(csr, id, val);
>> +
>> +       return ret;
>> +}
>> +
>> +static int _kvm_get_one_reg(struct kvm_vcpu *vcpu,
>> +               const struct kvm_one_reg *reg, s64 *v)
>> +{
>> +       int reg_idx, ret = 0;
>> +
>> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
>> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
>> +               ret = _kvm_getcsr(vcpu, reg_idx, v);
>> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER)
>> +               *v = drdtime() + vcpu->kvm->arch.time_offset;
>> +       else
>> +               ret = -EINVAL;
>> +
>> +       return ret;
>> +}
>> +
>> +static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>> +{
>> +       int ret = -EINVAL;
>> +       s64 v;
>> +
>> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
>> +               return ret;
>> +
>> +       if (_kvm_get_one_reg(vcpu, reg, &v))
>> +               return ret;
>> +
>> +       return put_user(v, (u64 __user *)(long)reg->addr);
>> +}
>> +
>> +static int _kvm_set_one_reg(struct kvm_vcpu *vcpu,
>> +                       const struct kvm_one_reg *reg,
>> +                       s64 v)
>> +{
>> +       int ret = 0;
>> +       unsigned long flags;
>> +       u64 val;
>> +       int reg_idx;
>> +
>> +       val = v;
>> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
>> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
>> +               ret = _kvm_setcsr(vcpu, reg_idx, val);
>> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER) {
>> +               local_irq_save(flags);
>> +               /*
>> +                * gftoffset is relative with board, not vcpu
>> +                * only set for the first time for smp system
>> +                */
>> +               if (vcpu->vcpu_id == 0)
>> +                       vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
>> +               write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
>> +               local_irq_restore(flags);
>> +       } else if (reg->id == KVM_REG_LOONGARCH_VCPU_RESET) {
>> +               kvm_reset_timer(vcpu);
>> +               memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
>> +               memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
>> +       } else
>> +               ret = -EINVAL;
>> +
>> +       return ret;
>> +}
>> +
>> +static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>> +{
>> +       s64 v;
>> +       int ret = -EINVAL;
>> +
>> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
>> +               return ret;
>> +
>> +       if (get_user(v, (u64 __user *)(long)reg->addr))
>> +               return ret;
>> +
>> +       return _kvm_set_one_reg(vcpu, reg, v);
>> +}
>> +
>> +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
>> +                                 struct kvm_sregs *sregs)
>> +{
>> +       return -ENOIOCTLCMD;
>> +}
>> +
>> +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
>> +                                 struct kvm_sregs *sregs)
>> +{
>> +       return -ENOIOCTLCMD;
>> +}
>> +
>> +int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
>> +{
>> +       int i;
>> +
>> +       vcpu_load(vcpu);
>> +
>> +       for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
>> +               regs->gpr[i] = vcpu->arch.gprs[i];
>> +
>> +       regs->pc = vcpu->arch.pc;
>> +
>> +       vcpu_put(vcpu);
>> +       return 0;
>> +}
>> +
>> +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
>> +{
>> +       int i;
>> +
>> +       vcpu_load(vcpu);
>> +
>> +       for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
>> +               vcpu->arch.gprs[i] = regs->gpr[i];
>> +       vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
>> +       vcpu->arch.pc = regs->pc;
>> +
>> +       vcpu_put(vcpu);
>> +       return 0;
>> +}
>> +
>> +long kvm_arch_vcpu_ioctl(struct file *filp,
>> +                        unsigned int ioctl, unsigned long arg)
>> +{
>> +       struct kvm_vcpu *vcpu = filp->private_data;
>> +       void __user *argp = (void __user *)arg;
>> +       long r;
>> +
>> +       vcpu_load(vcpu);
>> +
>> +       switch (ioctl) {
>> +       case KVM_SET_ONE_REG:
>> +       case KVM_GET_ONE_REG: {
>> +               struct kvm_one_reg reg;
>> +
>> +               r = -EFAULT;
>> +               if (copy_from_user(&reg, argp, sizeof(reg)))
>> +                       break;
>> +               if (ioctl == KVM_SET_ONE_REG)
>> +                       r = _kvm_set_reg(vcpu, &reg);
>> +               else
>> +                       r = _kvm_get_reg(vcpu, &reg);
>> +               break;
>> +       }
>> +       default:
>> +               r = -ENOIOCTLCMD;
>> +               break;
>> +       }
>> +
>> +       vcpu_put(vcpu);
>> +       return r;
>> +}
>> +
>>   int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
>>   {
>>          return 0;
>> --
>> 2.27.0
>>



* Re: [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers
  2023-09-11 10:03     ` zhaotianrui
@ 2023-09-11 10:13       ` zhaotianrui
  2023-09-11 11:49       ` Huacai Chen
  1 sibling, 0 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-11 10:13 UTC (permalink / raw)
  To: Huacai Chen
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao


On 2023/9/11 6:03 PM, zhaotianrui wrote:
>
> On 2023/9/11 5:03 PM, Huacai Chen wrote:
>> Hi, Tianrui,
>>
>> On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao 
>> <zhaotianrui@loongson.cn> wrote:
>>> Implement the LoongArch vcpu get-registers and set-registers operations, which
>>> are called when user space uses the ioctl interface to get or set regs.
>>>
>>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>>> ---
>>>   arch/loongarch/kvm/csr_ops.S |  67 ++++++++++++
>>>   arch/loongarch/kvm/vcpu.c    | 206 
>>> +++++++++++++++++++++++++++++++++++
>>>   2 files changed, 273 insertions(+)
>>>   create mode 100644 arch/loongarch/kvm/csr_ops.S
>>>
>>> diff --git a/arch/loongarch/kvm/csr_ops.S 
>>> b/arch/loongarch/kvm/csr_ops.S
>>> new file mode 100644
>>> index 0000000000..53e44b23a5
>>> --- /dev/null
>>> +++ b/arch/loongarch/kvm/csr_ops.S
>>> @@ -0,0 +1,67 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>>> + */
>>> +
>>> +#include <asm/regdef.h>
>>> +#include <linux/linkage.h>
>>> +       .text
>>> +       .cfi_sections   .debug_frame
>>> +/*
>>> + * we have split the hw gcsrs into three parts, so we can
>>> + * calculate the code offset from the gcsrid and jump here to
>>> + * run the gcsrwr instruction.
>>> + */
>>> +SYM_FUNC_START(set_hw_gcsr)
>>> +       addi.d      t0,   a0,   0
>>> +       addi.w      t1,   zero, 96
>>> +       bltu        t1,   t0,   1f
>>> +       la.pcrel    t0,   10f
>>> +       alsl.d      t0,   a0,   t0, 3
>>> +       jr          t0
>>> +1:
>>> +       addi.w      t1,   a0,   -128
>>> +       addi.w      t2,   zero, 15
>>> +       bltu        t2,   t1,   2f
>>> +       la.pcrel    t0,   11f
>>> +       alsl.d      t0,   t1,   t0, 3
>>> +       jr          t0
>>> +2:
>>> +       addi.w      t1,   a0,   -384
>>> +       addi.w      t2,   zero, 3
>>> +       bltu        t2,   t1,   3f
>>> +       la.pcrel    t0,   12f
>>> +       alsl.d      t0,   t1,   t0, 3
>>> +       jr          t0
>>> +3:
>>> +       addi.w      a0,   zero, -1
>>> +       jr          ra
>>> +
>>> +/* range from 0x0(KVM_CSR_CRMD) to 0x60(KVM_CSR_LLBCTL) */
>>> +10:
>>> +       csrnum = 0
>>> +       .rept 0x61
>>> +       gcsrwr a1, csrnum
>>> +       jr ra
>>> +       csrnum = csrnum + 1
>>> +       .endr
>>> +
>>> +/* range from 0x80(KVM_CSR_IMPCTL1) to 0x8f(KVM_CSR_TLBRPRMD) */
>>> +11:
>>> +       csrnum = 0x80
>>> +       .rept 0x10
>>> +       gcsrwr a1, csrnum
>>> +       jr ra
>>> +       csrnum = csrnum + 1
>>> +       .endr
>>> +
>>> +/* range from 0x180(KVM_CSR_DMWIN0) to 0x183(KVM_CSR_DMWIN3) */
>>> +12:
>>> +       csrnum = 0x180
>>> +       .rept 0x4
>>> +       gcsrwr a1, csrnum
>>> +       jr ra
>>> +       csrnum = csrnum + 1
>>> +       .endr
>>> +
>>> +SYM_FUNC_END(set_hw_gcsr)
>>> diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
>>> index ca4e8d074e..f17422a942 100644
>>> --- a/arch/loongarch/kvm/vcpu.c
>>> +++ b/arch/loongarch/kvm/vcpu.c
>>> @@ -13,6 +13,212 @@
>>>   #define CREATE_TRACE_POINTS
>>>   #include "trace.h"
>>>
>>> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v)
>>> +{
>>> +       unsigned long val;
>>> +       struct loongarch_csrs *csr = vcpu->arch.csr;
>>> +
>>> +       if (get_gcsr_flag(id) & INVALID_GCSR)
>>> +               return -EINVAL;
>>> +
>>> +       if (id == LOONGARCH_CSR_ESTAT) {
>>> +               /* interrupt status IP0 -- IP7 from GINTC */
>>> +               val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 
>>> 0xff;
>>> +               *v = kvm_read_sw_gcsr(csr, id) | (val << 2);
>>> +               return 0;
>>> +       }
>>> +
>>> +       /*
>>> +        * get software csr state if csrid is valid, since software
>>> +        * csr state is consistent with hardware
>>> +        */
>> After thinking about this for a long time, I found this is wrong. Of course
>> _kvm_setcsr() saves a software copy of the hardware registers, but the
>> hardware status will change. For example, while a VM is running, it may
>> change the EUEN register if it uses the FPU.
>>
>> So, we should do things like what we do in our internal repo,
>> _kvm_getcsr() should get values from hardware for HW_GCSR registers.
>> And we also need a get_hw_gcsr assembly function.
>>
>>
>> Huacai
> This is an asynchronous vcpu ioctl action; that is to say, this action 
> takes place in the vcpu thread after the vcpu gets out of guest mode, and 
> the guest registers have been saved in software, so we can return the 
> software register value when getting a guest CSR.
>
> Thanks
> Tianrui Zhao
This sentence should read: "This is a **synchronous** vcpu ioctl action, ... ...".
Sorry, this was my mistake.

Thanks
Tianrui Zhao
>>
>>> +       *v = kvm_read_sw_gcsr(csr, id);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 val)
>>> +{
>>> +       struct loongarch_csrs *csr = vcpu->arch.csr;
>>> +       int ret = 0, gintc;
>>> +
>>> +       if (get_gcsr_flag(id) & INVALID_GCSR)
>>> +               return -EINVAL;
>>> +
>>> +       if (id == LOONGARCH_CSR_ESTAT) {
>>> +               /* estat IP0~IP7 inject through guestexcept */
>>> +               gintc = (val >> 2) & 0xff;
>>> +               write_csr_gintc(gintc);
>>> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_GINTC, gintc);
>>> +
>>> +               gintc = val & ~(0xffUL << 2);
>>> +               write_gcsr_estat(gintc);
>>> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_ESTAT, gintc);
>>> +
>>> +               return ret;
>>> +       }
>>> +
>>> +       if (get_gcsr_flag(id) & HW_GCSR) {
>>> +               set_hw_gcsr(id, val);
>>> +               /* write sw gcsr to keep consistent with hardware */
>>> +               kvm_write_sw_gcsr(csr, id, val);
>>> +       } else
>>> +               kvm_write_sw_gcsr(csr, id, val);
>>> +
>>> +       return ret;
>>> +}
>>> +
>>> +static int _kvm_get_one_reg(struct kvm_vcpu *vcpu,
>>> +               const struct kvm_one_reg *reg, s64 *v)
>>> +{
>>> +       int reg_idx, ret = 0;
>>> +
>>> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == 
>>> KVM_REG_LOONGARCH_CSR) {
>>> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
>>> +               ret = _kvm_getcsr(vcpu, reg_idx, v);
>>> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER)
>>> +               *v = drdtime() + vcpu->kvm->arch.time_offset;
>>> +       else
>>> +               ret = -EINVAL;
>>> +
>>> +       return ret;
>>> +}
>>> +
>>> +static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct 
>>> kvm_one_reg *reg)
>>> +{
>>> +       int ret = -EINVAL;
>>> +       s64 v;
>>> +
>>> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
>>> +               return ret;
>>> +
>>> +       if (_kvm_get_one_reg(vcpu, reg, &v))
>>> +               return ret;
>>> +
>>> +       return put_user(v, (u64 __user *)(long)reg->addr);
>>> +}
>>> +
>>> +static int _kvm_set_one_reg(struct kvm_vcpu *vcpu,
>>> +                       const struct kvm_one_reg *reg,
>>> +                       s64 v)
>>> +{
>>> +       int ret = 0;
>>> +       unsigned long flags;
>>> +       u64 val;
>>> +       int reg_idx;
>>> +
>>> +       val = v;
>>> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == 
>>> KVM_REG_LOONGARCH_CSR) {
>>> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
>>> +               ret = _kvm_setcsr(vcpu, reg_idx, val);
>>> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER) {
>>> +               local_irq_save(flags);
>>> +               /*
>>> +                * gftoffset is relative with board, not vcpu
>>> +                * only set for the first time for smp system
>>> +                */
>>> +               if (vcpu->vcpu_id == 0)
>>> +                       vcpu->kvm->arch.time_offset = (signed 
>>> long)(v - drdtime());
>>> + write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
>>> +               local_irq_restore(flags);
>>> +       } else if (reg->id == KVM_REG_LOONGARCH_VCPU_RESET) {
>>> +               kvm_reset_timer(vcpu);
>>> +               memset(&vcpu->arch.irq_pending, 0, 
>>> sizeof(vcpu->arch.irq_pending));
>>> +               memset(&vcpu->arch.irq_clear, 0, 
>>> sizeof(vcpu->arch.irq_clear));
>>> +       } else
>>> +               ret = -EINVAL;
>>> +
>>> +       return ret;
>>> +}
>>> +
>>> +static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct 
>>> kvm_one_reg *reg)
>>> +{
>>> +       s64 v;
>>> +       int ret = -EINVAL;
>>> +
>>> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
>>> +               return ret;
>>> +
>>> +       if (get_user(v, (u64 __user *)(long)reg->addr))
>>> +               return ret;
>>> +
>>> +       return _kvm_set_one_reg(vcpu, reg, v);
>>> +}
>>> +
>>> +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
>>> +                                 struct kvm_sregs *sregs)
>>> +{
>>> +       return -ENOIOCTLCMD;
>>> +}
>>> +
>>> +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
>>> +                                 struct kvm_sregs *sregs)
>>> +{
>>> +       return -ENOIOCTLCMD;
>>> +}
>>> +
>>> +int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct 
>>> kvm_regs *regs)
>>> +{
>>> +       int i;
>>> +
>>> +       vcpu_load(vcpu);
>>> +
>>> +       for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
>>> +               regs->gpr[i] = vcpu->arch.gprs[i];
>>> +
>>> +       regs->pc = vcpu->arch.pc;
>>> +
>>> +       vcpu_put(vcpu);
>>> +       return 0;
>>> +}
>>> +
>>> +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct 
>>> kvm_regs *regs)
>>> +{
>>> +       int i;
>>> +
>>> +       vcpu_load(vcpu);
>>> +
>>> +       for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
>>> +               vcpu->arch.gprs[i] = regs->gpr[i];
>>> +       vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be 
>>> set. */
>>> +       vcpu->arch.pc = regs->pc;
>>> +
>>> +       vcpu_put(vcpu);
>>> +       return 0;
>>> +}
>>> +
>>> +long kvm_arch_vcpu_ioctl(struct file *filp,
>>> +                        unsigned int ioctl, unsigned long arg)
>>> +{
>>> +       struct kvm_vcpu *vcpu = filp->private_data;
>>> +       void __user *argp = (void __user *)arg;
>>> +       long r;
>>> +
>>> +       vcpu_load(vcpu);
>>> +
>>> +       switch (ioctl) {
>>> +       case KVM_SET_ONE_REG:
>>> +       case KVM_GET_ONE_REG: {
>>> +               struct kvm_one_reg reg;
>>> +
>>> +               r = -EFAULT;
>>> +               if (copy_from_user(&reg, argp, sizeof(reg)))
>>> +                       break;
>>> +               if (ioctl == KVM_SET_ONE_REG)
>>> +                       r = _kvm_set_reg(vcpu, &reg);
>>> +               else
>>> +                       r = _kvm_get_reg(vcpu, &reg);
>>> +               break;
>>> +       }
>>> +       default:
>>> +               r = -ENOIOCTLCMD;
>>> +               break;
>>> +       }
>>> +
>>> +       vcpu_put(vcpu);
>>> +       return r;
>>> +}
>>> +
>>>   int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
>>>   {
>>>          return 0;
>>> -- 
>>> 2.27.0
>>>



* Re: [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function
  2023-09-11 10:00   ` Huacai Chen
@ 2023-09-11 10:23     ` bibo mao
  2023-09-12  3:51       ` Huacai Chen
  0 siblings, 1 reply; 56+ messages in thread
From: bibo mao @ 2023-09-11 10:23 UTC (permalink / raw)
  To: Huacai Chen, Tianrui Zhao
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, Xi Ruoyao



On 2023/9/11 18:00, Huacai Chen wrote:
> Hi, Tianrui,
> 
> On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>>
>> Implement KVM vmid checking and updating; the vmid should be checked before
>> a vcpu enters guest mode.
>>
>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>  arch/loongarch/kvm/vmid.c | 66 +++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 66 insertions(+)
>>  create mode 100644 arch/loongarch/kvm/vmid.c
>>
>> diff --git a/arch/loongarch/kvm/vmid.c b/arch/loongarch/kvm/vmid.c
>> new file mode 100644
>> index 0000000000..fc25ddc3b7
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/vmid.c
>> @@ -0,0 +1,66 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#include <linux/kvm_host.h>
>> +#include "trace.h"
>> +
>> +static void _kvm_update_vpid(struct kvm_vcpu *vcpu, int cpu)
>> +{
>> +       struct kvm_context *context;
>> +       unsigned long vpid;
>> +
>> +       context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
>> +       vpid = context->vpid_cache + 1;
>> +       if (!(vpid & vpid_mask)) {
>> +               /* finish round of 64 bit loop */
>> +               if (unlikely(!vpid))
>> +                       vpid = vpid_mask + 1;
>> +
>> +               /* vpid 0 reserved for root */
>> +               ++vpid;
>> +
>> +               /* start new vpid cycle */
>> +               kvm_flush_tlb_all();
>> +       }
>> +
>> +       context->vpid_cache = vpid;
>> +       vcpu->arch.vpid = vpid;
>> +}
>> +
>> +void _kvm_check_vmid(struct kvm_vcpu *vcpu)
>> +{
>> +       struct kvm_context *context;
>> +       bool migrated;
>> +       unsigned long ver, old, vpid;
>> +       int cpu;
>> +
>> +       cpu = smp_processor_id();
>> +       /*
>> +        * Are we entering guest context on a different CPU to last time?
>> +        * If so, the vCPU's guest TLB state on this CPU may be stale.
>> +        */
>> +       context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
>> +       migrated = (vcpu->cpu != cpu);
>> +
>> +       /*
>> +        * Check if our vpid is of an older version
>> +        *
>> +        * We also discard the stored vpid if we've executed on
>> +        * another CPU, as the guest mappings may have changed without
>> +        * hypervisor knowledge.
>> +        */
>> +       ver = vcpu->arch.vpid & ~vpid_mask;
>> +       old = context->vpid_cache  & ~vpid_mask;
>> +       if (migrated || (ver != old)) {
>> +               _kvm_update_vpid(vcpu, cpu);
>> +               trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
>> +               vcpu->cpu = cpu;
>> +       }
>> +
>> +       /* Restore GSTAT(0x50).vpid */
>> +       vpid = (vcpu->arch.vpid & vpid_mask)
>> +               << CSR_GSTAT_GID_SHIFT;
>> +       change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
>> +}
> I believe that vpid and vmid are both GID in the gstat register, so
> please unify their names. And I think vpid is better than vmid.

On the 3A5000 processor vpid is the same as vmid; on next-generation processors
like the 3A6000 they are separate. vpid is per-vcpu and covers the translation
from gva to gpa; vmid belongs to the whole VM and covers the translation from
gpa to hpa. All vcpus share the same vmid, so TLB entries indexed by vpid stay
in effect when shadow TLB entries indexed by vmid are flushed.

Only the VM patch for the 3A6000 has not been submitted yet; the generation
methods for vpid and vmid will differ considerably. The split is preparation
for that future processor update :)

Regards
Bibo Mao

> 
> Moreover, no need to create a vmid.c file, just putting them in main.c is OK.
> 
> Huacai
> 
>> --
>> 2.27.0
>>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers
  2023-09-11 10:03     ` zhaotianrui
  2023-09-11 10:13       ` zhaotianrui
@ 2023-09-11 11:49       ` Huacai Chen
  2023-09-12  2:41         ` bibo mao
  1 sibling, 1 reply; 56+ messages in thread
From: Huacai Chen @ 2023-09-11 11:49 UTC (permalink / raw)
  To: zhaotianrui
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao

Hi, Tianrui,

On Mon, Sep 11, 2023 at 6:03 PM zhaotianrui <zhaotianrui@loongson.cn> wrote:
>
>
> > On 2023/9/11 at 5:03 PM, Huacai Chen wrote:
> > Hi, Tianrui,
> >
> > On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
> >> Implement LoongArch vcpu get registers and set registers operations, it
> >> is called when user space use the ioctl interface to get or set regs.
> >>
> >> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> >> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> >> ---
> >>   arch/loongarch/kvm/csr_ops.S |  67 ++++++++++++
> >>   arch/loongarch/kvm/vcpu.c    | 206 +++++++++++++++++++++++++++++++++++
> >>   2 files changed, 273 insertions(+)
> >>   create mode 100644 arch/loongarch/kvm/csr_ops.S
> >>
> >> diff --git a/arch/loongarch/kvm/csr_ops.S b/arch/loongarch/kvm/csr_ops.S
> >> new file mode 100644
> >> index 0000000000..53e44b23a5
> >> --- /dev/null
> >> +++ b/arch/loongarch/kvm/csr_ops.S
> >> @@ -0,0 +1,67 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +/*
> >> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> >> + */
> >> +
> >> +#include <asm/regdef.h>
> >> +#include <linux/linkage.h>
> >> +       .text
> >> +       .cfi_sections   .debug_frame
> >> +/*
> >> + * we have split hw gcsr into three parts, so we can
> >> + * calculate the code offset by gcsrid and jump here to
> >> + * run the gcsrwr instruction.
> >> + */
> >> +SYM_FUNC_START(set_hw_gcsr)
> >> +       addi.d      t0,   a0,   0
> >> +       addi.w      t1,   zero, 96
> >> +       bltu        t1,   t0,   1f
> >> +       la.pcrel    t0,   10f
> >> +       alsl.d      t0,   a0,   t0, 3
> >> +       jr          t0
> >> +1:
> >> +       addi.w      t1,   a0,   -128
> >> +       addi.w      t2,   zero, 15
> >> +       bltu        t2,   t1,   2f
> >> +       la.pcrel    t0,   11f
> >> +       alsl.d      t0,   t1,   t0, 3
> >> +       jr          t0
> >> +2:
> >> +       addi.w      t1,   a0,   -384
> >> +       addi.w      t2,   zero, 3
> >> +       bltu        t2,   t1,   3f
> >> +       la.pcrel    t0,   12f
> >> +       alsl.d      t0,   t1,   t0, 3
> >> +       jr          t0
> >> +3:
> >> +       addi.w      a0,   zero, -1
> >> +       jr          ra
> >> +
> >> +/* range from 0x0(KVM_CSR_CRMD) to 0x60(KVM_CSR_LLBCTL) */
> >> +10:
> >> +       csrnum = 0
> >> +       .rept 0x61
> >> +       gcsrwr a1, csrnum
> >> +       jr ra
> >> +       csrnum = csrnum + 1
> >> +       .endr
> >> +
> >> +/* range from 0x80(KVM_CSR_IMPCTL1) to 0x8f(KVM_CSR_TLBRPRMD) */
> >> +11:
> >> +       csrnum = 0x80
> >> +       .rept 0x10
> >> +       gcsrwr a1, csrnum
> >> +       jr ra
> >> +       csrnum = csrnum + 1
> >> +       .endr
> >> +
> >> +/* range from 0x180(KVM_CSR_DMWIN0) to 0x183(KVM_CSR_DMWIN3) */
> >> +12:
> >> +       csrnum = 0x180
> >> +       .rept 0x4
> >> +       gcsrwr a1, csrnum
> >> +       jr ra
> >> +       csrnum = csrnum + 1
> >> +       .endr
> >> +
> >> +SYM_FUNC_END(set_hw_gcsr)
> >> diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
> >> index ca4e8d074e..f17422a942 100644
> >> --- a/arch/loongarch/kvm/vcpu.c
> >> +++ b/arch/loongarch/kvm/vcpu.c
> >> @@ -13,6 +13,212 @@
> >>   #define CREATE_TRACE_POINTS
> >>   #include "trace.h"
> >>
> >> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v)
> >> +{
> >> +       unsigned long val;
> >> +       struct loongarch_csrs *csr = vcpu->arch.csr;
> >> +
> >> +       if (get_gcsr_flag(id) & INVALID_GCSR)
> >> +               return -EINVAL;
> >> +
> >> +       if (id == LOONGARCH_CSR_ESTAT) {
> >> +               /* interrupt status IP0 -- IP7 from GINTC */
> >> +               val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 0xff;
> >> +               *v = kvm_read_sw_gcsr(csr, id) | (val << 2);
> >> +               return 0;
> >> +       }
> >> +
> >> +       /*
> >> +        * get software csr state if csrid is valid, since software
> >> +        * csr state is consistent with hardware
> >> +        */
> > After a long time thinking, I found this is wrong. Of course
> > _kvm_setcsr() saves a software copy of the hardware registers, but the
> > hardware status will change. For example, during a VM running, it may
> > change the EUEN register if it uses fpu.
> >
> > So, we should do things like what we do in our internal repo,
> > _kvm_getcsr() should get values from hardware for HW_GCSR registers.
> > And we also need a get_hw_gcsr assembly function.
> >
> >
> > Huacai
> This is an asynchronous vcpu ioctl action; that is to say, it takes place in
> the vcpu thread after the vcpu has left guest mode, and the guest registers
> have already been saved to software, so we can return the software register
> value when getting a guest csr.
Maybe you are right in this case, but it is still worthwhile to read from
hardware directly (more straightforward, more understandable, more
robust). And from my point of view, this is not a performance-critical
path, so the 'optimization' is unnecessary.

Huacai

>
> Thanks
> Tianrui Zhao
> >
> >> +       *v = kvm_read_sw_gcsr(csr, id);
> >> +
> >> +       return 0;
> >> +}
> >> +
> >> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 val)
> >> +{
> >> +       struct loongarch_csrs *csr = vcpu->arch.csr;
> >> +       int ret = 0, gintc;
> >> +
> >> +       if (get_gcsr_flag(id) & INVALID_GCSR)
> >> +               return -EINVAL;
> >> +
> >> +       if (id == LOONGARCH_CSR_ESTAT) {
> >> +               /* estat IP0~IP7 inject through guestexcept */
> >> +               gintc = (val >> 2) & 0xff;
> >> +               write_csr_gintc(gintc);
> >> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_GINTC, gintc);
> >> +
> >> +               gintc = val & ~(0xffUL << 2);
> >> +               write_gcsr_estat(gintc);
> >> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_ESTAT, gintc);
> >> +
> >> +               return ret;
> >> +       }
> >> +
> >> +       if (get_gcsr_flag(id) & HW_GCSR) {
> >> +               set_hw_gcsr(id, val);
> >> +               /* write sw gcsr to keep consistent with hardware */
> >> +               kvm_write_sw_gcsr(csr, id, val);
> >> +       } else
> >> +               kvm_write_sw_gcsr(csr, id, val);
> >> +
> >> +       return ret;
> >> +}
> >> +
> >> +static int _kvm_get_one_reg(struct kvm_vcpu *vcpu,
> >> +               const struct kvm_one_reg *reg, s64 *v)
> >> +{
> >> +       int reg_idx, ret = 0;
> >> +
> >> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
> >> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
> >> +               ret = _kvm_getcsr(vcpu, reg_idx, v);
> >> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER)
> >> +               *v = drdtime() + vcpu->kvm->arch.time_offset;
> >> +       else
> >> +               ret = -EINVAL;
> >> +
> >> +       return ret;
> >> +}
> >> +
> >> +static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> >> +{
> >> +       int ret = -EINVAL;
> >> +       s64 v;
> >> +
> >> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
> >> +               return ret;
> >> +
> >> +       if (_kvm_get_one_reg(vcpu, reg, &v))
> >> +               return ret;
> >> +
> >> +       return put_user(v, (u64 __user *)(long)reg->addr);
> >> +}
> >> +
> >> +static int _kvm_set_one_reg(struct kvm_vcpu *vcpu,
> >> +                       const struct kvm_one_reg *reg,
> >> +                       s64 v)
> >> +{
> >> +       int ret = 0;
> >> +       unsigned long flags;
> >> +       u64 val;
> >> +       int reg_idx;
> >> +
> >> +       val = v;
> >> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
> >> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
> >> +               ret = _kvm_setcsr(vcpu, reg_idx, val);
> >> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER) {
> >> +               local_irq_save(flags);
> >> +               /*
> >> +                * gftoffset is relative with board, not vcpu
> >> +                * only set for the first time for smp system
> >> +                */
> >> +               if (vcpu->vcpu_id == 0)
> >> +                       vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
> >> +               write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
> >> +               local_irq_restore(flags);
> >> +       } else if (reg->id == KVM_REG_LOONGARCH_VCPU_RESET) {
> >> +               kvm_reset_timer(vcpu);
> >> +               memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
> >> +               memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
> >> +       } else
> >> +               ret = -EINVAL;
> >> +
> >> +       return ret;
> >> +}
> >> +
> >> +static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> >> +{
> >> +       s64 v;
> >> +       int ret = -EINVAL;
> >> +
> >> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
> >> +               return ret;
> >> +
> >> +       if (get_user(v, (u64 __user *)(long)reg->addr))
> >> +               return ret;
> >> +
> >> +       return _kvm_set_one_reg(vcpu, reg, v);
> >> +}
> >> +
> >> +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
> >> +                                 struct kvm_sregs *sregs)
> >> +{
> >> +       return -ENOIOCTLCMD;
> >> +}
> >> +
> >> +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
> >> +                                 struct kvm_sregs *sregs)
> >> +{
> >> +       return -ENOIOCTLCMD;
> >> +}
> >> +
> >> +int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> >> +{
> >> +       int i;
> >> +
> >> +       vcpu_load(vcpu);
> >> +
> >> +       for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
> >> +               regs->gpr[i] = vcpu->arch.gprs[i];
> >> +
> >> +       regs->pc = vcpu->arch.pc;
> >> +
> >> +       vcpu_put(vcpu);
> >> +       return 0;
> >> +}
> >> +
> >> +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> >> +{
> >> +       int i;
> >> +
> >> +       vcpu_load(vcpu);
> >> +
> >> +       for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
> >> +               vcpu->arch.gprs[i] = regs->gpr[i];
> >> +       vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
> >> +       vcpu->arch.pc = regs->pc;
> >> +
> >> +       vcpu_put(vcpu);
> >> +       return 0;
> >> +}
> >> +
> >> +long kvm_arch_vcpu_ioctl(struct file *filp,
> >> +                        unsigned int ioctl, unsigned long arg)
> >> +{
> >> +       struct kvm_vcpu *vcpu = filp->private_data;
> >> +       void __user *argp = (void __user *)arg;
> >> +       long r;
> >> +
> >> +       vcpu_load(vcpu);
> >> +
> >> +       switch (ioctl) {
> >> +       case KVM_SET_ONE_REG:
> >> +       case KVM_GET_ONE_REG: {
> >> +               struct kvm_one_reg reg;
> >> +
> >> +               r = -EFAULT;
> >> +               if (copy_from_user(&reg, argp, sizeof(reg)))
> >> +                       break;
> >> +               if (ioctl == KVM_SET_ONE_REG)
> >> +                       r = _kvm_set_reg(vcpu, &reg);
> >> +               else
> >> +                       r = _kvm_get_reg(vcpu, &reg);
> >> +               break;
> >> +       }
> >> +       default:
> >> +               r = -ENOIOCTLCMD;
> >> +               break;
> >> +       }
> >> +
> >> +       vcpu_put(vcpu);
> >> +       return r;
> >> +}
> >> +
> >>   int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
> >>   {
> >>          return 0;
> >> --
> >> 2.27.0
> >>
>


* Re: [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile
  2023-09-11  7:30   ` WANG Xuerui
@ 2023-09-12  1:57     ` zhaotianrui
  0 siblings, 0 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-12  1:57 UTC (permalink / raw)
  To: WANG Xuerui, linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, Greg Kroah-Hartman, loongarch,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo,
	Xi Ruoyao, kernel test robot


On 2023/9/11 at 3:30 PM, WANG Xuerui wrote:
> On 8/31/23 16:30, Tianrui Zhao wrote:
>> [snip]
>> +
>> +config AS_HAS_LVZ_EXTENSION
>> +    def_bool $(as-instr,hvcl 0)
>
> Upon closer look, it seems this piece could use some improvement as
> well: while the presence of "hvcl" indeed always implies full support
> for the LVZ instructions, "hvcl" isn't used anywhere in the series, so
> the check isn't self-evidently meaningful. It may be better to test for
> an instruction that is actually going to be used, like "gcsrrd". What
> do you think?
Thanks for your advice. I think binutils supporting either the "hvcl" or the
"gcsrrd" instruction means that the toolchain supports the LVZ extension, so
the two probes have the same meaning there.
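For reference, a probe using an instruction the series actually emits might
look like the fragment below. This is an untested sketch: it assumes the same
`$(as-instr,...)` operand escaping (`\$` for registers, `$(comma)` for the
literal comma) that other LoongArch Kconfig probes such as
AS_HAS_EXPLICIT_RELOCS use.

```kconfig
config AS_HAS_LVZ_EXTENSION
	def_bool $(as-instr,gcsrrd \$t0$(comma)0)
```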

Thanks
Tianrui Zhao



* Re: [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers
  2023-09-11 11:49       ` Huacai Chen
@ 2023-09-12  2:41         ` bibo mao
  0 siblings, 0 replies; 56+ messages in thread
From: bibo mao @ 2023-09-12  2:41 UTC (permalink / raw)
  To: Huacai Chen, zhaotianrui
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, Xi Ruoyao



On 2023/9/11 at 19:49, Huacai Chen wrote:
> Hi, Tianrui,
> 
> On Mon, Sep 11, 2023 at 6:03 PM zhaotianrui <zhaotianrui@loongson.cn> wrote:
>>
>>
>> On 2023/9/11 at 5:03 PM, Huacai Chen wrote:
>>> Hi, Tianrui,
>>>
>>> On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>>>> Implement LoongArch vcpu get registers and set registers operations, it
>>>> is called when user space use the ioctl interface to get or set regs.
>>>>
>>>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>>>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>>>> ---
>>>>   arch/loongarch/kvm/csr_ops.S |  67 ++++++++++++
>>>>   arch/loongarch/kvm/vcpu.c    | 206 +++++++++++++++++++++++++++++++++++
>>>>   2 files changed, 273 insertions(+)
>>>>   create mode 100644 arch/loongarch/kvm/csr_ops.S
>>>>
>>>> diff --git a/arch/loongarch/kvm/csr_ops.S b/arch/loongarch/kvm/csr_ops.S
>>>> new file mode 100644
>>>> index 0000000000..53e44b23a5
>>>> --- /dev/null
>>>> +++ b/arch/loongarch/kvm/csr_ops.S
>>>> @@ -0,0 +1,67 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +/*
>>>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>>>> + */
>>>> +
>>>> +#include <asm/regdef.h>
>>>> +#include <linux/linkage.h>
>>>> +       .text
>>>> +       .cfi_sections   .debug_frame
>>>> +/*
>>>> + * we have split hw gcsr into three parts, so we can
>>>> + * calculate the code offset by gcsrid and jump here to
>>>> + * run the gcsrwr instruction.
>>>> + */
>>>> +SYM_FUNC_START(set_hw_gcsr)
>>>> +       addi.d      t0,   a0,   0
>>>> +       addi.w      t1,   zero, 96
>>>> +       bltu        t1,   t0,   1f
>>>> +       la.pcrel    t0,   10f
>>>> +       alsl.d      t0,   a0,   t0, 3
>>>> +       jr          t0
>>>> +1:
>>>> +       addi.w      t1,   a0,   -128
>>>> +       addi.w      t2,   zero, 15
>>>> +       bltu        t2,   t1,   2f
>>>> +       la.pcrel    t0,   11f
>>>> +       alsl.d      t0,   t1,   t0, 3
>>>> +       jr          t0
>>>> +2:
>>>> +       addi.w      t1,   a0,   -384
>>>> +       addi.w      t2,   zero, 3
>>>> +       bltu        t2,   t1,   3f
>>>> +       la.pcrel    t0,   12f
>>>> +       alsl.d      t0,   t1,   t0, 3
>>>> +       jr          t0
>>>> +3:
>>>> +       addi.w      a0,   zero, -1
>>>> +       jr          ra
>>>> +
>>>> +/* range from 0x0(KVM_CSR_CRMD) to 0x60(KVM_CSR_LLBCTL) */
>>>> +10:
>>>> +       csrnum = 0
>>>> +       .rept 0x61
>>>> +       gcsrwr a1, csrnum
>>>> +       jr ra
>>>> +       csrnum = csrnum + 1
>>>> +       .endr
>>>> +
>>>> +/* range from 0x80(KVM_CSR_IMPCTL1) to 0x8f(KVM_CSR_TLBRPRMD) */
>>>> +11:
>>>> +       csrnum = 0x80
>>>> +       .rept 0x10
>>>> +       gcsrwr a1, csrnum
>>>> +       jr ra
>>>> +       csrnum = csrnum + 1
>>>> +       .endr
>>>> +
>>>> +/* range from 0x180(KVM_CSR_DMWIN0) to 0x183(KVM_CSR_DMWIN3) */
>>>> +12:
>>>> +       csrnum = 0x180
>>>> +       .rept 0x4
>>>> +       gcsrwr a1, csrnum
>>>> +       jr ra
>>>> +       csrnum = csrnum + 1
>>>> +       .endr
>>>> +
>>>> +SYM_FUNC_END(set_hw_gcsr)
>>>> diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
>>>> index ca4e8d074e..f17422a942 100644
>>>> --- a/arch/loongarch/kvm/vcpu.c
>>>> +++ b/arch/loongarch/kvm/vcpu.c
>>>> @@ -13,6 +13,212 @@
>>>>   #define CREATE_TRACE_POINTS
>>>>   #include "trace.h"
>>>>
>>>> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v)
>>>> +{
>>>> +       unsigned long val;
>>>> +       struct loongarch_csrs *csr = vcpu->arch.csr;
>>>> +
>>>> +       if (get_gcsr_flag(id) & INVALID_GCSR)
>>>> +               return -EINVAL;
>>>> +
>>>> +       if (id == LOONGARCH_CSR_ESTAT) {
>>>> +               /* interrupt status IP0 -- IP7 from GINTC */
>>>> +               val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 0xff;
>>>> +               *v = kvm_read_sw_gcsr(csr, id) | (val << 2);
>>>> +               return 0;
>>>> +       }
>>>> +
>>>> +       /*
>>>> +        * get software csr state if csrid is valid, since software
>>>> +        * csr state is consistent with hardware
>>>> +        */
>>> After a long time thinking, I found this is wrong. Of course
>>> _kvm_setcsr() saves a software copy of the hardware registers, but the
>>> hardware status will change. For example, during a VM running, it may
>>> change the EUEN register if it uses fpu.
>>>
>>> So, we should do things like what we do in our internal repo,
>>> _kvm_getcsr() should get values from hardware for HW_GCSR registers.
>>> And we also need a get_hw_gcsr assembly function.
>>>
>>>
>>> Huacai
>> This is an asynchronous vcpu ioctl action; that is to say, it takes place in
>> the vcpu thread after the vcpu has left guest mode, and the guest registers
>> have already been saved to software, so we can return the software register
>> value when getting a guest csr.
> Maybe you are right in this case, but it is still worthwhile to read from
> hardware directly (more straightforward, more understandable, more
> robust). And from my point of view, this is not a performance-critical
> path, so the 'optimization' is unnecessary.
Currently vcpu_load/vcpu_put is called from the following functions:
  1. the kvm_arch_vcpu_ioctl_get_regs/kvm_arch_vcpu_ioctl_set_regs pair
  2. kvm_arch_vcpu_ioctl
  3. the kvm_sched_in/kvm_sched_out hook functions, where kvm_arch_vcpu_load is called
  4. kvm_arch_vcpu_ioctl_run
Yes, we can remove the vcpu_load/vcpu_put calls in 1) and 2).

Here is pseudo code for when a VM starts to run:
   kvm_arch_vcpu_ioctl(KVM_SET_ONE_REG, ..)
   kvm_arch_vcpu_ioctl(KVM_SET_ONE_REG, ..)
   kvm_vcpu_ioctl(KVM_RUN)
kvm_arch_vcpu_ioctl(KVM_SET_ONE_REG) may run on CPU0 while kvm_vcpu_ioctl(KVM_RUN)
runs on CPU1, so kvm_arch_vcpu_ioctl_run needs to restore the hw csr state from
the sw copy, and _kvm_setcsr needs to write the csr value to the sw copy.

The KVM_LARCH_CSR flag is an optimization for the kvm_sched_in/kvm_sched_out
hook functions. There is another scenario where CPU/memory is shared by
multiple VMs: memory gets swapped out, causing multiple page faults in the
second-level mmu, and a vcpu may be preempted since the physical CPU is shared
by multiple VMs. In that scenario the kvm_sched_in/kvm_sched_out hooks will be
called frequently.

Regards
Bibo Mao
> 
> Huacai
> 
>>
>> Thanks
>> Tianrui Zhao
>>>
>>>> +       *v = kvm_read_sw_gcsr(csr, id);
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 val)
>>>> +{
>>>> +       struct loongarch_csrs *csr = vcpu->arch.csr;
>>>> +       int ret = 0, gintc;
>>>> +
>>>> +       if (get_gcsr_flag(id) & INVALID_GCSR)
>>>> +               return -EINVAL;
>>>> +
>>>> +       if (id == LOONGARCH_CSR_ESTAT) {
>>>> +               /* estat IP0~IP7 inject through guestexcept */
>>>> +               gintc = (val >> 2) & 0xff;
>>>> +               write_csr_gintc(gintc);
>>>> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_GINTC, gintc);
>>>> +
>>>> +               gintc = val & ~(0xffUL << 2);
>>>> +               write_gcsr_estat(gintc);
>>>> +               kvm_set_sw_gcsr(csr, LOONGARCH_CSR_ESTAT, gintc);
>>>> +
>>>> +               return ret;
>>>> +       }
>>>> +
>>>> +       if (get_gcsr_flag(id) & HW_GCSR) {
>>>> +               set_hw_gcsr(id, val);
>>>> +               /* write sw gcsr to keep consistent with hardware */
>>>> +               kvm_write_sw_gcsr(csr, id, val);
>>>> +       } else
>>>> +               kvm_write_sw_gcsr(csr, id, val);
>>>> +
>>>> +       return ret;
>>>> +}
>>>> +
>>>> +static int _kvm_get_one_reg(struct kvm_vcpu *vcpu,
>>>> +               const struct kvm_one_reg *reg, s64 *v)
>>>> +{
>>>> +       int reg_idx, ret = 0;
>>>> +
>>>> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
>>>> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
>>>> +               ret = _kvm_getcsr(vcpu, reg_idx, v);
>>>> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER)
>>>> +               *v = drdtime() + vcpu->kvm->arch.time_offset;
>>>> +       else
>>>> +               ret = -EINVAL;
>>>> +
>>>> +       return ret;
>>>> +}
>>>> +
>>>> +static int _kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>>>> +{
>>>> +       int ret = -EINVAL;
>>>> +       s64 v;
>>>> +
>>>> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
>>>> +               return ret;
>>>> +
>>>> +       if (_kvm_get_one_reg(vcpu, reg, &v))
>>>> +               return ret;
>>>> +
>>>> +       return put_user(v, (u64 __user *)(long)reg->addr);
>>>> +}
>>>> +
>>>> +static int _kvm_set_one_reg(struct kvm_vcpu *vcpu,
>>>> +                       const struct kvm_one_reg *reg,
>>>> +                       s64 v)
>>>> +{
>>>> +       int ret = 0;
>>>> +       unsigned long flags;
>>>> +       u64 val;
>>>> +       int reg_idx;
>>>> +
>>>> +       val = v;
>>>> +       if ((reg->id & KVM_REG_LOONGARCH_MASK) == KVM_REG_LOONGARCH_CSR) {
>>>> +               reg_idx = KVM_GET_IOC_CSRIDX(reg->id);
>>>> +               ret = _kvm_setcsr(vcpu, reg_idx, val);
>>>> +       } else if (reg->id == KVM_REG_LOONGARCH_COUNTER) {
>>>> +               local_irq_save(flags);
>>>> +               /*
>>>> +                * gftoffset is relative with board, not vcpu
>>>> +                * only set for the first time for smp system
>>>> +                */
>>>> +               if (vcpu->vcpu_id == 0)
>>>> +                       vcpu->kvm->arch.time_offset = (signed long)(v - drdtime());
>>>> +               write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset);
>>>> +               local_irq_restore(flags);
>>>> +       } else if (reg->id == KVM_REG_LOONGARCH_VCPU_RESET) {
>>>> +               kvm_reset_timer(vcpu);
>>>> +               memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending));
>>>> +               memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear));
>>>> +       } else
>>>> +               ret = -EINVAL;
>>>> +
>>>> +       return ret;
>>>> +}
>>>> +
>>>> +static int _kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>>>> +{
>>>> +       s64 v;
>>>> +       int ret = -EINVAL;
>>>> +
>>>> +       if ((reg->id & KVM_REG_SIZE_MASK) != KVM_REG_SIZE_U64)
>>>> +               return ret;
>>>> +
>>>> +       if (get_user(v, (u64 __user *)(long)reg->addr))
>>>> +               return ret;
>>>> +
>>>> +       return _kvm_set_one_reg(vcpu, reg, v);
>>>> +}
>>>> +
>>>> +int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
>>>> +                                 struct kvm_sregs *sregs)
>>>> +{
>>>> +       return -ENOIOCTLCMD;
>>>> +}
>>>> +
>>>> +int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
>>>> +                                 struct kvm_sregs *sregs)
>>>> +{
>>>> +       return -ENOIOCTLCMD;
>>>> +}
>>>> +
>>>> +int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
>>>> +{
>>>> +       int i;
>>>> +
>>>> +       vcpu_load(vcpu);
>>>> +
>>>> +       for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
>>>> +               regs->gpr[i] = vcpu->arch.gprs[i];
>>>> +
>>>> +       regs->pc = vcpu->arch.pc;
>>>> +
>>>> +       vcpu_put(vcpu);
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
>>>> +{
>>>> +       int i;
>>>> +
>>>> +       vcpu_load(vcpu);
>>>> +
>>>> +       for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++)
>>>> +               vcpu->arch.gprs[i] = regs->gpr[i];
>>>> +       vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */
>>>> +       vcpu->arch.pc = regs->pc;
>>>> +
>>>> +       vcpu_put(vcpu);
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +long kvm_arch_vcpu_ioctl(struct file *filp,
>>>> +                        unsigned int ioctl, unsigned long arg)
>>>> +{
>>>> +       struct kvm_vcpu *vcpu = filp->private_data;
>>>> +       void __user *argp = (void __user *)arg;
>>>> +       long r;
>>>> +
>>>> +       vcpu_load(vcpu);
>>>> +
>>>> +       switch (ioctl) {
>>>> +       case KVM_SET_ONE_REG:
>>>> +       case KVM_GET_ONE_REG: {
>>>> +               struct kvm_one_reg reg;
>>>> +
>>>> +               r = -EFAULT;
>>>> +               if (copy_from_user(&reg, argp, sizeof(reg)))
>>>> +                       break;
>>>> +               if (ioctl == KVM_SET_ONE_REG)
>>>> +                       r = _kvm_set_reg(vcpu, &reg);
>>>> +               else
>>>> +                       r = _kvm_get_reg(vcpu, &reg);
>>>> +               break;
>>>> +       }
>>>> +       default:
>>>> +               r = -ENOIOCTLCMD;
>>>> +               break;
>>>> +       }
>>>> +
>>>> +       vcpu_put(vcpu);
>>>> +       return r;
>>>> +}
>>>> +
>>>>   int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id)
>>>>   {
>>>>          return 0;
>>>> --
>>>> 2.27.0
>>>>
>>



* Re: [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function
  2023-09-11 10:23     ` bibo mao
@ 2023-09-12  3:51       ` Huacai Chen
  0 siblings, 0 replies; 56+ messages in thread
From: Huacai Chen @ 2023-09-12  3:51 UTC (permalink / raw)
  To: bibo mao
  Cc: Tianrui Zhao, linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, Xi Ruoyao

On Mon, Sep 11, 2023 at 6:23 PM bibo mao <maobibo@loongson.cn> wrote:
>
>
>
> > On 2023/9/11 at 18:00, Huacai Chen wrote:
> > Hi, Tianrui,
> >
> > On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
> >>
> >> Implement kvm check vmid and update vmid, the vmid should be checked before
> >> vcpu enter guest.
> >>
> >> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
> >> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
> >> ---
> >>  arch/loongarch/kvm/vmid.c | 66 +++++++++++++++++++++++++++++++++++++++
> >>  1 file changed, 66 insertions(+)
> >>  create mode 100644 arch/loongarch/kvm/vmid.c
> >>
> >> diff --git a/arch/loongarch/kvm/vmid.c b/arch/loongarch/kvm/vmid.c
> >> new file mode 100644
> >> index 0000000000..fc25ddc3b7
> >> --- /dev/null
> >> +++ b/arch/loongarch/kvm/vmid.c
> >> @@ -0,0 +1,66 @@
> >> +// SPDX-License-Identifier: GPL-2.0
> >> +/*
> >> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
> >> + */
> >> +
> >> +#include <linux/kvm_host.h>
> >> +#include "trace.h"
> >> +
> >> +static void _kvm_update_vpid(struct kvm_vcpu *vcpu, int cpu)
> >> +{
> >> +       struct kvm_context *context;
> >> +       unsigned long vpid;
> >> +
> >> +       context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
> >> +       vpid = context->vpid_cache + 1;
> >> +       if (!(vpid & vpid_mask)) {
> >> +               /* finish round of 64 bit loop */
> >> +               if (unlikely(!vpid))
> >> +                       vpid = vpid_mask + 1;
> >> +
> >> +               /* vpid 0 reserved for root */
> >> +               ++vpid;
> >> +
> >> +               /* start new vpid cycle */
> >> +               kvm_flush_tlb_all();
> >> +       }
> >> +
> >> +       context->vpid_cache = vpid;
> >> +       vcpu->arch.vpid = vpid;
> >> +}
> >> +
> >> +void _kvm_check_vmid(struct kvm_vcpu *vcpu)
> >> +{
> >> +       struct kvm_context *context;
> >> +       bool migrated;
> >> +       unsigned long ver, old, vpid;
> >> +       int cpu;
> >> +
> >> +       cpu = smp_processor_id();
> >> +       /*
> >> +        * Are we entering guest context on a different CPU to last time?
> >> +        * If so, the vCPU's guest TLB state on this CPU may be stale.
> >> +        */
> >> +       context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
> >> +       migrated = (vcpu->cpu != cpu);
> >> +
> >> +       /*
> >> +        * Check if our vpid is of an older version
> >> +        *
> >> +        * We also discard the stored vpid if we've executed on
> >> +        * another CPU, as the guest mappings may have changed without
> >> +        * hypervisor knowledge.
> >> +        */
> >> +       ver = vcpu->arch.vpid & ~vpid_mask;
> >> +       old = context->vpid_cache  & ~vpid_mask;
> >> +       if (migrated || (ver != old)) {
> >> +               _kvm_update_vpid(vcpu, cpu);
> >> +               trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
> >> +               vcpu->cpu = cpu;
> >> +       }
> >> +
> >> +       /* Restore GSTAT(0x50).vpid */
> >> +       vpid = (vcpu->arch.vpid & vpid_mask)
> >> +               << CSR_GSTAT_GID_SHIFT;
> >> +       change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
> >> +}
> > I believe that vpid and vmid are both GID in the gstat register, so
> > please unify their names. And I think vpid is better than vmid.
>
> On the 3A5000 processor vpid is the same as vmid; on next-generation processors
> like the 3A6000 they are separate. vpid is per-vcpu and covers the translation
> from gva to gpa; vmid belongs to the whole VM and covers the translation from
> gpa to hpa. All vcpus share the same vmid, so TLB entries indexed by vpid stay
> in effect when shadow TLB entries indexed by vmid are flushed.
>
> Only the VM patch for the 3A6000 has not been submitted yet; the generation
> methods for vpid and vmid will differ considerably. The split is preparation
> for that future processor update :)
If so, then I think there should be a 'vmid' in kvm_arch and a 'vpid'
in kvm_vcpu_arch?
This patch only touches kvm_vcpu_arch, so I think everything here should be vpid.

And again, this code can just be put in main.c.

Huacai

>
> Regards
> Bibo Mao
>
> >
> > Moreover, no need to create a vmid.c file, just putting them in main.c is OK.
> >
> > Huacai
> >
> >> --
> >> 2.27.0
> >>
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 05/30] LoongArch: KVM: Add vcpu related header files
  2023-09-11  8:07   ` Huacai Chen
@ 2023-09-12  8:26     ` zhaotianrui
  0 siblings, 0 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-12  8:26 UTC (permalink / raw)
  To: Huacai Chen
  Cc: linux-kernel, kvm, Paolo Bonzini, WANG Xuerui,
	Greg Kroah-Hartman, loongarch, Jens Axboe, Mark Brown,
	Alex Deucher, Oliver Upton, maobibo, Xi Ruoyao


On 2023/9/11 at 4:07 PM, Huacai Chen wrote:
> Hi, Tianrui,
>
> On Thu, Aug 31, 2023 at 4:30 PM Tianrui Zhao <zhaotianrui@loongson.cn> wrote:
>> Add LoongArch vcpu related header files, including vcpu csr
>> information, irq number defines, and some vcpu interfaces.
>>
>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   arch/loongarch/include/asm/kvm_csr.h   | 222 +++++++++++++++++++++++++
>>   arch/loongarch/include/asm/kvm_vcpu.h  |  95 +++++++++++
>>   arch/loongarch/include/asm/loongarch.h |  19 ++-
>>   arch/loongarch/kvm/trace.h             | 168 +++++++++++++++++++
>>   4 files changed, 499 insertions(+), 5 deletions(-)
>>   create mode 100644 arch/loongarch/include/asm/kvm_csr.h
>>   create mode 100644 arch/loongarch/include/asm/kvm_vcpu.h
>>   create mode 100644 arch/loongarch/kvm/trace.h
>>
>> diff --git a/arch/loongarch/include/asm/kvm_csr.h b/arch/loongarch/include/asm/kvm_csr.h
>> new file mode 100644
>> index 0000000000..e27dcacd00
>> --- /dev/null
>> +++ b/arch/loongarch/include/asm/kvm_csr.h
>> @@ -0,0 +1,222 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#ifndef __ASM_LOONGARCH_KVM_CSR_H__
>> +#define __ASM_LOONGARCH_KVM_CSR_H__
>> +#include <asm/loongarch.h>
>> +#include <asm/kvm_vcpu.h>
>> +#include <linux/uaccess.h>
>> +#include <linux/kvm_host.h>
>> +
>> +/* binutils support virtualization instructions */
>> +#define gcsr_read(csr)                                         \
>> +({                                                             \
>> +       register unsigned long __v;                             \
>> +       __asm__ __volatile__(                                   \
>> +               " gcsrrd %[val], %[reg]\n\t"                    \
>> +               : [val] "=r" (__v)                              \
>> +               : [reg] "i" (csr)                               \
>> +               : "memory");                                    \
>> +       __v;                                                    \
>> +})
>> +
>> +#define gcsr_write(v, csr)                                     \
>> +({                                                             \
>> +       register unsigned long __v = v;                         \
>> +       __asm__ __volatile__ (                                  \
>> +               " gcsrwr %[val], %[reg]\n\t"                    \
>> +               : [val] "+r" (__v)                              \
>> +               : [reg] "i" (csr)                               \
>> +               : "memory");                                    \
>> +})
>> +
>> +#define gcsr_xchg(v, m, csr)                                   \
>> +({                                                             \
>> +       register unsigned long __v = v;                         \
>> +       __asm__ __volatile__(                                   \
>> +               " gcsrxchg %[val], %[mask], %[reg]\n\t"         \
>> +               : [val] "+r" (__v)                              \
>> +               : [mask] "r" (m), [reg] "i" (csr)               \
>> +               : "memory");                                    \
>> +       __v;                                                    \
>> +})
>> +
>> +/* Guest CSRS read and write */
>> +#define read_gcsr_crmd()               gcsr_read(LOONGARCH_CSR_CRMD)
>> +#define write_gcsr_crmd(val)           gcsr_write(val, LOONGARCH_CSR_CRMD)
>> +#define read_gcsr_prmd()               gcsr_read(LOONGARCH_CSR_PRMD)
>> +#define write_gcsr_prmd(val)           gcsr_write(val, LOONGARCH_CSR_PRMD)
>> +#define read_gcsr_euen()               gcsr_read(LOONGARCH_CSR_EUEN)
>> +#define write_gcsr_euen(val)           gcsr_write(val, LOONGARCH_CSR_EUEN)
>> +#define read_gcsr_misc()               gcsr_read(LOONGARCH_CSR_MISC)
>> +#define write_gcsr_misc(val)           gcsr_write(val, LOONGARCH_CSR_MISC)
>> +#define read_gcsr_ecfg()               gcsr_read(LOONGARCH_CSR_ECFG)
>> +#define write_gcsr_ecfg(val)           gcsr_write(val, LOONGARCH_CSR_ECFG)
>> +#define read_gcsr_estat()              gcsr_read(LOONGARCH_CSR_ESTAT)
>> +#define write_gcsr_estat(val)          gcsr_write(val, LOONGARCH_CSR_ESTAT)
>> +#define read_gcsr_era()                        gcsr_read(LOONGARCH_CSR_ERA)
>> +#define write_gcsr_era(val)            gcsr_write(val, LOONGARCH_CSR_ERA)
>> +#define read_gcsr_badv()               gcsr_read(LOONGARCH_CSR_BADV)
>> +#define write_gcsr_badv(val)           gcsr_write(val, LOONGARCH_CSR_BADV)
>> +#define read_gcsr_badi()               gcsr_read(LOONGARCH_CSR_BADI)
>> +#define write_gcsr_badi(val)           gcsr_write(val, LOONGARCH_CSR_BADI)
>> +#define read_gcsr_eentry()             gcsr_read(LOONGARCH_CSR_EENTRY)
>> +#define write_gcsr_eentry(val)         gcsr_write(val, LOONGARCH_CSR_EENTRY)
>> +
>> +#define read_gcsr_tlbidx()             gcsr_read(LOONGARCH_CSR_TLBIDX)
>> +#define write_gcsr_tlbidx(val)         gcsr_write(val, LOONGARCH_CSR_TLBIDX)
>> +#define read_gcsr_tlbhi()              gcsr_read(LOONGARCH_CSR_TLBEHI)
>> +#define write_gcsr_tlbhi(val)          gcsr_write(val, LOONGARCH_CSR_TLBEHI)
>> +#define read_gcsr_tlblo0()             gcsr_read(LOONGARCH_CSR_TLBELO0)
>> +#define write_gcsr_tlblo0(val)         gcsr_write(val, LOONGARCH_CSR_TLBELO0)
>> +#define read_gcsr_tlblo1()             gcsr_read(LOONGARCH_CSR_TLBELO1)
>> +#define write_gcsr_tlblo1(val)         gcsr_write(val, LOONGARCH_CSR_TLBELO1)
>> +
>> +#define read_gcsr_asid()               gcsr_read(LOONGARCH_CSR_ASID)
>> +#define write_gcsr_asid(val)           gcsr_write(val, LOONGARCH_CSR_ASID)
>> +#define read_gcsr_pgdl()               gcsr_read(LOONGARCH_CSR_PGDL)
>> +#define write_gcsr_pgdl(val)           gcsr_write(val, LOONGARCH_CSR_PGDL)
>> +#define read_gcsr_pgdh()               gcsr_read(LOONGARCH_CSR_PGDH)
>> +#define write_gcsr_pgdh(val)           gcsr_write(val, LOONGARCH_CSR_PGDH)
>> +#define write_gcsr_pgd(val)            gcsr_write(val, LOONGARCH_CSR_PGD)
>> +#define read_gcsr_pgd()                        gcsr_read(LOONGARCH_CSR_PGD)
>> +#define read_gcsr_pwctl0()             gcsr_read(LOONGARCH_CSR_PWCTL0)
>> +#define write_gcsr_pwctl0(val)         gcsr_write(val, LOONGARCH_CSR_PWCTL0)
>> +#define read_gcsr_pwctl1()             gcsr_read(LOONGARCH_CSR_PWCTL1)
>> +#define write_gcsr_pwctl1(val)         gcsr_write(val, LOONGARCH_CSR_PWCTL1)
>> +#define read_gcsr_stlbpgsize()         gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
>> +#define write_gcsr_stlbpgsize(val)     gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
>> +#define read_gcsr_rvacfg()             gcsr_read(LOONGARCH_CSR_RVACFG)
>> +#define write_gcsr_rvacfg(val)         gcsr_write(val, LOONGARCH_CSR_RVACFG)
>> +
>> +#define read_gcsr_cpuid()              gcsr_read(LOONGARCH_CSR_CPUID)
>> +#define write_gcsr_cpuid(val)          gcsr_write(val, LOONGARCH_CSR_CPUID)
>> +#define read_gcsr_prcfg1()             gcsr_read(LOONGARCH_CSR_PRCFG1)
>> +#define write_gcsr_prcfg1(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG1)
>> +#define read_gcsr_prcfg2()             gcsr_read(LOONGARCH_CSR_PRCFG2)
>> +#define write_gcsr_prcfg2(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG2)
>> +#define read_gcsr_prcfg3()             gcsr_read(LOONGARCH_CSR_PRCFG3)
>> +#define write_gcsr_prcfg3(val)         gcsr_write(val, LOONGARCH_CSR_PRCFG3)
>> +
>> +#define read_gcsr_kscratch0()          gcsr_read(LOONGARCH_CSR_KS0)
>> +#define write_gcsr_kscratch0(val)      gcsr_write(val, LOONGARCH_CSR_KS0)
>> +#define read_gcsr_kscratch1()          gcsr_read(LOONGARCH_CSR_KS1)
>> +#define write_gcsr_kscratch1(val)      gcsr_write(val, LOONGARCH_CSR_KS1)
>> +#define read_gcsr_kscratch2()          gcsr_read(LOONGARCH_CSR_KS2)
>> +#define write_gcsr_kscratch2(val)      gcsr_write(val, LOONGARCH_CSR_KS2)
>> +#define read_gcsr_kscratch3()          gcsr_read(LOONGARCH_CSR_KS3)
>> +#define write_gcsr_kscratch3(val)      gcsr_write(val, LOONGARCH_CSR_KS3)
>> +#define read_gcsr_kscratch4()          gcsr_read(LOONGARCH_CSR_KS4)
>> +#define write_gcsr_kscratch4(val)      gcsr_write(val, LOONGARCH_CSR_KS4)
>> +#define read_gcsr_kscratch5()          gcsr_read(LOONGARCH_CSR_KS5)
>> +#define write_gcsr_kscratch5(val)      gcsr_write(val, LOONGARCH_CSR_KS5)
>> +#define read_gcsr_kscratch6()          gcsr_read(LOONGARCH_CSR_KS6)
>> +#define write_gcsr_kscratch6(val)      gcsr_write(val, LOONGARCH_CSR_KS6)
>> +#define read_gcsr_kscratch7()          gcsr_read(LOONGARCH_CSR_KS7)
>> +#define write_gcsr_kscratch7(val)      gcsr_write(val, LOONGARCH_CSR_KS7)
>> +
>> +#define read_gcsr_timerid()            gcsr_read(LOONGARCH_CSR_TMID)
>> +#define write_gcsr_timerid(val)                gcsr_write(val, LOONGARCH_CSR_TMID)
>> +#define read_gcsr_timercfg()           gcsr_read(LOONGARCH_CSR_TCFG)
>> +#define write_gcsr_timercfg(val)       gcsr_write(val, LOONGARCH_CSR_TCFG)
>> +#define read_gcsr_timertick()          gcsr_read(LOONGARCH_CSR_TVAL)
>> +#define write_gcsr_timertick(val)      gcsr_write(val, LOONGARCH_CSR_TVAL)
>> +#define read_gcsr_timeroffset()                gcsr_read(LOONGARCH_CSR_CNTC)
>> +#define write_gcsr_timeroffset(val)    gcsr_write(val, LOONGARCH_CSR_CNTC)
>> +
>> +#define read_gcsr_llbctl()             gcsr_read(LOONGARCH_CSR_LLBCTL)
>> +#define write_gcsr_llbctl(val)         gcsr_write(val, LOONGARCH_CSR_LLBCTL)
>> +
>> +#define read_gcsr_tlbrentry()          gcsr_read(LOONGARCH_CSR_TLBRENTRY)
>> +#define write_gcsr_tlbrentry(val)      gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
>> +#define read_gcsr_tlbrbadv()           gcsr_read(LOONGARCH_CSR_TLBRBADV)
>> +#define write_gcsr_tlbrbadv(val)       gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
>> +#define read_gcsr_tlbrera()            gcsr_read(LOONGARCH_CSR_TLBRERA)
>> +#define write_gcsr_tlbrera(val)                gcsr_write(val, LOONGARCH_CSR_TLBRERA)
>> +#define read_gcsr_tlbrsave()           gcsr_read(LOONGARCH_CSR_TLBRSAVE)
>> +#define write_gcsr_tlbrsave(val)       gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
>> +#define read_gcsr_tlbrelo0()           gcsr_read(LOONGARCH_CSR_TLBRELO0)
>> +#define write_gcsr_tlbrelo0(val)       gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
>> +#define read_gcsr_tlbrelo1()           gcsr_read(LOONGARCH_CSR_TLBRELO1)
>> +#define write_gcsr_tlbrelo1(val)       gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
>> +#define read_gcsr_tlbrehi()            gcsr_read(LOONGARCH_CSR_TLBREHI)
>> +#define write_gcsr_tlbrehi(val)                gcsr_write(val, LOONGARCH_CSR_TLBREHI)
>> +#define read_gcsr_tlbrprmd()           gcsr_read(LOONGARCH_CSR_TLBRPRMD)
>> +#define write_gcsr_tlbrprmd(val)       gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
>> +
>> +#define read_gcsr_directwin0()         gcsr_read(LOONGARCH_CSR_DMWIN0)
>> +#define write_gcsr_directwin0(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN0)
>> +#define read_gcsr_directwin1()         gcsr_read(LOONGARCH_CSR_DMWIN1)
>> +#define write_gcsr_directwin1(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN1)
>> +#define read_gcsr_directwin2()         gcsr_read(LOONGARCH_CSR_DMWIN2)
>> +#define write_gcsr_directwin2(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN2)
>> +#define read_gcsr_directwin3()         gcsr_read(LOONGARCH_CSR_DMWIN3)
>> +#define write_gcsr_directwin3(val)     gcsr_write(val, LOONGARCH_CSR_DMWIN3)
>> +
>> +/* Guest related CSRs */
>> +#define read_csr_gtlbc()               csr_read64(LOONGARCH_CSR_GTLBC)
>> +#define write_csr_gtlbc(val)           csr_write64(val, LOONGARCH_CSR_GTLBC)
>> +#define read_csr_trgp()                        csr_read64(LOONGARCH_CSR_TRGP)
>> +#define read_csr_gcfg()                        csr_read64(LOONGARCH_CSR_GCFG)
>> +#define write_csr_gcfg(val)            csr_write64(val, LOONGARCH_CSR_GCFG)
>> +#define read_csr_gstat()               csr_read64(LOONGARCH_CSR_GSTAT)
>> +#define write_csr_gstat(val)           csr_write64(val, LOONGARCH_CSR_GSTAT)
>> +#define read_csr_gintc()               csr_read64(LOONGARCH_CSR_GINTC)
>> +#define write_csr_gintc(val)           csr_write64(val, LOONGARCH_CSR_GINTC)
>> +#define read_csr_gcntc()               csr_read64(LOONGARCH_CSR_GCNTC)
>> +#define write_csr_gcntc(val)           csr_write64(val, LOONGARCH_CSR_GCNTC)
>> +
>> +#define __BUILD_GCSR_OP(name)          __BUILD_CSR_COMMON(gcsr_##name)
>> +
>> +__BUILD_GCSR_OP(llbctl)
>> +__BUILD_GCSR_OP(tlbidx)
>> +__BUILD_CSR_OP(gcfg)
>> +__BUILD_CSR_OP(gstat)
>> +__BUILD_CSR_OP(gtlbc)
>> +__BUILD_CSR_OP(gintc)
>> +
>> +#define set_gcsr_estat(val)    \
>> +       gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
>> +#define clear_gcsr_estat(val)  \
>> +       gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
>> +
>> +#define kvm_read_hw_gcsr(id)           gcsr_read(id)
>> +#define kvm_write_hw_gcsr(csr, id, val)        gcsr_write(val, id)
>> +
>> +int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *v);
>> +int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 v);
>> +
>> +int _kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
>> +
>> +#define kvm_save_hw_gcsr(csr, gid)     (csr->csrs[gid] = gcsr_read(gid))
>> +#define kvm_restore_hw_gcsr(csr, gid)  (gcsr_write(csr->csrs[gid], gid))
>> +
>> +static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
>> +{
>> +       return csr->csrs[gid];
>> +}
>> +
>> +static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr,
>> +                                             int gid, unsigned long val)
>> +{
>> +       csr->csrs[gid] = val;
>> +}
>> +
>> +static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr,
>> +                                           int gid, unsigned long val)
>> +{
>> +       csr->csrs[gid] |= val;
>> +}
>> +
>> +static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr,
>> +                                              int gid, unsigned long mask,
>> +                                              unsigned long val)
>> +{
>> +       unsigned long _mask = mask;
>> +
>> +       csr->csrs[gid] &= ~_mask;
>> +       csr->csrs[gid] |= val & _mask;
>> +}
>> +#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
>> diff --git a/arch/loongarch/include/asm/kvm_vcpu.h b/arch/loongarch/include/asm/kvm_vcpu.h
>> new file mode 100644
>> index 0000000000..3d23a656fe
>> --- /dev/null
>> +++ b/arch/loongarch/include/asm/kvm_vcpu.h
>> @@ -0,0 +1,95 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
>> +#define __ASM_LOONGARCH_KVM_VCPU_H__
>> +
>> +#include <linux/kvm_host.h>
>> +#include <asm/loongarch.h>
>> +
>> +/* Controlled by 0x5 guest exst */
>> +#define CPU_SIP0                       (_ULCAST_(1))
>> +#define CPU_SIP1                       (_ULCAST_(1) << 1)
>> +#define CPU_PMU                                (_ULCAST_(1) << 10)
>> +#define CPU_TIMER                      (_ULCAST_(1) << 11)
>> +#define CPU_IPI                                (_ULCAST_(1) << 12)
>> +
>> +/* Controlled by 0x52 guest exception VIP
>> + * aligned to exst bit 5~12
>> + */
>> +#define CPU_IP0                                (_ULCAST_(1))
>> +#define CPU_IP1                                (_ULCAST_(1) << 1)
>> +#define CPU_IP2                                (_ULCAST_(1) << 2)
>> +#define CPU_IP3                                (_ULCAST_(1) << 3)
>> +#define CPU_IP4                                (_ULCAST_(1) << 4)
>> +#define CPU_IP5                                (_ULCAST_(1) << 5)
>> +#define CPU_IP6                                (_ULCAST_(1) << 6)
>> +#define CPU_IP7                                (_ULCAST_(1) << 7)
>> +
>> +#define MNSEC_PER_SEC                  (NSEC_PER_SEC >> 20)
>> +
>> +/* KVM_IRQ_LINE irq field index values */
>> +#define KVM_LOONGSON_IRQ_TYPE_SHIFT    24
>> +#define KVM_LOONGSON_IRQ_TYPE_MASK     0xff
>> +#define KVM_LOONGSON_IRQ_VCPU_SHIFT    16
>> +#define KVM_LOONGSON_IRQ_VCPU_MASK     0xff
>> +#define KVM_LOONGSON_IRQ_NUM_SHIFT     0
>> +#define KVM_LOONGSON_IRQ_NUM_MASK      0xffff
>> +
>> +/* Irq_type field */
>> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IP   0
>> +#define KVM_LOONGSON_IRQ_TYPE_CPU_IO   1
>> +#define KVM_LOONGSON_IRQ_TYPE_HT       2
>> +#define KVM_LOONGSON_IRQ_TYPE_MSI      3
>> +#define KVM_LOONGSON_IRQ_TYPE_IOAPIC   4
>> +#define KVM_LOONGSON_IRQ_TYPE_ROUTE    5
>> +
>> +/* Out-of-kernel GIC cpu interrupt injection irq_number field */
>> +#define KVM_LOONGSON_IRQ_CPU_IRQ       0
>> +#define KVM_LOONGSON_IRQ_CPU_FIQ       1
>> +#define KVM_LOONGSON_CPU_IP_NUM                8
>> +
>> +typedef union loongarch_instruction  larch_inst;
>> +typedef int (*exit_handle_fn)(struct kvm_vcpu *);
>> +
>> +int  _kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
>> +int  _kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
>> +int  _kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
>> +int  _kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
>> +int  _kvm_emu_idle(struct kvm_vcpu *vcpu);
>> +int  _kvm_handle_pv_hcall(struct kvm_vcpu *vcpu);
>> +int  _kvm_pending_timer(struct kvm_vcpu *vcpu);
>> +int  _kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
>> +void _kvm_deliver_intr(struct kvm_vcpu *vcpu);
>> +
>> +void kvm_own_fpu(struct kvm_vcpu *vcpu);
>> +void kvm_lose_fpu(struct kvm_vcpu *vcpu);
>> +void kvm_save_fpu(struct loongarch_fpu *fpu);
>> +void kvm_restore_fpu(struct loongarch_fpu *fpu);
>> +void kvm_restore_fcsr(struct loongarch_fpu *fpu);
>> +
>> +void kvm_acquire_timer(struct kvm_vcpu *vcpu);
>> +void kvm_reset_timer(struct kvm_vcpu *vcpu);
>> +void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
>> +void kvm_restore_timer(struct kvm_vcpu *vcpu);
>> +void kvm_save_timer(struct kvm_vcpu *vcpu);
>> +
>> +int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq);
>> +/*
>> + * Loongarch KVM guest interrupt handling
>> + */
>> +static inline void _kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
>> +{
>> +       set_bit(irq, &vcpu->arch.irq_pending);
>> +       clear_bit(irq, &vcpu->arch.irq_clear);
>> +}
>> +
>> +static inline void _kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
>> +{
>> +       clear_bit(irq, &vcpu->arch.irq_pending);
>> +       set_bit(irq, &vcpu->arch.irq_clear);
>> +}
>> +
>> +#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
>> diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
>> index 10748a20a2..b9044c8dfa 100644
>> --- a/arch/loongarch/include/asm/loongarch.h
>> +++ b/arch/loongarch/include/asm/loongarch.h
>> @@ -269,6 +269,7 @@ __asm__(".macro     parse_r var r\n\t"
>>   #define LOONGARCH_CSR_ECFG             0x4     /* Exception config */
>>   #define  CSR_ECFG_VS_SHIFT             16
>>   #define  CSR_ECFG_VS_WIDTH             3
>> +#define  CSR_ECFG_VS_SHIFT_END         (CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
>>   #define  CSR_ECFG_VS                   (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
>>   #define  CSR_ECFG_IM_SHIFT             0
>>   #define  CSR_ECFG_IM_WIDTH             14
>> @@ -357,13 +358,14 @@ __asm__(".macro   parse_r var r\n\t"
>>   #define  CSR_TLBLO1_V                  (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
>>
>>   #define LOONGARCH_CSR_GTLBC            0x15    /* Guest TLB control */
>> -#define  CSR_GTLBC_RID_SHIFT           16
>> -#define  CSR_GTLBC_RID_WIDTH           8
>> -#define  CSR_GTLBC_RID                 (_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
>> +#define  CSR_GTLBC_TGID_SHIFT          16
>> +#define  CSR_GTLBC_TGID_WIDTH          8
>> +#define  CSR_GTLBC_TGID_SHIFT_END      (CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
>> +#define  CSR_GTLBC_TGID                        (_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
>>   #define  CSR_GTLBC_TOTI_SHIFT          13
>>   #define  CSR_GTLBC_TOTI                        (_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
>> -#define  CSR_GTLBC_USERID_SHIFT                12
>> -#define  CSR_GTLBC_USERID              (_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
>> +#define  CSR_GTLBC_USETGID_SHIFT       12
>> +#define  CSR_GTLBC_USETGID             (_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
>>   #define  CSR_GTLBC_GMTLBSZ_SHIFT       0
>>   #define  CSR_GTLBC_GMTLBSZ_WIDTH       6
>>   #define  CSR_GTLBC_GMTLBSZ             (_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
>> @@ -518,6 +520,7 @@ __asm__(".macro     parse_r var r\n\t"
>>   #define LOONGARCH_CSR_GSTAT            0x50    /* Guest status */
>>   #define  CSR_GSTAT_GID_SHIFT           16
>>   #define  CSR_GSTAT_GID_WIDTH           8
>> +#define  CSR_GSTAT_GID_SHIFT_END       (CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
>>   #define  CSR_GSTAT_GID                 (_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
>>   #define  CSR_GSTAT_GIDBIT_SHIFT                4
>>   #define  CSR_GSTAT_GIDBIT_WIDTH                6
>> @@ -568,6 +571,12 @@ __asm__(".macro    parse_r var r\n\t"
>>   #define  CSR_GCFG_MATC_GUEST           (_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
>>   #define  CSR_GCFG_MATC_ROOT            (_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
>>   #define  CSR_GCFG_MATC_NEST            (_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
>> +#define  CSR_GCFG_MATP_NEST_SHIFT      2
>> +#define  CSR_GCFG_MATP_NEST            (_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT)
>> +#define  CSR_GCFG_MATP_ROOT_SHIFT      1
>> +#define  CSR_GCFG_MATP_ROOT            (_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT)
>> +#define  CSR_GCFG_MATP_GUEST_SHIFT     0
>> +#define  CSR_GCFG_MATP_GUEST           (_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT)
>>
>>   #define LOONGARCH_CSR_GINTC            0x52    /* Guest interrupt control */
>>   #define  CSR_GINTC_HC_SHIFT            16
>> diff --git a/arch/loongarch/kvm/trace.h b/arch/loongarch/kvm/trace.h
>> new file mode 100644
>> index 0000000000..17b28d94d5
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/trace.h
>> @@ -0,0 +1,168 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
>> +#define _TRACE_KVM_H
>> +
>> +#include <linux/tracepoint.h>
>> +#include <asm/kvm_csr.h>
>> +
>> +#undef TRACE_SYSTEM
>> +#define TRACE_SYSTEM   kvm
>> +
>> +/*
>> + * Tracepoints for VM enters
>> + */
>> +DECLARE_EVENT_CLASS(kvm_transition,
>> +       TP_PROTO(struct kvm_vcpu *vcpu),
>> +       TP_ARGS(vcpu),
>> +       TP_STRUCT__entry(
>> +               __field(unsigned long, pc)
>> +       ),
>> +
>> +       TP_fast_assign(
>> +               __entry->pc = vcpu->arch.pc;
>> +       ),
>> +
>> +       TP_printk("PC: 0x%08lx",
>> +                 __entry->pc)
>> +);
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_enter,
>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>> +            TP_ARGS(vcpu));
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_reenter,
>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>> +            TP_ARGS(vcpu));
>> +
>> +DEFINE_EVENT(kvm_transition, kvm_out,
>> +            TP_PROTO(struct kvm_vcpu *vcpu),
>> +            TP_ARGS(vcpu));
>> +
>> +/* Further exit reasons */
>> +#define KVM_TRACE_EXIT_IDLE            64
>> +#define KVM_TRACE_EXIT_CACHE           65
>> +#define KVM_TRACE_EXIT_SIGNAL          66
>> +
>> +/* Tracepoints for VM exits */
>> +#define kvm_trace_symbol_exit_types                    \
>> +       { KVM_TRACE_EXIT_IDLE,          "IDLE" },       \
>> +       { KVM_TRACE_EXIT_CACHE,         "CACHE" },      \
>> +       { KVM_TRACE_EXIT_SIGNAL,        "Signal" }
> Consider using Idle and Cache, which would match the style of Signal?
The signal trace point is not used, and I will remove it.
>
> And why are the types here not the same as those in kvm_vcpu_stat?
idle_exits is the statistic that counts idle exits, so it is different
from the idle trace point.
>
> struct kvm_vcpu_stat {
>   struct kvm_vcpu_stat_generic generic;
>   u64 idle_exits;
>   u64 signal_exits;
>   u64 int_exits;
>   u64 cpucfg_exits;
> };
>
>> +
>> +TRACE_EVENT(kvm_exit_gspr,
>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
>> +           TP_ARGS(vcpu, inst_word),
>> +           TP_STRUCT__entry(
>> +                       __field(unsigned int, inst_word)
>> +           ),
>> +
>> +           TP_fast_assign(
>> +                       __entry->inst_word = inst_word;
>> +           ),
>> +
>> +           TP_printk("inst word: 0x%08x",
>> +                     __entry->inst_word)
>> +);
>> +
>> +
>> +DECLARE_EVENT_CLASS(kvm_exit,
>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +           TP_ARGS(vcpu, reason),
>> +           TP_STRUCT__entry(
>> +                       __field(unsigned long, pc)
>> +                       __field(unsigned int, reason)
>> +           ),
>> +
>> +           TP_fast_assign(
>> +                       __entry->pc = vcpu->arch.pc;
>> +                       __entry->reason = reason;
>> +           ),
>> +
>> +           TP_printk("[%s]PC: 0x%08lx",
>> +                     __print_symbolic(__entry->reason,
>> +                                      kvm_trace_symbol_exit_types),
>> +                     __entry->pc)
>> +);
>> +
>> +DEFINE_EVENT(kvm_exit, kvm_exit_idle,
>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +            TP_ARGS(vcpu, reason));
>> +
>> +DEFINE_EVENT(kvm_exit, kvm_exit_cache,
>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +            TP_ARGS(vcpu, reason));
>> +
>> +DEFINE_EVENT(kvm_exit, kvm_exit,
> I'm not sure, but it may be DEFINE_EVENT(kvm_exit, kvm_exit_signal),
> which corresponds to the types above?
The signal trace point is not used, so the kvm_exit_signal event is not
needed.

Thanks
Tianrui Zhao
>
>
> Huacai
>
>> +            TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
>> +            TP_ARGS(vcpu, reason));
>> +
>> +#define KVM_TRACE_AUX_RESTORE          0
>> +#define KVM_TRACE_AUX_SAVE             1
>> +#define KVM_TRACE_AUX_ENABLE           2
>> +#define KVM_TRACE_AUX_DISABLE          3
>> +#define KVM_TRACE_AUX_DISCARD          4
>> +
>> +#define KVM_TRACE_AUX_FPU              1
>> +
>> +#define kvm_trace_symbol_aux_op                                \
>> +       { KVM_TRACE_AUX_RESTORE,        "restore" },    \
>> +       { KVM_TRACE_AUX_SAVE,           "save" },       \
>> +       { KVM_TRACE_AUX_ENABLE,         "enable" },     \
>> +       { KVM_TRACE_AUX_DISABLE,        "disable" },    \
>> +       { KVM_TRACE_AUX_DISCARD,        "discard" }
>> +
>> +#define kvm_trace_symbol_aux_state                     \
>> +       { KVM_TRACE_AUX_FPU,     "FPU" }
>> +
>> +TRACE_EVENT(kvm_aux,
>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
>> +                    unsigned int state),
>> +           TP_ARGS(vcpu, op, state),
>> +           TP_STRUCT__entry(
>> +                       __field(unsigned long, pc)
>> +                       __field(u8, op)
>> +                       __field(u8, state)
>> +           ),
>> +
>> +           TP_fast_assign(
>> +                       __entry->pc = vcpu->arch.pc;
>> +                       __entry->op = op;
>> +                       __entry->state = state;
>> +           ),
>> +
>> +           TP_printk("%s %s PC: 0x%08lx",
>> +                     __print_symbolic(__entry->op,
>> +                                      kvm_trace_symbol_aux_op),
>> +                     __print_symbolic(__entry->state,
>> +                                      kvm_trace_symbol_aux_state),
>> +                     __entry->pc)
>> +);
>> +
>> +TRACE_EVENT(kvm_vpid_change,
>> +           TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
>> +           TP_ARGS(vcpu, vpid),
>> +           TP_STRUCT__entry(
>> +                       __field(unsigned long, vpid)
>> +           ),
>> +
>> +           TP_fast_assign(
>> +                       __entry->vpid = vpid;
>> +           ),
>> +
>> +           TP_printk("vpid: 0x%08lx",
>> +                     __entry->vpid)
>> +);
>> +
>> +#endif /* _TRACE_LOONGARCH64_KVM_H */
>> +
>> +#undef TRACE_INCLUDE_PATH
>> +#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm
>> +#undef TRACE_INCLUDE_FILE
>> +#define TRACE_INCLUDE_FILE trace
>> +
>> +/* This part must be outside protection */
>> +#include <trace/define_trace.h>
>> --
>> 2.27.0
>>



* Re: [PATCH v20 19/30] LoongArch: KVM: Implement kvm mmu operations
  2023-09-07 19:57   ` WANG Xuerui
@ 2023-09-12  9:42     ` zhaotianrui
  0 siblings, 0 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-12  9:42 UTC (permalink / raw)
  To: WANG Xuerui, linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, Greg Kroah-Hartman, loongarch,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo,
	Xi Ruoyao


On 2023/9/8 at 3:57 AM, WANG Xuerui wrote:
> On 8/31/23 16:30, Tianrui Zhao wrote:
>> Implement the LoongArch KVM MMU; it is used to translate gpa to hpa when
>> the guest exits because of an address translation exception. This patch
>> implements allocating the gpa page table, searching for a gpa in it, and
>> flushing guest gpa mappings from the table.
>>
>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   arch/loongarch/kvm/mmu.c | 678 +++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 678 insertions(+)
>>   create mode 100644 arch/loongarch/kvm/mmu.c
>>
>> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
>> new file mode 100644
>> index 0000000000..4bb20393f4
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/mmu.c
>> @@ -0,0 +1,678 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#include <linux/highmem.h>
>> +#include <linux/page-flags.h>
>> +#include <linux/kvm_host.h>
>> +#include <linux/uaccess.h>
>> +#include <asm/mmu_context.h>
>> +#include <asm/pgalloc.h>
>> +#include <asm/tlb.h>
>> +
>> +/*
>> + * KVM_MMU_CACHE_MIN_PAGES is the number of GPA page table translation levels
>> + * for which pages need to be cached.
>> + */
>> +#define KVM_MMU_CACHE_MIN_PAGES (CONFIG_PGTABLE_LEVELS - 1)
>> +
>> +static inline void kvm_set_pte(pte_t *ptep, pte_t pteval)
>> +{
>> +    *ptep = pteval;
>> +}
>> +
>> +/**
>> + * kvm_pgd_alloc() - Allocate and initialise a KVM GPA page directory.
>> + *
>> + * Allocate a blank KVM GPA page directory (PGD) for representing guest physical
>> + * to host physical page mappings.
>> + *
>> + * Returns:    Pointer to new KVM GPA page directory.
>> + *        NULL on allocation failure.
>> + */
>> +pgd_t *kvm_pgd_alloc(void)
>> +{
>> +    pgd_t *pgd;
>> +
>> +    pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 0);
>> +    if (pgd)
>> +        pgd_init((void *)pgd);
>> +
>> +    return pgd;
>> +}
>> +
>> +/*
>> + * Caller must hold kvm->mm_lock
>> + *
>> + * Walk the page tables of kvm to find the PTE corresponding to the
>> + * address @addr. If page tables don't exist for @addr, they will be created
>> + * from the MMU cache if @cache is not NULL.
>> + */
>> +static pte_t *kvm_populate_gpa(struct kvm *kvm,
>> +                struct kvm_mmu_memory_cache *cache,
>> +                unsigned long addr)
>> +{
>> +    pgd_t *pgd;
>> +    p4d_t *p4d;
>> +    pud_t *pud;
>> +    pmd_t *pmd;
>> +
>> +    pgd = kvm->arch.pgd + pgd_index(addr);
>> +    p4d = p4d_offset(pgd, addr);
>> +    if (p4d_none(*p4d)) {
>> +        if (!cache)
>> +            return NULL;
>> +
>> +        pud = kvm_mmu_memory_cache_alloc(cache);
>> +        pud_init(pud);
>> +        p4d_populate(NULL, p4d, pud);
>> +    }
>> +
>> +    pud = pud_offset(p4d, addr);
>> +    if (pud_none(*pud)) {
>> +        if (!cache)
>> +            return NULL;
>> +        pmd = kvm_mmu_memory_cache_alloc(cache);
>> +        pmd_init(pmd);
>> +        pud_populate(NULL, pud, pmd);
>> +    }
>> +
>> +    pmd = pmd_offset(pud, addr);
>> +    if (pmd_none(*pmd)) {
>> +        pte_t *pte;
>> +
>> +        if (!cache)
>> +            return NULL;
>> +        pte = kvm_mmu_memory_cache_alloc(cache);
>> +        clear_page(pte);
>> +        pmd_populate_kernel(NULL, pmd, pte);
>> +    }
>> +
>> +    return pte_offset_kernel(pmd, addr);
>> +}
>> +
>> +typedef int (*kvm_pte_ops)(pte_t *pte);
>> +
>> +struct kvm_ptw_ctx {
>> +    kvm_pte_ops    ops;
>> +    int        need_flush;
>> +};
>> +
>> +static int kvm_ptw_pte(pmd_t *pmd, unsigned long addr, unsigned long end,
>> +            struct kvm_ptw_ctx *context)
>> +{
>> +    pte_t *pte;
>> +    unsigned long next, start;
>> +    int ret;
>> +
>> +    ret = 0;
>> +    start = addr;
>> +    pte = pte_offset_kernel(pmd, addr);
>> +    do {
>> +        next = addr + PAGE_SIZE;
>> +        if (!pte_present(*pte))
>> +            continue;
>> +
>> +        ret |= context->ops(pte);
>> +    } while (pte++, addr = next, addr != end);
>> +
>> +    if (context->need_flush && (start + PMD_SIZE == end)) {
>> +        pte = pte_offset_kernel(pmd, 0);
>> +        pmd_clear(pmd);
>> +        free_page((unsigned long)pte);
>> +    }
>> +
>> +    return ret;
>> +}
>> +
>> +static int kvm_ptw_pmd(pud_t *pud, unsigned long addr, unsigned long end,
>> +            struct kvm_ptw_ctx *context)
>> +{
>> +    pmd_t *pmd;
>> +    unsigned long next, start;
>> +    int ret;
>> +
>> +    ret = 0;
>> +    start = addr;
>> +    pmd = pmd_offset(pud, addr);
>> +    do {
>> +        next = pmd_addr_end(addr, end);
>> +        if (!pmd_present(*pmd))
>> +            continue;
>> +
>> +        ret |= kvm_ptw_pte(pmd, addr, next, context);
>> +    } while (pmd++, addr = next, addr != end);
>> +
>> +#ifndef __PAGETABLE_PMD_FOLDED
>> +    if (context->need_flush && (start + PUD_SIZE == end)) {
>> +        pmd = pmd_offset(pud, 0);
>> +        pud_clear(pud);
>> +        free_page((unsigned long)pmd);
>> +    }
>> +#endif
>> +
>> +    return ret;
>> +}
>> +
>> +static int kvm_ptw_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
>> +            struct kvm_ptw_ctx *context)
>> +{
>> +    p4d_t *p4d;
>> +    pud_t *pud;
>> +    int ret = 0;
>> +    unsigned long next;
>> +#ifndef __PAGETABLE_PUD_FOLDED
>> +    unsigned long start = addr;
>> +#endif
>> +
>> +    p4d = p4d_offset(pgd, addr);
>> +    pud = pud_offset(p4d, addr);
>> +    do {
>> +        next = pud_addr_end(addr, end);
>> +        if (!pud_present(*pud))
>> +            continue;
>> +
>> +        ret |= kvm_ptw_pmd(pud, addr, next, context);
>> +    } while (pud++, addr = next, addr != end);
>> +
>> +#ifndef __PAGETABLE_PUD_FOLDED
>> +    if (context->need_flush && (start + PGDIR_SIZE == end)) {
>> +        pud = pud_offset(p4d, 0);
>> +        p4d_clear(p4d);
>> +        free_page((unsigned long)pud);
>> +    }
>> +#endif
>> +
>> +    return ret;
>> +}
>> +
>> +static int kvm_ptw_pgd(pgd_t *pgd, unsigned long addr, unsigned long end,
>> +            struct kvm_ptw_ctx *context)
>> +{
>> +    unsigned long next;
>> +    int ret;
>> +
>> +    ret = 0;
>> +    if (addr > end - 1)
>> +        return ret;
>> +    pgd = pgd + pgd_index(addr);
>> +    do {
>> +        next = pgd_addr_end(addr, end);
>> +        if (!pgd_present(*pgd))
>> +            continue;
>> +
>> +        ret |= kvm_ptw_pud(pgd, addr, next, context);
>> +    }  while (pgd++, addr = next, addr != end);
>> +
>> +    return ret;
>> +}
>> +
>> +/*
>> + * clear pte entry
>> + */
>> +static int kvm_flush_pte(pte_t *pte)
>> +{
>> +    kvm_set_pte(pte, __pte(0));
>> +    return 1;
>> +}
>> +
>> +/**
>> + * kvm_flush_range() - Flush a range of guest physical addresses.
>> + * @kvm:    KVM pointer.
>> + * @start_gfn:    Guest frame number of first page in GPA range to flush.
>> + * @end_gfn:    Guest frame number of last page in GPA range to flush.
>> + *
>> + * Flushes a range of GPA mappings from the GPA page tables.
>> + *
>> + * The caller must hold the @kvm->mmu_lock spinlock.
>> + *
>> + * Returns:    Whether it's safe to remove the top-level page directory
>> + *        because all lower levels have been removed.
>> + */
>> +static bool kvm_flush_range(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
>> +{
>> +    struct kvm_ptw_ctx ctx;
>> +
>> +    ctx.ops = kvm_flush_pte;
>> +    ctx.need_flush = 1;
>> +
>> +    return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT,
>> +                end_gfn << PAGE_SHIFT, &ctx);
>> +}
>> +
>> +/*
>> + * kvm_mkclean_pte
>> + * Mark a range of guest physical address space clean (writes fault) in
>> + * the VM's GPA page table to allow dirty page tracking.
>> + */
>> +static int kvm_mkclean_pte(pte_t *pte)
>> +{
>> +    pte_t val;
>> +
>> +    val = *pte;
>> +    if (pte_dirty(val)) {
>> +        *pte = pte_mkclean(val);
>> +        return 1;
>> +    }
>> +    return 0;
>> +}
>> +
>> +/*
>> + * kvm_mkclean_gpa_pt() - Make a range of guest physical addresses clean.
>> + * @kvm:    KVM pointer.
>> + * @start_gfn:    Guest frame number of first page in GPA range to flush.
>> + * @end_gfn:    Guest frame number of last page in GPA range to flush.
>> + *
>> + * Make a range of GPA mappings clean so that guest writes will fault
>> + * and trigger dirty page logging.
>> + *
>> + * The caller must hold the @kvm->mmu_lock spinlock.
>> + *
>> + * Returns:    Whether any GPA mappings were modified, which would
>> + *        require derived mappings (GVA page tables & TLB entries) to
>> + *        be invalidated.
>> + */
>> +static int kvm_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
>> +{
>> +    struct kvm_ptw_ctx ctx;
>> +
>> +    ctx.ops = kvm_mkclean_pte;
>> +    ctx.need_flush = 0;
>> +    return kvm_ptw_pgd(kvm->arch.pgd, start_gfn << PAGE_SHIFT,
>> +                end_gfn << PAGE_SHIFT, &ctx);
>> +}
>> +
>> +/*
>> + * kvm_arch_mmu_enable_log_dirty_pt_masked() - write protect dirty pages
>> + * @kvm:    The KVM pointer
>> + * @slot:    The memory slot associated with mask
>> + * @gfn_offset:    The gfn offset in memory slot
>> + * @mask:    The mask of dirty pages at offset 'gfn_offset' in this
>> + *        memory slot to be write protected
>> + *
>> + * Walks the bits set in @mask and write-protects the associated PTEs.
>> + * The caller must acquire @kvm->mmu_lock.
>> + */
>> +void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>> +        struct kvm_memory_slot *slot,
>> +        gfn_t gfn_offset, unsigned long mask)
>> +{
>> +    gfn_t base_gfn = slot->base_gfn + gfn_offset;
>> +    gfn_t start = base_gfn +  __ffs(mask);
> One extra space after the plus sign?
Thanks, I will remove the extra space.
>> +    gfn_t end = base_gfn + __fls(mask) + 1;
>> +
>> +    kvm_mkclean_gpa_pt(kvm, start, end);
>> +}
>> +
>> +void kvm_arch_commit_memory_region(struct kvm *kvm,
>> +                   struct kvm_memory_slot *old,
>> +                   const struct kvm_memory_slot *new,
>> +                   enum kvm_mr_change change)
>> +{
>> +    int needs_flush;
>> +
>> +    /*
>> +     * If dirty page logging is enabled, write protect all pages in
>> +     * the slot ready for dirty logging.
>> +     *
>> +     * There is no need to do this in any of the following cases:
>> +     * CREATE:    No dirty mappings will already exist.
>> +     * MOVE/DELETE:    The old mappings will already have been cleaned
>> +     *        up by kvm_arch_flush_shadow_memslot()
>> +     */
>> +    if (change == KVM_MR_FLAGS_ONLY &&
>> +        (!(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
>> +         new->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
>> +        spin_lock(&kvm->mmu_lock);
>> +        /* Write protect GPA page table entries */
>> +        needs_flush = kvm_mkclean_gpa_pt(kvm, new->base_gfn,
>> +                    new->base_gfn + new->npages);
>> +        if (needs_flush)
>> +            kvm_flush_remote_tlbs(kvm);
>> +        spin_unlock(&kvm->mmu_lock);
>> +    }
>> +}
>> +
>> +void kvm_arch_flush_shadow_all(struct kvm *kvm)
>> +{
>> +    /* Flush whole GPA */
>> +    kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT);
>> +    /* Flush vpid for each vCPU individually */
>> +    kvm_flush_remote_tlbs(kvm);
>> +}
>> +
>> +void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
>> +        struct kvm_memory_slot *slot)
>> +{
>> +    int ret;
>> +
>> +    /*
>> +     * The slot has been made invalid (ready for moving or deletion),
>> +     * so we need to ensure that it can no longer be accessed by any
>> +     * guest vCPUs.
>> +     */
>> +    spin_lock(&kvm->mmu_lock);
>> +    /* Flush slot from GPA */
>> +    ret = kvm_flush_range(kvm, slot->base_gfn,
>> +            slot->base_gfn + slot->npages);
>> +    /* Let implementation do the rest */
>> +    if (ret)
>> +        kvm_flush_remote_tlbs(kvm);
>> +    spin_unlock(&kvm->mmu_lock);
>> +}
>> +
>> +void _kvm_destroy_mm(struct kvm *kvm)
>> +{
>> +    /* It should always be safe to remove after flushing the whole range */
>> +    kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT);
>> +    pgd_free(NULL, kvm->arch.pgd);
>> +    kvm->arch.pgd = NULL;
>> +}
>> +
>> +/*
>> + * Mark a range of guest physical address space old (all accesses
>> + * fault) in the VM's GPA page table to allow detection of commonly
>> + * used pages.
>> + */
>> +static int kvm_mkold_pte(pte_t *pte)
>> +{
>> +    pte_t val;
>> +
>> +    val = *pte;
> "pte_t val = *pte" would be enough... You may want to check the entire 
> patch series for simplifications like this.
Thanks, I will fix this.
>> +    if (pte_young(val)) {
>> +        *pte = pte_mkold(val);
>> +        return 1;
>> +    }
>> +    return 0;
>> +}
>> +
>> +bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>> +{
>> +    return kvm_flush_range(kvm, range->start, range->end);
>> +}
>> +
>> +bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>> +{
>> +    gpa_t gpa = range->start << PAGE_SHIFT;
>> +    pte_t hva_pte = range->pte;
> This has become "range->arg.pte" since commit 3e1efe2b67d3 ("KVM: Wrap 
> kvm_{gfn,hva}_range.pte in a per-action union") which is already 
> inside linux-next.
Thanks, I will update it.
>> +    pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa);
>> +    pte_t old_pte;
>> +
>> +    if (!ptep)
>> +        return false;
>> +
>> +    /* Mapping may need adjusting depending on memslot flags */
>> +    old_pte = *ptep;
>> +    if (range->slot->flags & KVM_MEM_LOG_DIRTY_PAGES && !pte_dirty(old_pte))
>> +        hva_pte = pte_mkclean(hva_pte);
>> +    else if (range->slot->flags & KVM_MEM_READONLY)
>> +        hva_pte = pte_wrprotect(hva_pte);
>> +
>> +    kvm_set_pte(ptep, hva_pte);
>> +
>> +    /* Replacing an absent or old page doesn't need flushes */
>> +    if (!pte_present(old_pte) || !pte_young(old_pte))
>> +        return false;
>> +
>> +    /* Pages swapped, aged, moved, or cleaned require flushes */
>> +    return !pte_present(hva_pte) ||
>> +           !pte_young(hva_pte) ||
>> +           pte_pfn(old_pte) != pte_pfn(hva_pte) ||
>> +           (pte_dirty(old_pte) && !pte_dirty(hva_pte));
>> +}
>> +
>> +bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>> +{
>> +    struct kvm_ptw_ctx ctx;
>> +
>> +    ctx.ops = kvm_mkold_pte;
>> +    ctx.need_flush = 0;
>> +    return kvm_ptw_pgd(kvm->arch.pgd, range->start << PAGE_SHIFT,
>> +                range->end << PAGE_SHIFT, &ctx);
>> +}
>> +
>> +bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>> +{
>> +    gpa_t gpa = range->start << PAGE_SHIFT;
>> +    pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa);
>> +
>> +    if (ptep && pte_present(*ptep) && pte_young(*ptep))
>> +        return true;
>> +
>> +    return false;
>> +}
>> +
>> +/**
>> + * kvm_map_page_fast() - Fast path GPA fault handler.
>> + * @vcpu:        vCPU pointer.
>> + * @gpa:        Guest physical address of fault.
>> + * @write:    Whether the fault was due to a write.
>> + *
>> + * Perform fast path GPA fault handling, doing all that can be done
>> + * without calling into KVM. This handles marking old pages young (for
>> + * idle page tracking), and dirtying of clean pages (for dirty page
>> + * logging).
>> + *
>> + * Returns:    0 on success, in which case we can update derived
>> + *        mappings and resume guest execution.
>> + *        -EFAULT on failure due to absent GPA mapping or write to
>> + *        read-only page, in which case KVM must be consulted.
>> + */
>> +static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa,
>> +                   bool write)
>> +{
>> +    struct kvm *kvm = vcpu->kvm;
>> +    gfn_t gfn = gpa >> PAGE_SHIFT;
>> +    pte_t *ptep;
>> +    kvm_pfn_t pfn = 0;
>> +    bool pfn_valid = false, pfn_dirty = false;
>> +    int ret = 0;
>> +
>> +    spin_lock(&kvm->mmu_lock);
>> +
>> +    /* Fast path - just check GPA page table for an existing entry */
>> +    ptep = kvm_populate_gpa(kvm, NULL, gpa);
>> +    if (!ptep || !pte_present(*ptep)) {
>> +        ret = -EFAULT;
>> +        goto out;
>> +    }
>> +
>> +    /* Track access to pages marked old */
>> +    if (!pte_young(*ptep)) {
>> +        kvm_set_pte(ptep, pte_mkyoung(*ptep));
>> +        pfn = pte_pfn(*ptep);
>> +        pfn_valid = true;
>> +        /* call kvm_set_pfn_accessed() after unlock */
>> +    }
>> +    if (write && !pte_dirty(*ptep)) {
>> +        if (!pte_write(*ptep)) {
>> +            ret = -EFAULT;
>> +            goto out;
>> +        }
>> +
>> +        /* Track dirtying of writeable pages */
>> +        kvm_set_pte(ptep, pte_mkdirty(*ptep));
>> +        pfn = pte_pfn(*ptep);
>> +        pfn_dirty = true;
>> +    }
>> +
>> +out:
>> +    spin_unlock(&kvm->mmu_lock);
>> +    if (pfn_valid)
>> +        kvm_set_pfn_accessed(pfn);
>> +    if (pfn_dirty) {
>> +        mark_page_dirty(kvm, gfn);
>> +        kvm_set_pfn_dirty(pfn);
>> +    }
>> +    return ret;
>> +}
>> +
>> +/**
>> + * kvm_map_page() - Map a guest physical page.
>> + * @vcpu:        vCPU pointer.
>> + * @gpa:        Guest physical address of fault.
>> + * @write:    Whether the fault was due to a write.
>> + *
>> + * Handle GPA faults by creating a new GPA mapping (or updating an
>> + * existing one).
>> + *
>> + * This takes care of marking pages young or dirty (idle/dirty page
>> + * tracking), asking KVM for the corresponding PFN, and creating a
>> + * mapping in the GPA page tables. Derived mappings (GVA page tables
>> + * and TLBs) must be handled by the caller.
>> + *
>> + * Returns:    0 on success
>> + *        -EFAULT if there is no memory region at @gpa or a write was
>> + *        attempted to a read-only memory region. This is usually
>> + *        handled as an MMIO access.
>> + */
>> +static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>> +{
>> +    bool writeable;
>> +    int srcu_idx, err = 0, retry_no = 0;
>> +    unsigned long hva;
>> +    unsigned long mmu_seq;
>> +    unsigned long prot_bits;
>> +    pte_t *ptep, new_pte;
>> +    kvm_pfn_t pfn;
>> +    gfn_t gfn = gpa >> PAGE_SHIFT;
>> +    struct vm_area_struct *vma;
>> +    struct kvm *kvm = vcpu->kvm;
>> +    struct kvm_memory_slot *memslot;
>> +    struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
>> +
>> +    /* Try the fast path to handle old / clean pages */
>> +    srcu_idx = srcu_read_lock(&kvm->srcu);
>> +    err = kvm_map_page_fast(vcpu, gpa, write);
>> +    if (!err)
>> +        goto out;
>> +
>> +    memslot = gfn_to_memslot(kvm, gfn);
>> +    hva = gfn_to_hva_memslot_prot(memslot, gfn, &writeable);
>> +    if (kvm_is_error_hva(hva) || (write && !writeable))
>> +        goto out;
>> +
>> +    mmap_read_lock(current->mm);
>> +    vma = find_vma_intersection(current->mm, hva, hva + 1);
>> +    if (unlikely(!vma)) {
>> +        kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
>> +        mmap_read_unlock(current->mm);
>> +        err = -EFAULT;
>> +        goto out;
>> +    }
>> +    mmap_read_unlock(current->mm);
>> +
>> +    /* We need a minimum of cached pages ready for page table creation */
>> +    err = kvm_mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES);
>> +    if (err)
>> +        goto out;
>> +
>> +retry:
>> +    /*
>> +     * Used to check for invalidations in progress, of the pfn that is
>> +     * returned by gfn_to_pfn_prot() below.
>> +     */
>> +    mmu_seq = kvm->mmu_invalidate_seq;
>> +    /*
>> +     * Ensure the read of mmu_invalidate_seq isn't reordered with PTE
>> +     * reads in gfn_to_pfn_prot() (which calls get_user_pages()), so
>> +     * that we don't risk the page we get a reference to getting
>> +     * unmapped before we have a chance to grab the mmu_lock without
>> +     * mmu_invalidate_retry() noticing.
>> +     *
>> +     * This smp_rmb() pairs with the effective smp_wmb() of the
>> +     * combination of the pte_unmap_unlock() after the PTE is zapped,
>> +     * and the spin_lock() in kvm_mmu_invalidate_<page|range_end>()
>> +     * before mmu_invalidate_seq is incremented.
>> +     */
>> +    smp_rmb();
>> +
>> +    /* Slow path - ask KVM core whether we can access this GPA */
>> +    pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable);
>> +    if (is_error_noslot_pfn(pfn)) {
>> +        err = -EFAULT;
>> +        goto out;
>> +    }
>> +
>> +    /* Check if an invalidation has taken place since we got pfn */
>> +    if (mmu_invalidate_retry(kvm, mmu_seq)) {
>> +        /*
> Wrong indentation?
I will fix this indentation.

Thanks
Tianrui Zhao
>> +         * This can happen when mappings are changed asynchronously, but
>> +         * also synchronously if a COW is triggered by
>> +         * gfn_to_pfn_prot().
>> +         */
>> +        kvm_set_pfn_accessed(pfn);
>> +        kvm_release_pfn_clean(pfn);
>> +        if (retry_no > 100) {
>> +            retry_no = 0;
>> +            schedule();
>> +        }
>> +        retry_no++;
>> +        goto retry;
>> +    }
>> +
>> +    /*
>> +     * For emulated devices such as virtio devices, the actual cache
>> +     * attribute is determined by the physical machine.
>> +     * For a pass-through physical device, it should be uncacheable.
>> +     */
>> +    prot_bits = _PAGE_PRESENT | __READABLE;
>> +    if (vma->vm_flags & (VM_IO | VM_PFNMAP))
>> +        prot_bits |= _CACHE_SUC;
>> +    else
>> +        prot_bits |= _CACHE_CC;
>> +
>> +    if (writeable) {
>> +        prot_bits |= _PAGE_WRITE;
>> +        if (write)
>> +            prot_bits |= __WRITEABLE;
>> +    }
>> +
>> +    /* Ensure page tables are allocated */
>> +    spin_lock(&kvm->mmu_lock);
>> +    ptep = kvm_populate_gpa(kvm, memcache, gpa);
>> +    new_pte = pfn_pte(pfn, __pgprot(prot_bits));
>> +    kvm_set_pte(ptep, new_pte);
>> +
>> +    err = 0;
>> +    spin_unlock(&kvm->mmu_lock);
>> +
>> +    if (prot_bits & _PAGE_DIRTY) {
>> +        mark_page_dirty(kvm, gfn);
>> +        kvm_set_pfn_dirty(pfn);
>> +    }
>> +
>> +    kvm_set_pfn_accessed(pfn);
>> +    kvm_release_pfn_clean(pfn);
>> +out:
>> +    srcu_read_unlock(&kvm->srcu, srcu_idx);
>> +    return err;
>> +}
>> +
>> +int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>> +{
>> +    int ret;
>> +
>> +    ret = kvm_map_page(vcpu, gpa, write);
>> +    if (ret)
>> +        return ret;
>> +
>> +    /* Invalidate this entry in the TLB */
>> +    return kvm_flush_tlb_gpa(vcpu, gpa);
>> +}
>> +
>> +void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
>> +{
>> +
>> +}
>> +
>> +int kvm_arch_prepare_memory_region(struct kvm *kvm,
>> +                   const struct kvm_memory_slot *old,
>> +                   struct kvm_memory_slot *new,
>> +                   enum kvm_mr_change change)
>> +{
>> +    return 0;
>> +}
>> +
>> +void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
>> +                    const struct kvm_memory_slot *memslot)
>> +{
>> +    kvm_flush_remote_tlbs(kvm);
>> +}
>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile
  2023-09-07 20:10   ` WANG Xuerui
  2023-09-08  1:40     ` Huacai Chen
@ 2023-09-12  9:47     ` zhaotianrui
  1 sibling, 0 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-12  9:47 UTC (permalink / raw)
  To: WANG Xuerui, linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, Greg Kroah-Hartman, loongarch,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo,
	Xi Ruoyao, kernel test robot


On 2023/9/8 at 4:10 AM, WANG Xuerui wrote:
>
> On 8/31/23 16:30, Tianrui Zhao wrote:
>> Enable LoongArch kvm config and add the makefile to support build kvm
>> module.
>>
>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>> Reported-by: kernel test robot <lkp@intel.com>
>> Link: 
>> https://lore.kernel.org/oe-kbuild-all/202304131526.iXfLaVZc-lkp@intel.com/
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   arch/loongarch/Kbuild                      |  1 +
>>   arch/loongarch/Kconfig                     |  3 ++
>>   arch/loongarch/configs/loongson3_defconfig |  2 +
>>   arch/loongarch/kvm/Kconfig                 | 45 ++++++++++++++++++++++
>>   arch/loongarch/kvm/Makefile                | 22 +++++++++++
>>   5 files changed, 73 insertions(+)
>>   create mode 100644 arch/loongarch/kvm/Kconfig
>>   create mode 100644 arch/loongarch/kvm/Makefile
>>
>> diff --git a/arch/loongarch/Kbuild b/arch/loongarch/Kbuild
>> index b01f5cdb27..40be8a1696 100644
>> --- a/arch/loongarch/Kbuild
>> +++ b/arch/loongarch/Kbuild
>> @@ -2,6 +2,7 @@ obj-y += kernel/
>>   obj-y += mm/
>>   obj-y += net/
>>   obj-y += vdso/
>> +obj-y += kvm/
> Do we want to keep the list alphabetically sorted here?
It could be re-sorted alphabetically.
>>     # for cleaning
>>   subdir- += boot
>> diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
>> index ecf282dee5..7f2f7ccc76 100644
>> --- a/arch/loongarch/Kconfig
>> +++ b/arch/loongarch/Kconfig
>> @@ -123,6 +123,7 @@ config LOONGARCH
>>       select HAVE_KPROBES
>>       select HAVE_KPROBES_ON_FTRACE
>>       select HAVE_KRETPROBES
>> +    select HAVE_KVM
>>       select HAVE_MOD_ARCH_SPECIFIC
>>       select HAVE_NMI
>>       select HAVE_PCI
>> @@ -650,3 +651,5 @@ source "kernel/power/Kconfig"
>>   source "drivers/acpi/Kconfig"
>>     endmenu
>> +
>> +source "arch/loongarch/kvm/Kconfig"
>> diff --git a/arch/loongarch/configs/loongson3_defconfig b/arch/loongarch/configs/loongson3_defconfig
>> index d64849b4cb..7acb4ae7af 100644
>> --- a/arch/loongarch/configs/loongson3_defconfig
>> +++ b/arch/loongarch/configs/loongson3_defconfig
>> @@ -63,6 +63,8 @@ CONFIG_EFI_ZBOOT=y
>>   CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
>>   CONFIG_EFI_CAPSULE_LOADER=m
>>   CONFIG_EFI_TEST=m
>> +CONFIG_VIRTUALIZATION=y
>> +CONFIG_KVM=m
>>   CONFIG_MODULES=y
>>   CONFIG_MODULE_FORCE_LOAD=y
>>   CONFIG_MODULE_UNLOAD=y
>> diff --git a/arch/loongarch/kvm/Kconfig b/arch/loongarch/kvm/Kconfig
>> new file mode 100644
>> index 0000000000..bf7d6e7cde
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/Kconfig
>> @@ -0,0 +1,45 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +#
>> +# KVM configuration
>> +#
>> +
>> +source "virt/kvm/Kconfig"
>> +
>> +menuconfig VIRTUALIZATION
>> +    bool "Virtualization"
>> +    help
>> +      Say Y here to get to see options for using your Linux host to run
>> +      other operating systems inside virtual machines (guests).
>> +      This option alone does not add any kernel code.
>> +
>> +      If you say N, all options in this submenu will be skipped and
>> +      disabled.
>> +
>> +if VIRTUALIZATION
>> +
>> +config AS_HAS_LVZ_EXTENSION
>> +    def_bool $(as-instr,hvcl 0)
>> +
>> +config KVM
>> +    tristate "Kernel-based Virtual Machine (KVM) support"
>> +    depends on HAVE_KVM
>> +    depends on AS_HAS_LVZ_EXTENSION
>> +    select MMU_NOTIFIER
>> +    select ANON_INODES
>> +    select PREEMPT_NOTIFIERS
>> +    select KVM_MMIO
>> +    select KVM_GENERIC_DIRTYLOG_READ_PROTECT
>> +    select KVM_GENERIC_HARDWARE_ENABLING
>> +    select KVM_XFER_TO_GUEST_WORK
>> +    select HAVE_KVM_DIRTY_RING_ACQ_REL
>> +    select HAVE_KVM_VCPU_ASYNC_IOCTL
>> +    select HAVE_KVM_EVENTFD
>> +    select SRCU
> Make the list of selects also alphabetically sorted?
It could also be re-sorted alphabetically.
>> +    help
>> +      Support hosting virtualized guest machines using hardware
>> +      virtualization extensions. You will need a fairly processor
>> +      equipped with virtualization extensions.
>
> The word "fairly" seems extraneous here, and can be simply dropped.
Thanks, I will remove this "fairly" word.
>
> (I suppose you forgot to delete it after tweaking the original 
> sentence, that came from arch/x86/kvm: "You will need a fairly recent 
> processor ..." -- all LoongArch processors are recent!)
>
>> +
>> +      If unsure, say N.
>> +
>> +endif # VIRTUALIZATION
>> diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
>> new file mode 100644
>> index 0000000000..2335e873a6
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/Makefile
>> @@ -0,0 +1,22 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +#
>> +# Makefile for LOONGARCH KVM support
> "LoongArch" -- you may want to check the entire patch series for such 
> ALL-CAPS references to LoongArch in natural language paragraphs, they 
> all want to be spelled "LoongArch".
Thanks, I will fix it.
>> +#
>> +
>> +ccflags-y += -I $(srctree)/$(src)
>> +
>> +include $(srctree)/virt/kvm/Makefile.kvm
>> +
>> +obj-$(CONFIG_KVM) += kvm.o
>> +
>> +kvm-y += main.o
>> +kvm-y += vm.o
>> +kvm-y += vmid.o
>> +kvm-y += tlb.o
>> +kvm-y += mmu.o
>> +kvm-y += vcpu.o
>> +kvm-y += exit.o
>> +kvm-y += interrupt.o
>> +kvm-y += timer.o
>> +kvm-y += switch.o
>> +kvm-y += csr_ops.o
> I'd suggest sorting this list too to better avoid editing conflicts in 
> the future.
I will also sort them alphabetically.

Thanks
Tianrui Zhao


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v20 27/30] LoongArch: KVM: Implement vcpu world switch
  2023-09-07 20:04   ` WANG Xuerui
@ 2023-09-12  9:55     ` zhaotianrui
  0 siblings, 0 replies; 56+ messages in thread
From: zhaotianrui @ 2023-09-12  9:55 UTC (permalink / raw)
  To: WANG Xuerui, linux-kernel, kvm
  Cc: Paolo Bonzini, Huacai Chen, Greg Kroah-Hartman, loongarch,
	Jens Axboe, Mark Brown, Alex Deucher, Oliver Upton, maobibo,
	Xi Ruoyao


On 2023/9/8 at 4:04 AM, WANG Xuerui wrote:
>
> On 8/31/23 16:30, Tianrui Zhao wrote:
>> Implement LoongArch vcpu world switch, including vcpu enter guest and
>> vcpu exit from guest, both operations need to save or restore the host
>> and guest registers.
>>
>> Reviewed-by: Bibo Mao <maobibo@loongson.cn>
>> Signed-off-by: Tianrui Zhao <zhaotianrui@loongson.cn>
>> ---
>>   arch/loongarch/kernel/asm-offsets.c |  32 ++++
>>   arch/loongarch/kvm/switch.S         | 255 ++++++++++++++++++++++++++++
>>   2 files changed, 287 insertions(+)
>>   create mode 100644 arch/loongarch/kvm/switch.S
>>
>> diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
>> index 505e4bf596..d4bbaa74c1 100644
>> --- a/arch/loongarch/kernel/asm-offsets.c
>> +++ b/arch/loongarch/kernel/asm-offsets.c
>> @@ -9,6 +9,7 @@
>>   #include <linux/mm.h>
>>   #include <linux/kbuild.h>
>>   #include <linux/suspend.h>
>> +#include <linux/kvm_host.h>
>>   #include <asm/cpu-info.h>
>>   #include <asm/ptrace.h>
>>   #include <asm/processor.h>
>> @@ -285,3 +286,34 @@ void output_fgraph_ret_regs_defines(void)
>>       BLANK();
>>   }
>>   #endif
>> +
>> +static void __used output_kvm_defines(void)
>> +{
>> +    COMMENT(" KVM/LOONGARCH Specific offsets. ");
> "LoongArch"?
Thanks, I will fix it.
>> +
>> +    OFFSET(VCPU_FCSR0, kvm_vcpu_arch, fpu.fcsr);
>> +    OFFSET(VCPU_FCC, kvm_vcpu_arch, fpu.fcc);
>> +    BLANK();
>> +
>> +    OFFSET(KVM_VCPU_ARCH, kvm_vcpu, arch);
>> +    OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm);
>> +    OFFSET(KVM_VCPU_RUN, kvm_vcpu, run);
>> +    BLANK();
>> +
>> +    OFFSET(KVM_ARCH_HSP, kvm_vcpu_arch, host_sp);
>> +    OFFSET(KVM_ARCH_HTP, kvm_vcpu_arch, host_tp);
>> +    OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit);
>> +    OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd);
>> +    OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry);
>> +    OFFSET(KVM_ARCH_GPC, kvm_vcpu_arch, pc);
>> +    OFFSET(KVM_ARCH_GGPR, kvm_vcpu_arch, gprs);
>> +    OFFSET(KVM_ARCH_HESTAT, kvm_vcpu_arch, host_estat);
>> +    OFFSET(KVM_ARCH_HBADV, kvm_vcpu_arch, badv);
>> +    OFFSET(KVM_ARCH_HBADI, kvm_vcpu_arch, badi);
>> +    OFFSET(KVM_ARCH_HECFG, kvm_vcpu_arch, host_ecfg);
>> +    OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry);
>> +    OFFSET(KVM_ARCH_HPERCPU, kvm_vcpu_arch, host_percpu);
>> +
>> +    OFFSET(KVM_GPGD, kvm, arch.pgd);
>> +    BLANK();
>> +}
>> diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S
>> new file mode 100644
>> index 0000000000..f637fcd56c
>> --- /dev/null
>> +++ b/arch/loongarch/kvm/switch.S
>> @@ -0,0 +1,255 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
>> + */
>> +
>> +#include <linux/linkage.h>
>> +#include <asm/stackframe.h>
>> +#include <asm/asm.h>
>> +#include <asm/asmmacro.h>
>> +#include <asm/regdef.h>
>> +#include <asm/loongarch.h>
>> +
>> +#define PT_GPR_OFFSET(x)    (PT_R0 + 8*x)
>> +#define GGPR_OFFSET(x)        (KVM_ARCH_GGPR + 8*x)
>> +
>> +.macro kvm_save_host_gpr base
>> +    .irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
>> +    st.d    $r\n, \base, PT_GPR_OFFSET(\n)
>> +    .endr
>> +.endm
>> +
>> +.macro kvm_restore_host_gpr base
>> +    .irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
>> +    ld.d    $r\n, \base, PT_GPR_OFFSET(\n)
>> +    .endr
>> +.endm
>> +
>> +/*
>> + * save and restore all gprs except base register,
>> + * and default value of base register is a2.
>> + */
>> +.macro kvm_save_guest_gprs base
>> +    .irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
>> +    st.d    $r\n, \base, GGPR_OFFSET(\n)
>> +    .endr
>> +.endm
>> +
>> +.macro kvm_restore_guest_gprs base
>> +    .irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
>> +    ld.d    $r\n, \base, GGPR_OFFSET(\n)
>> +    .endr
>> +.endm
>> +
>> +/*
>> + * prepare switch to guest, save host reg and restore guest reg.
>> + * a2: kvm_vcpu_arch, don't touch it until 'ertn'
>> + * t0, t1: temp register
>> + */
>> +.macro kvm_switch_to_guest
>> +    /* set host excfg.VS=0, all exceptions share one exception entry */
>> +    csrrd        t0, LOONGARCH_CSR_ECFG
>> +    bstrins.w    t0, zero, CSR_ECFG_VS_SHIFT_END, CSR_ECFG_VS_SHIFT
>> +    csrwr        t0, LOONGARCH_CSR_ECFG
>> +
>> +    /* Load up the new EENTRY */
>> +    ld.d    t0, a2, KVM_ARCH_GEENTRY
>> +    csrwr    t0, LOONGARCH_CSR_EENTRY
>> +
>> +    /* Set Guest ERA */
>> +    ld.d    t0, a2, KVM_ARCH_GPC
>> +    csrwr    t0, LOONGARCH_CSR_ERA
>> +
>> +    /* Save host PGDL */
>> +    csrrd    t0, LOONGARCH_CSR_PGDL
>> +    st.d    t0, a2, KVM_ARCH_HPGD
>> +
>> +    /* Switch to kvm */
>> +    ld.d    t1, a2, KVM_VCPU_KVM - KVM_VCPU_ARCH
>> +
>> +    /* Load guest PGDL */
>> +    li.w    t0, KVM_GPGD
>> +    ldx.d   t0, t1, t0
>> +    csrwr    t0, LOONGARCH_CSR_PGDL
>> +
>> +    /* Mix GID and RID */
>> +    csrrd        t1, LOONGARCH_CSR_GSTAT
>> +    bstrpick.w    t1, t1, CSR_GSTAT_GID_SHIFT_END, CSR_GSTAT_GID_SHIFT
>> +    csrrd        t0, LOONGARCH_CSR_GTLBC
>> +    bstrins.w    t0, t1, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
>> +    csrwr        t0, LOONGARCH_CSR_GTLBC
>> +
>> +    /*
>> +     * Switch to guest:
>> +     *  GSTAT.PGM = 1, ERRCTL.ISERR = 0, TLBRPRMD.ISTLBR = 0
>> +     *  ertn
>> +     */
>> +
>> +    /*
>> +     * Enable interrupts in root mode with the coming ertn so that host
>> +     * interrupts can be serviced while the VM runs.
>> +     * The guest crmd comes from the separate gcsr_CRMD register.
>> +     */
>> +    ori    t0, zero, CSR_PRMD_PIE
> Use "li.w" like the place several lines before?
Thanks for the advice. This issue has been discussed before, and the
conclusion was that there is no need to replace "ori" with a
pseudo-instruction like "li.w", as they have the same meaning here.
>> +    csrxchg    t0, t0, LOONGARCH_CSR_PRMD
>> +
>> +    /* Set PVM bit to setup ertn to guest context */
>> +    ori    t0, zero, CSR_GSTAT_PVM
> Similarly here...
>> +    csrxchg    t0, t0, LOONGARCH_CSR_GSTAT
>> +
>> +    /* Load Guest gprs */
>> +    kvm_restore_guest_gprs a2
>> +    /* Load KVM_ARCH register */
>> +    ld.d    a2, a2,    (KVM_ARCH_GGPR + 8 * REG_A2)
>> +
>> +    ertn
>> +.endm
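The `bstrpick.w`/`bstrins.w` pair in the "Mix GID and RID" step extracts the GID bitfield from GSTAT and inserts it into GTLBC's TGID field. A hedged Python model of those two bitfield instructions (the field positions below are placeholders for illustration only — the real values come from CSR_GSTAT_GID_SHIFT(_END) and CSR_GTLBC_TGID_SHIFT(_END)):

```python
# Model of LoongArch bstrpick.w / bstrins.w as used to copy GSTAT.GID into
# GTLBC.TGID.

def bstrpick(val, msb, lsb):
    """Extract bits [msb:lsb] of val, right-aligned (like bstrpick.w)."""
    width = msb - lsb + 1
    return (val >> lsb) & ((1 << width) - 1)

def bstrins(dst, src, msb, lsb):
    """Insert the low bits of src into dst[msb:lsb] (like bstrins.w)."""
    width = msb - lsb + 1
    mask = ((1 << width) - 1) << lsb
    return (dst & ~mask) | ((src << lsb) & mask)

# Assumed layout for illustration: GID at GSTAT[23:16], TGID at GTLBC[23:16].
gstat = 0x00AB0000                                  # GID field holds 0xAB
gtlbc = bstrins(0, bstrpick(gstat, 23, 16), 23, 16)
```

The net effect is that TLB instructions executed on behalf of the guest are tagged with the guest's ID while the rest of GTLBC is preserved.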
>> +
>> +    /*
>> +     * Exception entry for general exceptions from guest mode:
>> +     *  - IRQs are disabled
>> +     *  - kernel privilege in root mode
>> +     *  - page mode kept unchanged from the previous prmd in root mode
>> +     *  - FIXME: a TLB exception cannot happen here since the TLB-related
>> +     *           registers (pgd table, vmid registers etc.) are still in
>> +     *           guest mode; this will be fixed once hw page walk is enabled
>> +     * Load kvm_vcpu from the reserved CSR KVM_VCPU_KS, and save a2 to
>> +     * KVM_TEMP_KS.
>> +     */
>> +    .text
>> +    .cfi_sections    .debug_frame
>> +SYM_CODE_START(kvm_vector_entry)
>> +    csrwr    a2,   KVM_TEMP_KS
>> +    csrrd    a2,   KVM_VCPU_KS
>> +    addi.d    a2,   a2, KVM_VCPU_ARCH
>> +
>> +    /* After saving the gprs, any gpr is free to use */
>> +    kvm_save_guest_gprs a2
>> +    /* Save guest a2 */
>> +    csrrd    t0,    KVM_TEMP_KS
>> +    st.d    t0,    a2,    (KVM_ARCH_GGPR + 8 * REG_A2)
>> +
>> +    /* a2: kvm_vcpu_arch, a1 is free to use */
>> +    csrrd    s1,   KVM_VCPU_KS
>> +    ld.d    s0,   s1, KVM_VCPU_RUN
>> +
>> +    csrrd    t0,   LOONGARCH_CSR_ESTAT
>> +    st.d    t0,   a2, KVM_ARCH_HESTAT
>> +    csrrd    t0,   LOONGARCH_CSR_ERA
>> +    st.d    t0,   a2, KVM_ARCH_GPC
>> +    csrrd    t0,   LOONGARCH_CSR_BADV
>> +    st.d    t0,   a2, KVM_ARCH_HBADV
>> +    csrrd    t0,   LOONGARCH_CSR_BADI
>> +    st.d    t0,   a2, KVM_ARCH_HBADI
>> +
>> +    /* Restore host excfg.VS */
>> +    csrrd    t0, LOONGARCH_CSR_ECFG
>> +    ld.d    t1, a2, KVM_ARCH_HECFG
>> +    or    t0, t0, t1
>> +    csrwr    t0, LOONGARCH_CSR_ECFG
>> +
>> +    /* Restore host eentry */
>> +    ld.d    t0, a2, KVM_ARCH_HEENTRY
>> +    csrwr    t0, LOONGARCH_CSR_EENTRY
>> +
>> +    /* restore host pgd table */
>> +    ld.d    t0, a2, KVM_ARCH_HPGD
>> +    csrwr   t0, LOONGARCH_CSR_PGDL
>> +
>> +    /* Clear PVM bit so the next ertn returns to root mode by default */
>> +    ori    t0, zero, CSR_GSTAT_PVM
> And here.
>> +    csrxchg    zero, t0, LOONGARCH_CSR_GSTAT
>> +    /*
>> +     * Clear GTLBC.TGID field
>> +     *       0: for root tlb update in future tlb instr
>> +     *  others: for guest tlb update like gpa to hpa in future tlb instr
>> +     */
>> +    csrrd    t0, LOONGARCH_CSR_GTLBC
>> +    bstrins.w    t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
>> +    csrwr    t0, LOONGARCH_CSR_GTLBC
>> +    ld.d    tp, a2, KVM_ARCH_HTP
>> +    ld.d    sp, a2, KVM_ARCH_HSP
>> +    /* restore per cpu register */
>> +    ld.d    u0, a2, KVM_ARCH_HPERCPU
>> +    addi.d    sp, sp, -PT_SIZE
>> +
>> +    /* Prepare to handle the exception */
>> +    or    a0, s0, zero
>> +    or    a1, s1, zero
> Similarly "move X, Y" should be clearer here.
>> +    ld.d    t8, a2, KVM_ARCH_HANDLE_EXIT
>> +    jirl    ra, t8, 0
>> +
>> +    or    a2, s1, zero
>> +    addi.d    a2, a2, KVM_VCPU_ARCH
>> +
>> +    /* resume host when ret <= 0 */
>> +    bge    zero, a0, ret_to_host
> "blez a0, ret_to_host"
>> +
>> +    /*
>> +     * Return to the guest.
>> +     * Save the per-cpu register again; we may have been migrated to
>> +     * another cpu.
>> +     */
>> +    st.d    u0, a2, KVM_ARCH_HPERCPU
>> +
>> +    /* Save kvm_vcpu to kscratch */
>> +    csrwr    s1, KVM_VCPU_KS
>> +    kvm_switch_to_guest
>> +
>> +ret_to_host:
>> +    ld.d    a2, a2, KVM_ARCH_HSP
>> +    addi.d  a2, a2, -PT_SIZE
>> +    kvm_restore_host_gpr    a2
>> +    jr      ra
>> +
>> +SYM_INNER_LABEL(kvm_vector_entry_end, SYM_L_LOCAL)
>> +SYM_CODE_END(kvm_vector_entry)
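The control flow above — enter the guest, take an exception back into `kvm_vector_entry`, call the exit handler, then either re-enter the guest or return to the host depending on the handler's return value — can be sketched as a minimal Python loop. All names here are illustrative, not the kernel's:

```python
# Minimal sketch of the world-switch loop implemented by kvm_enter_guest /
# kvm_vector_entry: run the guest until an exit, call the exit handler, and
# re-enter the guest while the handler returns > 0. This mirrors
# "bge zero, a0, ret_to_host", i.e. return to the host when ret <= 0.

def run_vcpu(exit_handler, exits):
    """exits: iterable of exit reasons; exit_handler(reason) -> int."""
    for reason in exits:
        ret = exit_handler(reason)
        if ret <= 0:
            return ret        # ret <= 0: return to host with this code
        # ret > 0: save per-cpu state again and re-enter the guest
    return 0

# Example: two exits handled in-kernel, then an mmio exit that needs
# userspace (so the loop returns to the host).
def handler(reason):
    return 1 if reason in ("timer", "idle") else 0

result = run_vcpu(handler, ["timer", "idle", "mmio"])
```

The assembly keeps this loop implicit: `kvm_switch_to_guest` ends in `ertn`, and the next iteration only begins when the guest traps back to `kvm_vector_entry`.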
>> +
>> +/*
>> + * int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu)
>> + *
>> + * @register_param:
>> + *  a0: kvm_run* run
>> + *  a1: kvm_vcpu* vcpu
>> + */
>> +SYM_FUNC_START(kvm_enter_guest)
>> +    /* Allocate space on the stack */
>> +    addi.d    a2, sp, -PT_SIZE
>> +    /* save host gprs */
>> +    kvm_save_host_gpr a2
>> +
>> +    /* Save host crmd, prmd csrs to the stack */
>> +    csrrd    a3, LOONGARCH_CSR_CRMD
>> +    st.d    a3, a2, PT_CRMD
>> +    csrrd    a3, LOONGARCH_CSR_PRMD
>> +    st.d    a3, a2, PT_PRMD
>> +
>> +    addi.d    a2, a1, KVM_VCPU_ARCH
>> +    st.d    sp, a2, KVM_ARCH_HSP
>> +    st.d    tp, a2, KVM_ARCH_HTP
>> +    /* Save per cpu register */
>> +    st.d    u0, a2, KVM_ARCH_HPERCPU
>> +
>> +    /* Save kvm_vcpu to kscratch */
>> +    csrwr    a1, KVM_VCPU_KS
>> +    kvm_switch_to_guest
>> +SYM_INNER_LABEL(kvm_enter_guest_end, SYM_L_LOCAL)
>> +SYM_FUNC_END(kvm_enter_guest)
>> +
>> +SYM_FUNC_START(kvm_save_fpu)
>> +    fpu_save_csr    a0 t1
>> +    fpu_save_double a0 t1
>> +    fpu_save_cc    a0 t1 t2
>> +    jr              ra
>> +SYM_FUNC_END(kvm_save_fpu)
>> +
>> +SYM_FUNC_START(kvm_restore_fpu)
>> +    fpu_restore_double a0 t1
>> +    fpu_restore_csr    a0 t1
> This needs to become "fpu_restore_csr a0 t1 t2" after commit 
> bd3c5798484a ("LoongArch: Add Loongson Binary Translation (LBT) 
> extension support") which is slated for Linux 6.6 and already inside 
> linux-next.
I will update it.

Thanks
Tianrui Zhao
>> +    fpu_restore_cc       a0 t1 t2
>> +    jr                 ra
>> +SYM_FUNC_END(kvm_restore_fpu)
>> +
>> +    .section ".rodata"
>> +SYM_DATA(kvm_vector_size, .quad kvm_vector_entry_end - kvm_vector_entry)
>> +SYM_DATA(kvm_enter_guest_size, .quad kvm_enter_guest_end - kvm_enter_guest)




Thread overview: 56+ messages
2023-08-31  8:29 [PATCH v20 00/30] Add KVM LoongArch support Tianrui Zhao
2023-08-31  8:29 ` [PATCH v20 01/30] LoongArch: KVM: Add kvm related header files Tianrui Zhao
2023-09-11  4:59   ` Huacai Chen
2023-09-11  9:41     ` zhaotianrui
2023-08-31  8:29 ` [PATCH v20 02/30] LoongArch: KVM: Implement kvm module related interface Tianrui Zhao
2023-08-31  8:29 ` [PATCH v20 03/30] LoongArch: KVM: Implement kvm hardware enable, disable interface Tianrui Zhao
2023-08-31  8:29 ` [PATCH v20 04/30] LoongArch: KVM: Implement VM related functions Tianrui Zhao
2023-08-31  8:29 ` [PATCH v20 05/30] LoongArch: KVM: Add vcpu related header files Tianrui Zhao
2023-09-11  8:07   ` Huacai Chen
2023-09-12  8:26     ` zhaotianrui
2023-08-31  8:29 ` [PATCH v20 06/30] LoongArch: KVM: Implement vcpu create and destroy interface Tianrui Zhao
2023-08-31  8:29 ` [PATCH v20 07/30] LoongArch: KVM: Implement vcpu run interface Tianrui Zhao
2023-08-31  8:29 ` [PATCH v20 08/30] LoongArch: KVM: Implement vcpu handle exit interface Tianrui Zhao
2023-08-31  8:29 ` [PATCH v20 09/30] LoongArch: KVM: Implement vcpu get, vcpu set registers Tianrui Zhao
2023-09-11  9:03   ` Huacai Chen
2023-09-11 10:03     ` zhaotianrui
2023-09-11 10:13       ` zhaotianrui
2023-09-11 11:49       ` Huacai Chen
2023-09-12  2:41         ` bibo mao
2023-08-31  8:30 ` [PATCH v20 10/30] LoongArch: KVM: Implement vcpu ENABLE_CAP ioctl interface Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 11/30] LoongArch: KVM: Implement fpu related operations for vcpu Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 12/30] LoongArch: KVM: Implement vcpu interrupt operations Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 13/30] LoongArch: KVM: Implement misc vcpu related interfaces Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 14/30] LoongArch: KVM: Implement vcpu load and vcpu put operations Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 15/30] LoongArch: KVM: Implement vcpu status description Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 16/30] LoongArch: KVM: Implement update VM id function Tianrui Zhao
2023-09-11 10:00   ` Huacai Chen
2023-09-11 10:23     ` bibo mao
2023-09-12  3:51       ` Huacai Chen
2023-08-31  8:30 ` [PATCH v20 17/30] LoongArch: KVM: Implement virtual machine tlb operations Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 18/30] LoongArch: KVM: Implement vcpu timer operations Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 19/30] LoongArch: KVM: Implement kvm mmu operations Tianrui Zhao
2023-09-07 19:57   ` WANG Xuerui
2023-09-12  9:42     ` zhaotianrui
2023-08-31  8:30 ` [PATCH v20 20/30] LoongArch: KVM: Implement handle csr excption Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 21/30] LoongArch: KVM: Implement handle iocsr exception Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 22/30] LoongArch: KVM: Implement handle idle exception Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 23/30] LoongArch: KVM: Implement handle gspr exception Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 24/30] LoongArch: KVM: Implement handle mmio exception Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 25/30] LoongArch: KVM: Implement handle fpu exception Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 26/30] LoongArch: KVM: Implement kvm exception vector Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 27/30] LoongArch: KVM: Implement vcpu world switch Tianrui Zhao
2023-09-07 20:04   ` WANG Xuerui
2023-09-12  9:55     ` zhaotianrui
2023-08-31  8:30 ` [PATCH v20 28/30] LoongArch: KVM: Enable kvm config and add the makefile Tianrui Zhao
2023-09-07 20:10   ` WANG Xuerui
2023-09-08  1:40     ` Huacai Chen
2023-09-08  1:49       ` bibo mao
2023-09-08  1:54         ` Huacai Chen
2023-09-12  9:47     ` zhaotianrui
2023-09-11  7:30   ` WANG Xuerui
2023-09-12  1:57     ` zhaotianrui
2023-08-31  8:30 ` [PATCH v20 29/30] LoongArch: KVM: Supplement kvm document about LoongArch-specific part Tianrui Zhao
2023-08-31  8:30 ` [PATCH v20 30/30] LoongArch: KVM: Add maintainers for LoongArch KVM Tianrui Zhao
2023-09-11  4:02 ` [PATCH v20 00/30] Add KVM LoongArch support Huacai Chen
2023-09-11  9:34   ` zhaotianrui
